If you’ve read any of my previous posts, you know that I am constantly experimenting with different ways to represent and explore social network data with R. For example, in previous posts I’ve written about sonification of tweet data, animation of dynamic Twitter networks, and various ways to plot social networks (here and here). In each case the underlying idea is the same: explore the data under the assumption that sometimes just looking at something from a different point of view reveals something novel. In this post I will briefly discuss how to go from network data, to a 3D network model, to a 3D object, using R most of the way.
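The post's full pipeline isn't reproduced here, but as a minimal sketch of the final "model to object" step, the snippet below writes a network with 3D node coordinates out as a Wavefront OBJ file, a plain-text format most 3D tools can open. The random coordinates and edge list are stand-ins for a real 3D layout, not the post's actual data.

```r
# Minimal sketch (not the post's actual code): export a network with 3D node
# coordinates as a Wavefront OBJ file. Random data stands in for a real layout.
set.seed(42)
n      <- 10
coords <- matrix(runif(n * 3), ncol = 3)              # stand-in 3D layout
edges  <- cbind(sample(n, 15, replace = TRUE),
                sample(n, 15, replace = TRUE))
edges  <- edges[edges[, 1] != edges[, 2], , drop = FALSE]  # drop self-loops

con <- file("network.obj", "w")
# "v x y z" lines define vertices; "l i j" lines define edges (1-based indices)
writeLines(sprintf("v %f %f %f", coords[, 1], coords[, 2], coords[, 3]), con)
writeLines(sprintf("l %d %d", edges[, 1], edges[, 2]), con)
close(con)
```

The resulting `network.obj` can be opened in a 3D viewer or sliced for printing; a real workflow would substitute an actual force-directed 3D layout for the random coordinates.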
For social media researchers, much of our research today is like the flight data recorder: we collect, store, and report data and analyses, but we follow the dictum on the outside and “do not open” the box.
We’re discovering this is a mistake.
By keeping the black box closed, we can create a misleading impression when we report our research results. We inhibit others from replicating our findings or testing the limits of our results if we do not fully disclose the details of our processes. We may also miss the chance to ask new research questions if we pass up opportunities to explore the data by testing the sensitivity of our findings to changes in our research procedures. There are some things we can do from the outside—approaches borrowed from systems theory and systems analysis—but all of us will improve our research as we make our methods more visible…as we open up the black box.
Let’s look at some examples. In conducting research with social media data, it’s helpful to think about the sequential ETL steps in data warehousing systems. In following these steps, we: Extract (data from streams or sources), Transform (the data by parsing it and adding metadata that enable us to address our research questions), and Load (the transformed data into an accessible dataset). And these are just the first steps—before we begin our analysis. At each step, small variations in the procedures or rules we use can result in significant shifts in our later findings, in the questions we are capable of answering, and even in the questions we can imagine asking. For example, suppose we want to do an analysis of Twitter messages. In extracting Twitter data, do we use the Twitter API? If so, do we collect the data in real time (streaming API) or do we employ queries (search API), retrieving some retrospective tweets? If we opt not to use the API, we could use one of several developer-based or commercial services (e.g., Gnip) to get our data, but can we afford it? Each may have advantages, but the samples that result from each may be different. If the samples differ, can we be confident in our research results in each case?
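Since the Extract step requires API credentials, here is a hedged sketch of just the Transform step on two hypothetical raw tweet records: parsing timestamps and deriving metadata (retweet flag, hashtags) that later analyses will depend on. The sample records are invented for illustration.

```r
# Sketch of the Transform step on two hypothetical raw tweets (no API call).
raw <- data.frame(
  created_at = c("2011-11-18 15:30:00", "2011-11-18 16:05:00"),
  text       = c("Pepper spray at #UCDavis #OWS",
                 "RT @user: moving my money #BankTransferDay"),
  stringsAsFactors = FALSE
)

# Parse timestamps and add metadata that research questions will query later.
raw$created_at <- as.POSIXct(raw$created_at, tz = "UTC")
raw$is_retweet <- grepl("^RT @", raw$text)
raw$hashtags   <- regmatches(raw$text, gregexpr("#\\w+", raw$text))

# Load: in a real pipeline this data frame would go into a database or flat
# file; here we just inspect the transformed structure.
str(raw)
```

Note how even a small choice here—treating only messages that start with "RT @" as retweets—illustrates the point above: a different parsing rule produces a different sample, and potentially different findings.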
The name itself evokes images of tropical sun, warm waters, surfing, and relaxation.
So what are the people in this image doing inside, intent on looking at computer screens? Instead of savoring the sunshine and walking on the sand, here they sit. Inside. Hunched over laptops. Interpreting a series of instructions to make sense of social media data. Listening to the SoMe Lab team explain what they are seeing. They are not behaving as you imagine Hawaiian visitors would behave.
These dedicated researchers are taking part in the workshop organized by the SoMe Lab team at HICSS46, held this past January in Wailea on Maui. As a part of the workshop, they were hearing from the SoMe team about lessons the team has learned in the past fifteen months.
A couple of weeks ago Bob wrote a post about a research note that was recently accepted to the iConference. In it we outline the beginnings of a research project where we look at the interaction of different media platforms (Twitter and blogs) with more traditional sources. In this post I go through the R code we used to plot, and visually compare, the volume of different information sources.
The data for this example is randomly drawn from a Pareto distribution, so anyone should be able to just open the file, run it, and have plots. As I did in the last R example, I have used comments in the code to explain what I’m doing in the creation of these plots. After the code I give a brief introduction to the tool I use to select colors.
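As a rough, self-contained sketch of the idea (not the post's actual script), the snippet below draws daily volumes for two hypothetical sources from a Pareto distribution via inverse-transform sampling and overlays them on one plot. The parameter values and the hex colors (taken from a ColorBrewer palette) are illustrative choices, not the post's.

```r
# Sketch: simulate daily volumes for two hypothetical sources from a Pareto
# distribution (inverse-transform sampling; xm = scale, alpha = shape).
set.seed(1)
rpareto <- function(n, xm = 1, alpha = 2) xm / runif(n)^(1 / alpha)

days   <- 1:60
tweets <- rpareto(60, xm = 50, alpha = 1.5)   # higher baseline, heavier tail
blogs  <- rpareto(60, xm = 5,  alpha = 2.5)

# Overlay the two series; a log scale tames the Pareto spikes.
plot(days, tweets, type = "l", log = "y", col = "#1B9E77",
     xlab = "Day", ylab = "Items per day (log scale)")
lines(days, blogs, col = "#D95F02")
legend("topright", legend = c("Tweets", "Blogs"),
       col = c("#1B9E77", "#D95F02"), lty = 1)
```

A log-scaled y axis is a useful default for heavy-tailed volume data, since a handful of spike days would otherwise flatten the rest of the series.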
Recognizing patterns and rhythms in social media data
Wayne Gretzky is quoted as saying that a great hockey player plays where “the puck is going to be,” not where it is. Gretzky, like the great NBA point guards (think Magic Johnson or Mark Price), was quick to detect emerging patterns in movement and flows, then take advantage of what was about to happen. In our research efforts, we often try to detect patterns in order to explore what they may tell us about underlying processes.
The SoMe Lab is examining patterns in the movement and flows of information between and among social media platforms. We observe that traditional media news may inform or trigger information exchanges in the blogosphere or on Twitter, and vice versa. We want to look closely at these patterns to gain insights into phenomena such as virality, the birth and life cycle of interest networks, and the dynamics of a fluid cast of gatekeepers. The accompanying image illustrates the patterns that distinguish the volume of tweets, blog posts, and traditional news items following the pepper spraying incident at UC Davis on November 18, 2011.
This article lists the steps I take to create a network animation in R, provides some example source code that you can copy and modify for your own work, and starts a discussion about programming and visualization as an interpretive approach in research. Before I start, take a look at this network animation created with R and the igraph package. This animation is of a retweet network related to #BankTransferDay. Links (displayed as lines) are retweets, nodes (displayed as points) are user accounts. For each designated period of time (in this case, an hour), retweets are drawn and then fade out over 24 hours.
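To make the fade-out idea concrete, here is a minimal sketch using only base graphics (the animation in the post itself is built with igraph). Each edge carries a timestamp; in every rendered frame, edges younger than the fade window are drawn with an alpha value that decays linearly to zero. The random network, coordinates, and 24-hour frame step are invented for the demo.

```r
# Minimal sketch of the fade-out technique with base graphics (the post's
# animation uses igraph). Each edge has a time; in each frame its alpha
# decays over a 24-hour window, after which it disappears.
set.seed(7)
n      <- 8
coords <- cbind(runif(n), runif(n))                  # fixed node positions
edges  <- data.frame(from = sample(n, 20, replace = TRUE),
                     to   = sample(n, 20, replace = TRUE),
                     hour = sort(runif(20, 0, 48)))  # retweet times in hours

fade_window <- 24
for (t in seq(0, 72, by = 24)) {                     # one frame per time step
  pdf(sprintf("frame_%03d.pdf", t))
  plot(coords, pch = 19, xlab = "", ylab = "", axes = FALSE)
  age   <- t - edges$hour
  alive <- age >= 0 & age < fade_window              # edges still visible
  alpha <- 1 - age[alive] / fade_window              # linear fade to zero
  segments(coords[edges$from[alive], 1], coords[edges$from[alive], 2],
           coords[edges$to[alive], 1],   coords[edges$to[alive], 2],
           col = rgb(0, 0, 1, alpha))
  dev.off()
}
```

The individual frames can then be stitched into a video with an external tool; keeping node positions fixed across frames is what makes the edge churn readable as movement.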