Black Boxes, Systems, and Social Media Studies

Posted on Mar 16, 2013 in Data Visualization, Information Visualization, Methods, Modeling, Research | 0 comments

Black box: it's the common name for the flight data recorder we all know about…but did you know it's actually bright orange (so it can be found easily)?

For social media researchers, much of our research today is like the flight data recorder: we collect, store, and report data and analyses, but we follow the dictum printed on the outside and "do not open" the box.

We’re discovering this is a mistake.

By keeping the black box closed, we can create a misleading impression when we report our research results. We inhibit others from replicating our findings or testing the limits of our results if we do not fully disclose the details of our processes. We may also miss research questions entirely if we pass up the chance to explore the data by testing how sensitive our findings are to changes in our procedures. There are some things we can do from the outside, using approaches borrowed from systems theory and systems analysis, but all of us will improve our research as we make our methods more visible…as we open up the black box.

Let's look at some examples. In conducting research with social media data, it's helpful to think about the sequential ETL steps used in data warehousing systems: we Extract (data from streams or sources), Transform (the data by parsing it and adding metadata that let us address our research questions), and Load (the transformed data into an accessible dataset). And these are just the first steps, before we begin our analysis. At each step, small variations in the procedures or rules we use can significantly shift our later findings, the questions we are capable of answering, and even the questions we can imagine asking. For example, suppose we want to analyze Twitter messages. In extracting Twitter data, do we use the Twitter API? If so, do we collect the data in real time (streaming API), or do we run queries (search API) and receive a retrospective sample of tweets? If we opt not to use the API, we could turn to one of several developer-oriented or commercial services (e.g., Gnip) to get our data, but can we afford it? Each approach may have advantages, but the samples each produces may differ. And if the samples differ, can we be confident in our research results in each case?
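
To make those steps concrete, here is a minimal ETL sketch in R. It assumes tweets have already been captured as one JSON object per line (for example, from a streaming collector); the file names, field choices, and the jsonlite package are illustrative assumptions, not a description of our actual pipeline.

```r
library(jsonlite)  # fromJSON(); any JSON parser would do

## Extract: read raw tweet JSON, one record per line
raw_lines <- readLines("tweets_sample.json", warn = FALSE)
tweets    <- lapply(raw_lines, fromJSON)

## Transform: keep only the fields we need and add provenance metadata
parsed <- data.frame(
  id         = sapply(tweets, function(t) t$id_str),
  created_at = sapply(tweets, function(t) t$created_at),
  user       = sapply(tweets, function(t) t$user$screen_name),
  text       = sapply(tweets, function(t) t$text),
  is_retweet = sapply(tweets, function(t) grepl("^RT @", t$text)),
  stringsAsFactors = FALSE
)
parsed$collected_via <- "streaming"  # record which API produced this sample

## Load: store the transformed records as an accessible dataset
write.csv(parsed, "tweets_parsed.csv", row.names = FALSE)
```

A parallel run tagged with collected_via = "search" would let us compare the two samples directly and see how much the extraction choice alone shifts our results.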

Read More

Walking the data with Certeau and topic modeling

Posted on Feb 20, 2013 in Uncategorized | 3 comments

In The Practice of Everyday Life, Certeau describes the process of "walking the city," noting that the ways people experience the city are qualitatively different from what urban planners and sociologists are capable of measuring. I argue that this process of "walking a space" can be applied to the spaces of social media as well, particularly the spaces of discourse created by emergent hashtags. I'm also playing with MALLET, a tool for Latent Dirichlet Allocation (LDA) topic modeling of "big data" texts. I'm just getting started learning the computational tools needed for these "distant readings," but I've already discovered ways in which "walking the data" might inform our practice as researchers. Click through for an explanation of what I mean, an example or two of MALLET topic output, and how my own experience of "walking the data" as a lived event informs the analysis.
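
For readers curious what a MALLET run looks like, here is a hedged sketch driven from R via system(). The file names, paths, and the choice of 20 topics are illustrative assumptions, not the settings used for the hashtag corpus in this post.

```r
# Import a file of documents (one document per line) into MALLET's binary format.
system(paste(
  "bin/mallet import-file --input tweets_by_hashtag.txt",
  "--output tweets.mallet --keep-sequence --remove-stopwords"
))

# Train an LDA topic model; write the top words per topic and the
# per-document topic proportions used for a 'distant reading' of the corpus.
system(paste(
  "bin/mallet train-topics --input tweets.mallet --num-topics 20",
  "--output-topic-keys topic_keys.txt --output-doc-topics doc_topics.txt"
))
```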

Read More

46th HICSS Workshop

Posted on Feb 8, 2013 in Ethics, Methods, Research, Workshop | 0 comments

 

Workshop, Hawaii

Hawaii.

The name itself evokes images of tropical sun, warm waters, surfing, and relaxation.

So what are the people in this image doing inside, intent on looking at computer screens? Instead of savoring the sunshine and walking on the sand, here they sit.  Inside.  Hunched over laptops.  Interpreting a series of instructions to make sense of social media data.  Listening to the SoMe Lab team explain what they are seeing.  They are not behaving as you would imagine visitors to Hawaii behave.

These dedicated researchers are taking part in the workshop organized by the SoMe Lab team at HICSS46, held this past January in Wailea on Maui. As part of the workshop, they heard from the SoMe team about lessons the team has learned over the past fifteen months.

Read More

Visualizing threaded conversation volume and intensity

Posted on Jan 24, 2013 in Data Visualization, Information Visualization, R, r-project | 5 comments


As a researcher interested in information flows in digital environments, I often look for patterns in social trace data. For this discussion we can think of digital social trace data as the text that people post into threaded topics on forums, such as Reddit or a Talk page on Wikipedia. One way to find patterns in this kind of data is to make visualizations based on different quantifiable dimensions of the data: for example, total topic volume per day, volume per thread per day, and, possibly, the intensity of the discussion (as interpreted by qualitative researchers). In the remainder of this post I will note what we can learn from our visualization as well as its limitations, and then post the R code I used to make the plot.
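
The full code follows the "Read More" link; as a taste of the general approach, here is a hedged sketch in R using ggplot2 and simulated data in place of coded forum threads. The thread names, date range, and 1-to-5 intensity scores are illustrative assumptions.

```r
library(ggplot2)

set.seed(42)
# Simulated trace data: one row per post, with a thread id, a posting date,
# and an intensity score assigned by qualitative coders (1 = low, 5 = high).
posts <- data.frame(
  thread    = sample(paste0("thread_", 1:4), 500, replace = TRUE),
  day       = as.Date("2013-01-01") + sample(0:29, 500, replace = TRUE),
  intensity = sample(1:5, 500, replace = TRUE)
)

# Aggregate to posts per thread per day (volume) and mean intensity per thread per day.
vol   <- aggregate(cbind(volume = intensity) ~ thread + day, data = posts, FUN = length)
avg   <- aggregate(cbind(mean_intensity = intensity) ~ thread + day, data = posts, FUN = mean)
daily <- merge(vol, avg, by = c("thread", "day"))

# Bar height shows volume, fill shows mean intensity, one panel per thread.
ggplot(daily, aes(x = day, y = volume, fill = mean_intensity)) +
  geom_bar(stat = "identity") +
  facet_wrap(~ thread, ncol = 1) +
  labs(x = "Day", y = "Posts per day", fill = "Mean intensity")
```

On real data the intensity column would come from the qualitative coding described above rather than random draws.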

Read More

R Gauge Plots

Posted on Jan 17, 2013 in Data Visualization, Information Visualization, R, r-project | 9 comments

Gaston Sanchez's post on R-Bloggers inspired me to waste a bit of time. He wanted to replicate the Google Charts gauge widget. I modified his code (below) in some minor ways and made a function out of it so you can alter the look and feel of your gauge. Feel free to pilfer and modify the R code…
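
My modified version of Sanchez's function is behind the "Read More" link. As a rough illustration of the idea (not that code), here is a minimal gauge drawn with base R graphics; the argument names, the 270-degree sweep, and the colors are arbitrary choices.

```r
gauge <- function(value, min = 0, max = 100,
                  label = "Speed", needle.col = "red3", band.col = "grey90") {
  # Map [min, max] onto a 3/4 circle running from 225 to -45 degrees.
  theta <- function(v) (225 - 270 * (v - min) / (max - min)) * pi / 180

  plot.new()
  plot.window(xlim = c(-1.2, 1.2), ylim = c(-1.2, 1.2), asp = 1)

  # Background band between radius 1 and radius 0.8.
  tt <- seq(theta(min), theta(max), length.out = 200)
  polygon(c(cos(tt), 0.8 * rev(cos(tt))),
          c(sin(tt), 0.8 * rev(sin(tt))),
          col = band.col, border = NA)

  # Needle pointing at the current value.
  a <- theta(value)
  arrows(0, 0, 0.9 * cos(a), 0.9 * sin(a), col = needle.col, lwd = 3, length = 0.1)
  points(0, 0, pch = 19, cex = 1.5)

  # Label below the hub.
  text(0, -0.4, sprintf("%s: %g", label, value), cex = 1.1)
}

gauge(72, min = 0, max = 100, label = "CPU")
```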

 

Read More

Using R to visually compare the volume of different information sources

Posted on Jan 16, 2013 in Data Visualization, Information Visualization, Media, R, r-project, Research | 0 comments

A couple of weeks ago Bob wrote a post about a research note that was recently accepted to the iConference. In it we outline the beginnings of a research project in which we look at the interaction of different media platforms (Twitter and blogs) with more traditional sources. In this post I go through the R code we used to plot, and visually compare, the volume of different information sources.

The data for this example are randomly drawn from a Pareto distribution, so anyone should be able to just open the file, run it, and get plots. As in the last R example, I have used comments in the code to explain what I'm doing in the creation of these plots. After the code I give a brief introduction to the tool I use to select colors.
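
The post's own code follows the "Read More" link; as a hedged sketch of the general approach, here is a self-contained example that draws Pareto-distributed counts by inverse-transform sampling and compares three made-up sources with ggplot2. The source names, date range, and distribution parameters are illustrative assumptions.

```r
library(ggplot2)

set.seed(7)
# Pareto draws via inverse-transform sampling: x = xm / U^(1/alpha), U ~ Uniform(0, 1).
rpareto_draw <- function(n, xm = 1, alpha = 1.5) xm / runif(n)^(1 / alpha)

days <- as.Date("2012-11-01") + 0:59
volumes <- data.frame(
  day    = rep(days, times = 3),
  source = rep(c("Twitter", "Blogs", "Traditional media"), each = length(days)),
  volume = round(rpareto_draw(3 * length(days), xm = 10, alpha = 1.2))
)

# One line per source; a log scale keeps the heavy-tailed spikes from hiding the rest.
ggplot(volumes, aes(x = day, y = volume, colour = source)) +
  geom_line() +
  scale_y_log10() +
  labs(x = "Date", y = "Daily volume (log scale)", colour = "Source")
```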

Read More

Hi, HICSS46 Social Media Research Workshop participants!

Posted on Jan 7, 2013 in Uncategorized | 0 comments

Thanks for visiting our workshop at HICSS46, or just being curious after spotting a tweet!

We recently held a workshop on the lessons we have learned in working with our corpus of data collected around the Occupy Wall Street movement. We took folks through a mock research project and one approach to how researchers might "do" social media research, with hands-on examples. You'll find our "Working Document" detailing those lessons, along with the slides presented at the workshop, by clicking "Read More" below!

Read More