
Original image by Dean Terry

Improving the Social Network Analysis Methodology

We have been collecting data for just shy of a year now, and have been developing the Twitter social network analysis methodology for a little longer than that. As you might recall, we have been following the mobile health conversation via the #mHealth hashtag and have finalised the collection of those data. The processing is almost finished, and we can now progress to the next stage of ethnography to further understand what we have collected.

We have been improving the methodology as we go, and the last round of assistance involved writing some code for the Gephi program. Recently, we have been talking with colleagues from the University of Wollongong's SMART Infrastructure Facility, who have been developing a collection process for Twitter data. Their project, CogniCity, relates to flooding information in Indonesia; however, Tom Holderness has been kind enough to share his work on GitHub.

When we install this JavaScript application, which runs on NodeJS, we will have an automated version of the manual process we have been struggling with for the past year. Further, the code is customisable, so researchers can query the Twitter Streaming API for the specific data they require. You can read more about the CogniCity NodeJS application on GitHub.
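For readers curious about what the automation replaces: the manual process boils down to repeatedly querying the API and de-duplicating the results. Here is a minimal Python sketch of that incremental logic — the function and field names are ours for illustration, not CogniCity's actual code:

```python
# Sketch of the incremental collection step: fold newly fetched tweets
# into the archive without duplicates, and track the highest id seen
# so the next API query can ask only for newer tweets (since_id).

def merge_new_tweets(archive, fetched):
    """Append newly fetched tweets to the archive, skipping duplicate ids."""
    seen = {t["id"] for t in archive}
    for tweet in fetched:
        if tweet["id"] not in seen:
            archive.append(tweet)
            seen.add(tweet["id"])
    return archive

def next_since_id(archive):
    """Highest tweet id collected so far (0 when the archive is empty)."""
    return max((t["id"] for t in archive), default=0)
```

Run on a schedule, these two steps give an append-only dataset with no manual de-duplication.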

If we can improve the processing speed further, we will have a research prototype that can be shared with other researchers who are interested in Twitter social network analysis – hopefully a post soonish will reveal this!


Accepted paper for #aaDH2014!

We’re in! We just received notice that our social network analysis paper exploring the informal policy actors of mHealth across the Twitter platform has been accepted for the Australasian Association of Digital Humanities (aaDH) conference 2014.

Here’s the abstract we will work from:

Scholarly interest in data privacy and the regulation of mobile Internet has intensified in recent years, particularly following Edward Snowden’s 2013 revelations about PRISM, the US government’s secret communications surveillance and data mining project. Much analysis has focused on the politics and architectures of data privacy regulation and network access. However, the surveillance moment also invites scrutiny of academic data gathering and mining online. In open governance movements such as Occupy there has already been considerable debate about the ethics of big data research, particularly where the aim is to track individuals’ online agency around political processes and policy activism. With that context in mind, this paper examines the methodological implications of conducting large-scale social network analysis using Twitter for mobile Internet policy research.

Mobile internet is emerging at the intersection of broadband internet, mobile telephony, digital television, and new locative and sensing media technologies. The policy issues around the development of this complex ecology include debates about spectrum allocation and network development, content production and code generation, and the design and operation of media and telecommunications technologies. However, not all of these discussions occur in formal regulatory settings such as International Telecommunication Union or World Summit on the Information Society meetings, and not all are between traditional policy actors. Increasingly, social media platforms such as Twitter and LinkedIn host new networks of expertise: informal multi-actor conversations about the future of mobile Internet that have the potential to influence formal policy processes, as occurred during the January 2012 SOPA/PIPA campaigns in the US.

As part of the three year Australian Research Council Discovery project Moving Media: Mobile internet and new policy modes, this research team is mapping and interpreting the interplay between these diverse policy actors in three areas of accelerating media development: digital news, mobile health and locative media. However research into informal policy networks and processes online presents interesting problems of scale, focus and interpretation, given the increased affordances for citizen participation within the international political arenas of social media.

To better understand who these online stakeholders might be in the mobile health field, and how they operate in relation to the normative policy and regulatory circuits, we have adopted a social network analysis methodology in order to track Twitter-based social relationships and debates. Using a series of hashtags, including #mhealth, #mobilehealth and #healthapps, to track ongoing policy-related exchanges, we have begun to identify who is influential in these spaces, what they are talking about, and how their input to debate may impact on mobile internet regulation.

This paper will outline that SNA approach and highlight some of the procedural and ethical concerns surrounding big data collection and analysis, which are consistent across contemporary digital humanities research. These concerns include how we can use big data harvesting and analysis tools to align quantitative with qualitative methods, how we can justify our research claims via these tools and how we might better understand and implement these innovative research methods within the academy. In particular the paper will interrogate the methodological suggestion that qualitative methods lead quantitative research, considering instead whether a more rigorous approach is to invert the quantitative/qualitative relationship.


Mapping the mobile health policy actors: Who is talking to whom on Twitter, and to what effect?

This is a methodological post on some social network analysis work we are developing for Moving Media. The premise for the SNA research is reasonably simple:

Task: Perform social network analysis around the Twitter conversations about the FDA’s proposed health apps guidelines, posted July 19th 2011:

Public brief: http://www.fda.gov/forconsumers/consumerupdates/ucm263332.htm

Document: http://www.regulations.gov/#!documentDetail;D=FDA-2011-D-0530-0001

Comments and Submissions: http://www.regulations.gov/#!docketBrowser;rpp=25;po=0;dct=PS;D=FDA-2011-D-0530;refD=FDA-2011-D-0530-0001

Aim: To map the dispersed network of actors discussing the FDA policy consultation process in social media channels, visualising their relative influence and communicative relationships.

After some initial Twitter research, we found the #FDAApps hashtag to be the conversation we wanted to analyse. The only drawback is that this conversation seems to be unreachable: the Twitter Search API returned nothing, even though the conversation is visible on the site (the Search API only indexes roughly the last week of tweets, which would explain why a 2011 conversation cannot be retrieved). Any suggestions on this would be appreciated. Following on from this, I ran a search across four conversations: #mhealth, #healthapps, #FDA and #apps. It is an experiment in both the methodology and the content.
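As a side note on querying the four conversations together: Twitter's search syntax accepts OR between terms, so a single request can cover all four hashtags. A hedged Python sketch (the endpoint reflects the v1.1 search API; authentication and the other query parameters are omitted):

```python
# Build one search URL covering several hashtag conversations at once,
# using Twitter's "OR" search syntax. Parameters beyond q and count,
# and the OAuth signing step, are deliberately left out of this sketch.
from urllib.parse import urlencode

def build_search_url(hashtags,
                     base="https://api.twitter.com/1.1/search/tweets.json"):
    query = " OR ".join("#" + h.lstrip("#") for h in hashtags)
    return base + "?" + urlencode({"q": query, "count": 100})

url = build_search_url(["mhealth", "healthapps", "FDA", "apps"])
```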

Here’s the breakdown on the process (and it gets a bit nerdy from here):

1. I tracked four Twitter conversations (#mhealth, #healthapps, #apps & #FDA), collected the data through the Twitter API, and processed it in Open Refine before moving into Gephi. I imported the .csv file into Open Refine to extract the @replies and the #hashtag conversations – a process of deleting much of the data to produce a .csv file Gephi likes. I then imported the data into Gephi, ran the Force Atlas and Fruchterman-Reingold layouts and ranked the labels by degree. Next, I ran a Network Diameter statistic across the network (Average Path length: 1.0508474576271187, Number of shortest paths: 236), which enabled me to colour the labels by betweenness centrality on a scale of 0–6, eccentricity 0–2 and closeness centrality 0–1.5. Finally, I ran a modularity statistic across it (Modularity: 0.790, Modularity with resolution: 0.790, Number of Communities: 18). 18 communities!
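The degree ranking step can also be reproduced outside Gephi, straight from the cleaned edge list that Open Refine produces. A small stdlib Python sketch, assuming each cleaned row is a (source, target) pair:

```python
# Degree = number of edges touching a node. Ranking nodes by degree
# reproduces Gephi's "rank labels by degree" step on a cleaned edge list.
from collections import Counter

def degree_ranking(edges):
    """Return (node, degree) pairs sorted from best- to least-connected."""
    degree = Counter()
    for source, target in edges:
        degree[source] += 1
        degree[target] += 1
    return degree.most_common()

# Example edge list in the cleaned (source, target) shape:
edges = [("@alice", "#mhealth"), ("@bob", "#mhealth"), ("@alice", "@bob")]
```

`degree_ranking(edges)[0]` then gives the best-connected node, which is what the label sizing in the visualisations reflects.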

 

2. I did this for each set of data – that is, #mhealth, #mobilehealth, #healthapps, #apps and #FDA. Each process produced a visualisation that shows the key conversation hashtags and the most significant people in those conversations. Here’s the preliminary analysis:

#healthapps conversation

#FDA conversation

#apps conversation

#mhealth conversation

3. I then combined the cleaned data from the four conversations to create a ‘super set’, to understand the broader ecology of the policy discussion around mhealth and health apps.
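The combining step is essentially a union of the cleaned edge lists with duplicates removed, since the same @reply or hashtag mention can surface in more than one conversation's data. A Python sketch of that merge:

```python
# Merge several cleaned edge lists into one 'super set', keeping the
# first occurrence of each (source, target) edge and dropping repeats
# that appear under more than one hashtag's collection.

def combine_conversations(*edge_lists):
    seen, combined = set(), []
    for edges in edge_lists:
        for edge in edges:
            if edge not in seen:
                seen.add(edge)
                combined.append(edge)
    return combined
```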

The combined conversation around healthapps, mhealth, apps and FDA

Preliminary analysis: What we know (and this is my first critical analysis of this process – it could change as I become more aware of what is going on here):

  • The conversation between the #FDA and #healthapps topics is stronger than the other two topics (#mhealth and #apps), judging by their proximity in the network
  • @Vanessa_Cacere is the most prominent Twitter user in #apps (she often retweets our tweets too!)
  • @referralIMD is prominent in #mhealth
  • @MaverickNY is prominent in #healthapps
  • The bluer the colour of an actor, the closer they are to the topic – ‘closeness centrality’
  • @Paul_Sonnier [https://twitter.com/Paul_Sonnier] is extremely significant in the overall conversation – ‘betweenness centrality’
  • There are probably some other significant terms here, like #digitalhealth, #breakout, #telehealth and #telemedicine
  • The processing chews up a fair amount of CPU
  • The healthapps viz did not work so well, and I’m not sure why.
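For readers unfamiliar with the centrality measures mentioned above, closeness centrality rewards nodes that sit only a short hop from everyone else. A stdlib Python sketch of the computation via breadth-first search (this follows the textbook definition, not necessarily Gephi's exact implementation):

```python
# Closeness centrality of a node: (number of reachable nodes) divided by
# (sum of shortest-path distances to them). Computed with a plain BFS
# over an undirected edge list.
from collections import deque, defaultdict

def closeness(edges, node):
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    dist = {node: 0}
    queue = deque([node])
    while queue:                      # breadth-first search from `node`
        current = queue.popleft()
        for neighbour in graph[current]:
            if neighbour not in dist:
                dist[neighbour] = dist[current] + 1
                queue.append(neighbour)
    reached = [d for n, d in dist.items() if n != node]
    return (len(reached) / sum(reached)) if reached else 0.0
```

On a star-shaped network the hub scores 1.0 and the spokes score lower, which is exactly the "bluer means closer to the topic" pattern in the visualisations.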

The limitations as of now:

  • This isn’t the #FDAApps conversation from July 2011 onwards; it is the mhealth conversation of 28 May 2013
  • I’m not entirely sure it’s possible to construct an archive from past events – I need to look into this further
  • I think I can code a program that pings the Twitter API automatically every 20 seconds and adds the results to the dataset. If I can build this, we can start tracking data from now on for issues/conversations we think are important. I am doing this manually now, and it is really laborious.
  • There are conversations around #apps in general here too. A proper analysis will likely need to clean the raw data further to eliminate any inaccuracies in the representation
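The polling program described in the third limitation can be sketched quite simply: call a fetch function on a fixed interval and fold new results into the dataset. In this Python sketch the fetch function is injected, so the actual Twitter call stays swappable (and the loop testable without hitting the network); the 20-second interval matches the figure above:

```python
# Minimal polling loop: every `interval_seconds`, call fetch() and add
# any items not already in the dataset. fetch() is whatever callable
# actually talks to the Twitter API; it is injected rather than
# hard-coded so this loop can be exercised with a stub.
import time

def poll(fetch, dataset, iterations, interval_seconds=20):
    for _ in range(iterations):
        for item in fetch():
            if item not in dataset:
                dataset.append(item)
        if interval_seconds:
            time.sleep(interval_seconds)
    return dataset
```

In production the loop would run indefinitely and persist the dataset between iterations; `iterations` is bounded here only to keep the sketch self-contained.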

Any input on this process would be greatly appreciated and if you have any insights on the findings, please comment below.