
Original image by Dean Terry

Improving the Social Network Analysis Methodology

We have been collecting data for just shy of a year now and have been developing our Twitter social network analysis methodology for a little longer than that. As you might recall, we have been following the mobile health conversation via the #mHealth hashtag and have now finalised the collection of those data. Processing is almost complete, so we can progress to the next stage of ethnography to further understand what we have collected.

We have been improving the methodology as we go, and the last piece of assistance we received was some custom code for the Gephi program. Recently, we have been talking with colleagues from the University of Wollongong’s SMART Infrastructure Facility, who have been developing a collection process for Twitter data. Their project, CogniCity, relates to flooding information in Indonesia; however, Tom Holderness has been kind enough to share his work on GitHub.

Once we install this JavaScript application, which runs on NodeJS, we will have an automated version of the manual process we have been struggling with for the past year. Better still, the code is customisable, so researchers can query the Twitter Stream API for the specific data they require. You can read more about the CogniCity NodeJS application on GitHub.
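To give a feel for what the Stream API’s `track` filtering does, here is a rough, stdlib-only Python sketch of the matching behaviour – an illustration only, not the CogniCity code itself (which is NodeJS):

```python
def matches_track(tweet_text, track_terms):
    """Return True if the tweet contains any tracked term.

    Rough approximation of the Twitter Stream API 'track' parameter:
    matching is case-insensitive, and a hashtag term matches the word
    with or without its leading '#'.
    """
    # Tokenise on whitespace and strip trailing punctuation from each token
    tokens = {t.strip('.,!?:;').lower() for t in tweet_text.split()}
    for term in track_terms:
        term = term.lower()
        if term in tokens or term.lstrip('#') in tokens:
            return True
    return False
```

A poller built on this idea would simply keep the tweets for which `matches_track` returns True.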

If we can improve the processing speed further, we will have a research prototype that can be shared with other researchers who are interested in Twitter social network analysis – hopefully a post soonish will reveal this!


Improved Gephi processing through Java RAM allocation – downloadable

Recently, our social network analysis methodology hit a snag: the computer I am using started to crash when processing our larger data sets. The data sets are not extremely large at this stage (approximately 8MB Excel sheets with about 80 000 lines of text), but they are nonetheless too big for my MacBook Pro to handle. Just to remind you, we are using the open-source package Gephi as our analytics software.

I started looking into virtual servers, where Amazon EC2 is the benchmark in this domain. Their data centres appear to be located in North America, e.g. around San Francisco, and I have been advised that Amazon’s geographical location is good when scraping data from technology companies like Twitter and Facebook, who host their data in a similar region. However, Amazon does appear to be a little too expensive for the research budget – although it is very tempting to spin up some servers to collect and process our data quickly.

The second option was to lean on NeCTAR, the national research cloud infrastructure for Australian researchers. I established two medium virtual servers (2 vCPU, 8GB RAM, 60GB local VM disk) and installed an Ubuntu operating system, but had difficulty in talking with the system (happy to take input from anyone here).

Then, we had a meeting with Information and Communication Technology (ICT) people at the University of Sydney who have been very helpful in their approach. We have been liaising with Justin Chang who provided us with an improved version of Gephi that essentially enables us to use more RAM on my local machine to process the data sets. Justin provided me with a disk image that I installed, tested and was able to get moving with the analysis again.

I asked if I could share this version of Gephi with our readers, to which he agreed – and he provided a step-by-step guide on how he created the RAM-boosted version of Gephi:

- Download the Gephi .dmg file from: https://gephi.org/users/download/

- Open the .dmg file

- Copy the Gephi.app file to a folder on your desktop

- Ctrl + Click the Gephi.app file and click Show Package Contents

- Navigate to Contents > Resources > Gephi > etc and open the gephi.conf file in a text editor

- Change the maximum Java RAM allocation:

FROM:

default_options="--branding gephi -J-Xms64m -J-Xmx512m -J-Xverify:none -J-Dsun.java2d.noddraw=true -J-Dsun.awt.noerasebackground=true -J-Dnetbeans.indexing.noFileRefresh=true -J-Dplugin.manager.check.interval=EVERY_DAY"

TO:

default_options="--branding gephi -J-Xms1024m -J-Xmx2048m -J-Xverify:none -J-Dsun.java2d.noddraw=true -J-Dsun.awt.noerasebackground=true -J-Dnetbeans.indexing.noFileRefresh=true -J-Dplugin.manager.check.interval=EVERY_DAY"

This enables Gephi to utilise up to 2GB of RAM when processing data. You can allocate any amount of RAM here, as long as it is less than your system’s RAM.

- Save the file

- Run the application ‘Disk Utility’

- From within Disk Utility, click File > New > Disk Image from Folder, select the folder that you created on the desktop, and then click Image.
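The gephi.conf edit above can also be scripted if you find yourself doing it often. Here is a minimal Python sketch – the file path in the usage comment is illustrative only; point it at wherever your copy of Gephi.app lives:

```python
import re

def bump_gephi_ram(conf_text, xms="1024m", xmx="2048m"):
    """Rewrite the -J-Xms and -J-Xmx flags inside a gephi.conf string."""
    conf_text = re.sub(r"-J-Xms\w+", f"-J-Xms{xms}", conf_text)
    return re.sub(r"-J-Xmx\w+", f"-J-Xmx{xmx}", conf_text)

# Usage (path is an example only; adjust to your copy of Gephi.app):
# path = "Gephi.app/Contents/Resources/gephi/etc/gephi.conf"
# with open(path) as f:
#     patched = bump_gephi_ram(f.read())
# with open(path, "w") as f:
#     f.write(patched)
```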

You can download the DMG with the two versions of Gephi (1GB and 2GB).


Mapping the mobile health policy actors: Who is talking to whom on Twitter, and to what effect?

This is a methodological post on some social network analysis work we are developing for Moving Media. The premise for the SNA research is reasonably simple:

Task: Perform social network analysis around the Twitter conversations about the FDA’s proposed health apps guidelines, posted July 19th 2011:

Public brief: http://www.fda.gov/forconsumers/consumerupdates/ucm263332.htm

Document: http://www.regulations.gov/#!documentDetail;D=FDA-2011-D-0530-0001

Comments and Submissions: http://www.regulations.gov/#!docketBrowser;rpp=25;po=0;dct=PS;D=FDA-2011-D-0530;refD=FDA-2011-D-0530-0001

Aim: To map the dispersed network of actors discussing the FDA policy consultation process in social media channels, visualising their relative influence and communicative relationships.

After some initial Twitter research, we found the #FDAApps hashtag to be the conversation we wanted to analyse. The only drawback is that this conversation seems to be unreachable – the Twitter API didn’t return anything, although the conversation is there. Any suggestions on this would be appreciated. Following on from this, I did a search across four conversations: #mhealth, #healthapps, #FDA and #apps. It is an experiment in both the methodology and the content.

Here’s the breakdown on the process (and it gets a bit nerdy from here):

1. I tracked four Twitter conversations (#mhealth, #healthapps, #apps & #FDA) and processed the data through the Twitter API, Open Refine and then into Gephi. I imported the .csv file into Open Refine to extract the @replies and the #hashtag conversations – a process of deleting much of the data to produce a .csv file Gephi likes. I then imported the data into Gephi, ran Force Atlas and Fruchterman Reingold layouts, and ranked the labels by degree. Next, I ran a Network Diameter statistic across the network (Average Path Length: 1.0508474576271187, Number of shortest paths: 236), which enabled me to colour the labels by their betweenness centrality on a scale of 0–6, eccentricity 0–2 and closeness centrality 0–1.5. Finally, I ran a modularity statistic across it (Modularity: 0.790, Modularity with resolution: 0.790, Number of Communities: 18). 18 communities!
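The cleaning and degree-ranking steps above can be approximated in a few lines of stdlib Python. This is a simplified sketch of the idea only – Open Refine and Gephi do the real work, and the example tweets are made up:

```python
import re
from collections import Counter

MENTION = re.compile(r"@(\w+)")

def mention_edges(rows):
    """Build (author, mentioned_user) edges from (author, tweet_text) rows,
    mirroring the @reply extraction done in Open Refine."""
    edges = []
    for author, text in rows:
        for target in MENTION.findall(text):
            edges.append((author.lower(), target.lower()))
    return edges

def degree_ranking(edges):
    """Rank nodes by (undirected) degree, as used to size the Gephi labels."""
    deg = Counter()
    for src, dst in edges:
        deg[src] += 1
        deg[dst] += 1
    return deg.most_common()
```

Feeding the resulting edge list into Gephi as a .csv of source/target pairs is essentially what the Open Refine cleaning stage produces.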

 

2. I did this for each set of data – that is, #mhealth, #healthapps, #apps and #FDA. Each process provided a visualisation that demonstrates the key conversation hashtags and the most significant people in those conversations. Here’s the preliminary analysis:

#healthapps conversation

#FDA conversation

#apps conversation

#mhealth conversation

3. I then combined the cleaned data of the four conversations together to create a ‘super set’ to understand the broader ecology of the policy discussion around mhealth and health apps.
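Building the ‘super set’ amounts to concatenating the cleaned edge lists and dropping duplicates. A minimal sketch, assuming each cleaned conversation is a list of (source, target) pairs:

```python
def build_super_set(*edge_sets):
    """Combine several cleaned edge lists into one de-duplicated 'super set',
    preserving first-seen order so earlier conversations keep priority."""
    seen = set()
    combined = []
    for edge_set in edge_sets:
        for edge in edge_set:
            if edge not in seen:
                seen.add(edge)
                combined.append(edge)
    return combined
```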

The combined conversation around healthapps, mhealth, apps and FDA

Preliminary analysis: What we know (and this is my first critical analysis of this process – it could change as I become more aware of what is going on here):

  • The conversation between the FDA and healthapps is stronger than the other two topics due to its location in the network
  • @Vanessa_Cacere is the most prominent Twitter user in #apps (she often retweets our tweets too!)
  • @referralIMD is prominent in #mhealth
  • @MaverickNY is prominent in #healthapps
  • The bluer the colour of the actor, the closer they are to the topic – ‘closeness centrality’
  • @Paul_Sonnier [https://twitter.com/Paul_Sonnier] is extremely significant in the overall conversation – ‘betweenness centrality’
  • There are probably some other significant terms here, like #digitalhealth, #breakout, #telehealth and #telemedicine
  • It sucks some CPU processing power
  • The healthapps viz did not work so well, and I’m not sure why.

The limitations as of now:

  • This isn’t the #FDAApps conversation from July 2011 onwards; this is the mhealth conversation of 28 May 2013
  • I’m not entirely sure it’s possible to construct an archive from events past – I need to look into this further
  • I think I can code a program that pings the Twitter API automatically every 20 seconds and automatically adds the results to the dataset. If I can build this, we can start tracking data from now on issues/conversations we think are important. I am doing this manually now, but it is really laborious.
  • There are conversations around #apps in general here too. A proper analysis will likely need to clean the raw data further to eliminate any inaccuracies in the representation
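On the automation point above: the core of such a poller is just de-duplicating each new batch against the growing dataset by tweet id. A hedged sketch – the 20-second scheduling and the actual API call are left out, `tweet["id"]` follows Twitter’s payload, and everything else is illustrative:

```python
def merge_new_tweets(dataset, new_batch):
    """Append only previously unseen tweets (by id) to the dataset.

    Intended to run after each poll of the search API, e.g. on a
    20-second timer, so repeated queries don't duplicate rows.
    """
    known_ids = {t["id"] for t in dataset}
    for tweet in new_batch:
        if tweet["id"] not in known_ids:
            dataset.append(tweet)
            known_ids.add(tweet["id"])
    return dataset
```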

Any input on this process would be greatly appreciated and if you have any insights on the findings, please comment below.