How to: create a bubble chart from a Google Spreadsheet using D3.js

Earlier in this series I discussed how to get data out of a Google Spreadsheet in JSON format using an API call, and how to convert the JSON data into an array. Now I’m going to talk about how to visualise the data as a bubble chart on a web page, using the fantastically powerful JavaScript library D3.js, aka Data Driven Documents.

For this exercise I’ve created a Google Spreadsheet representing some information about a fictional group of people with a count of their interactions. You can see the spreadsheet here.

Following the instructions in the previous How To guides we can get this data using JSONP; you can see the result for yourself here.

So, having got the source data, how are we going to visualise it?

Well, the first step is to transform the data once again into a structure that is more suitable for the D3.js techniques we want to use. In this case we’re creating a bubble chart using a method called d3.layout.pack(). This takes a tree structure of objects, and fits them into an area based on the value property of each leaf node. In our example, the value we’re interested in is the number of interactions – so team members with more interactions will be represented by larger bubbles within the visualisation.

So how do we do that? Well, the easiest approach is to iterate over each row in the data, and create an object for it with a name, a value and a group. (The group property in this case is the team the person belongs to.) These “leaf” objects can then be added to a “root” object to make a tree in JavaScript.

The code for this looks like so:

    var root = {};
    root.name = "Interactions";
    root.children = [];
    for (var i = 0; i < dataframe.length; i++) {
      var item = {};
      item.name = dataframe[i][0];
      item.value = Number(dataframe[i][1]);
      item.group = dataframe[i][2];
      root.children.push(item);
    }

So, taking it one line at a time – we create a root object, give it a name, and create a new empty array inside it called children. We then go through each row in the dataframe and create an item object for each one, mapping the name, value and group properties to the correct columns in the spreadsheet. Each item is added to the children array.

We now have a tree of objects, each of which has a name, a value and a group.

How do we create a nice-looking bubble chart with them?

First we set up the d3.layout.pack function so it can calculate the size and position of the bubbles. We do this using:

    var bubble = d3.layout.pack()
                   .sort(null)
                   .size([960, 960])
                   .padding(1.5);

If you were to now call …

bubble.nodes(root)

… and take a look at the output, you would see that each “leaf” object now has several new properties: “x”, “y” and “r”. The “x” and “y” properties give the position of the bubble within the chart, and the “r” property is the radius of the bubble.
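
To make that concrete, here’s an illustrative sketch of what one leaf object looks like after the layout has run – the name and numbers are invented for this example, not taken from the real spreadsheet:

```javascript
// Illustrative only: the shape of one leaf object after bubble.nodes(root).
// The x, y and r values here are made up; D3 computes the real ones.
var exampleLeaf = {
  name: "Alice",     // from the spreadsheet's name column
  value: 42,         // number of interactions
  group: "Team A",   // team the person belongs to
  x: 480.5,          // horizontal centre of the bubble within the chart
  y: 213.7,          // vertical centre of the bubble
  r: 35.2            // radius of the bubble, scaled from value
};
console.log(exampleLeaf.name + " -> r = " + exampleLeaf.r);
```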

(How this is actually drawn is up to you – you could equally well take this information and draw the whole thing using hexagons or squares or spheres. But let’s stick to circles for now.)

Next we need to create a graphic for the chart in our HTML page. D3 can make this for us like so:

    var svg = d3.select("body")
                .append("svg")
                .attr("width",960)
                .attr("height", 960)
                .attr("class","bubble");

For each “leaf” we then need to create a graphical element. D3.js uses a very clever approach for this:


    var node = svg.selectAll(".node")
                  .data(bubble.nodes(root)
                  .filter(function(d){ return !d.children;}))
                  .enter()
                  .append("g")
                  .attr("class","node")
                  .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });

The key thing here is the data() method. We pass this the nodes computed by the bubble layout we created earlier from our root object. (We also filter out the root node itself as we’re not interested in drawing that, just the individual leaf nodes.) The enter() method then returns a placeholder for each leaf node that doesn’t yet have a matching element on the page; for each one we append a <g> element to the <svg> element in our HTML document, and apply a transform to place it at the correct x and y coordinates within the chart.

This still doesn’t draw anything interesting, so let’s make some circles for each node, and give them a label:

    var colour = d3.scale.category10();
    node.append("circle")
        .attr("r", function(d) { return d.r; })
        .style("fill", function(d) { return colour(d.group); });
    node.append("text")
        .attr("dy", ".3em")
        .style("text-anchor", "middle")
        .text(function(d) { return d.name; });

The result of all this is a nice diagram! Click to view it full size; you can also see the live version here.

A bubble chart

The complete source code for this How To guide can be found on Github.

If you’d like to know more about data visualisation, you can get in touch with us at researchsupport@it.ox.ac.uk.

Posted in Data modelling and migration

How to: convert Google Spreadsheet JSON data into a simple two-dimensional array

In a previous post I explained how to extract JSON data from a Google Spreadsheet via an API call.

However, when you actually get the data, the JSON isn’t really in the kind of structure you would imagine. Instead of a matrix of rows and columns, Google returns an RSS-style linear feed of “entries” for all of the cells!

So how to convert that into something that you can use in D3.js or R?

We need to iterate over each entry in the feed, and push the values into an array, moving to a new “line” in the array each time we get to a cell that is at the beginning of a row in the spreadsheet. I’ve written a JavaScript function to do the work necessary; you can get the code on Github.
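
The function on Github does the real work, but the core idea can be sketched like this – a simplified version (not the actual code), assuming the cells feed format, where each entry carries its position in entry.gs$cell:

```javascript
// A sketch of the conversion (not the actual Github function): walk the
// linear feed of cell entries, starting a new row whenever we hit column 1.
function feedToArray(json) {
  var rows = [];
  json.feed.entry.forEach(function (entry) {
    var cell = entry.gs$cell;   // holds row, col and the cell's text ($t)
    if (cell.col === "1") {     // column numbers arrive as strings
      rows.push([]);            // first column starts a new row
    }
    rows[rows.length - 1].push(cell.$t);
  });
  return rows;
}

// Exercising it with a hand-made feed fragment:
var mockFeed = { feed: { entry: [
  { gs$cell: { row: "1", col: "1", $t: "Name" } },
  { gs$cell: { row: "1", col: "2", $t: "Interactions" } },
  { gs$cell: { row: "2", col: "1", $t: "Alice" } },
  { gs$cell: { row: "2", col: "2", $t: "42" } }
] } };
console.log(feedToArray(mockFeed)[1][0]); // → "Alice"
```

Note this sketch assumes every cell is populated; the feed omits blank cells entirely, which a more robust version would need to handle.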

Running this function we can then get the values from the resulting array using something like:

data[1][5]

Note that the function doesn’t differentiate the labels from a header row (which is something you’d commonly see, and which R would usually expect) so there is definitely room for improvement in the function.
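
One simple refinement, sketched here with a hypothetical helper name, would be to peel off the first row as labels before handing the rest to D3.js or R:

```javascript
// A sketch: split a 2D array into header labels and data rows.
// (splitHeader is a made-up helper, not part of the Github function.)
function splitHeader(rows) {
  return {
    headers: rows[0],     // first spreadsheet row as column labels
    data: rows.slice(1)   // remaining rows as the actual values
  };
}

var table = splitHeader([["Name", "Interactions"], ["Alice", "42"]]);
console.log(table.headers[0]); // → "Name"
```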

Posted in Data modelling and migration

How to: get data out of a Google spreadsheet using JSONP

Google Drive spreadsheets are a great way to collect data using their handy forms, but the built-in visualisation tools sometimes aren’t sufficient. What if you wanted to do a visualisation using D3.js, for example?

Google has an API for obtaining the data using JSONP; this means that the data is exposed in JSON format using a callback function – this gets around the “Same Origin Policy” restriction on accessing data from a different web domain.

To do this, you need to:

  1. Make your spreadsheet public
  2. Get the ID of your spreadsheet and worksheet
  3. Include a script tag calling the API
  4. Write a callback function to use the data

1. Make your spreadsheet public

In Google Drive, go to File > Publish to the web … and click Publish. You can only obtain data from a sheet that is publicly readable.

2. Get the ID of your spreadsheet and worksheet

This isn’t as obvious as it sounds. Your spreadsheet URL will contain some sort of long identifier, but this isn’t the only information you need – you also need the shorter worksheet ID as well.

You can find the worksheet ID by calling a URL constructed like so:

https://spreadsheets.google.com/feeds/worksheets/your-spreadsheet-id/private/full

Note that you must be logged in to Google Drive to do this, or the URL will return nothing at all!

Calling this URL will return an RSS feed that will contain something like this:

<entry>
<id>https://spreadsheets.google.com/feeds/worksheets/your-spreadsheet-id/private/full/o10c0rt</id>
<updated>2014-10-08T11:35:31.493Z</updated>
<category scheme="http://schemas.google.com/spreadsheets/2006" term="http://schemas.google.com/spreadsheets/2006#worksheet"/>
<title type="text">Form Responses 1</title>

The information you need is in the <id> tag. The last part of the id is the worksheet identifier.
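
If you’d rather extract it programmatically than by eye, something like this would do (a sketch – worksheetIdFromEntryId is a made-up helper name):

```javascript
// A sketch: the worksheet ID is simply the last path segment of the <id> URL.
function worksheetIdFromEntryId(idUrl) {
  var parts = idUrl.split("/");
  return parts[parts.length - 1];
}

console.log(worksheetIdFromEntryId(
  "https://spreadsheets.google.com/feeds/worksheets/your-spreadsheet-id/private/full/o10c0rt"
)); // → "o10c0rt"
```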

3. Include a script tag calling the API

In your HTML, include a script tag, like so:

<script src="https://spreadsheets.google.com/feeds/cells/your-spreadsheet-id/your-worksheet-id/public/values?alt=json-in-script&callback=sheetLoaded"></script>

Obviously you need to replace “your-spreadsheet-id” and “your-worksheet-id” with the values from the previous step.

4. Write a callback function to use the data

In your JavaScript code you need to implement the callback function named in the script tag, so in the above example we need to do something like:

function sheetLoaded(spreadsheetdata) {
 // do something with spreadsheet data here
 console.log(spreadsheetdata);
}
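
To see what you’ve actually received, the callback can walk the feed’s entries. This sketch (still named sheetLoaded to match the script tag, but returning the formatted strings for convenience) assumes the cells feed, where each entry exposes its position and text via the gs$cell property:

```javascript
// A sketch of a slightly more useful callback: report every cell with
// its position. Returning the strings makes the function easy to test.
function sheetLoaded(spreadsheetdata) {
  return spreadsheetdata.feed.entry.map(function (entry) {
    var cell = entry.gs$cell; // row, col and $t (the cell's text) live here
    return "R" + cell.row + "C" + cell.col + ": " + cell.$t;
  });
}

// The real data arrives via the script tag; here is a hand-made fragment:
var sample = { feed: { entry: [
  { gs$cell: { row: "1", col: "1", $t: "Name" } },
  { gs$cell: { row: "2", col: "1", $t: "Alice" } }
] } };
sheetLoaded(sample).forEach(function (line) { console.log(line); });
// prints "R1C1: Name" then "R2C1: Alice"
```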

Job done! Now you can actually start doing the clever D3 visualisation part…

Posted in Data modelling and migration

Research at Risk: Report from the Jisc Co-Design Workshop

On the 22nd of September, I was invited to a Jisc Co-Design event on “Research at Risk”, with participants from organisations such as UCISA, RLUK, RUGIT, DCC, and of course some universities, including yours truly representing both the University of Oxford, and also as a special bonus the University of Bolton.

What follows are my completely informal and unofficial notes of the event.

Looking for the Gaps

This was about the need to properly map the entire architecture for RDM to identify where the gaps and joins are to inform decision making at different levels.

One issue we face is that many research data management solutions are barely past the prototype stage. Rather than build completely new services, it would make more sense to look at the solutions that are closest to matching requirements, such as CKAN and HYDRA, and work together to make them complete. The OSS Watch report on RDM tools highlighted the fact that many of the tools developed had very poor sustainability prospects, linked to the fact that they were developed with a small local user base and without long term sustainability planning. The next step could be to focus on a few solutions and ensure they are fit for purpose and sustainable.

Likewise, on the storage side there is already OwnCloud, which several institutions are interested in developing further. As an open source project, we can work on this collaboratively to ensure we have a good solution, while Jisc can work on the matching service offering for institutions that don’t have their own data centre. Anyway, more on this later.

At a higher level, this whole area seems to be about taking stock of where we are now, which seems a pretty sensible thing to do.

What we know

Similar to the previous topic, but really about putting together the advice, guidance and lessons learned. UCISA were very keen on this one.

An interesting thing I learned about here was the “4Cs” cost exchange project that Jisc (or DCC, I wasn’t sure which) are engaged in, which seems to be principally about baselining IT costs against peers, including in areas such as RDM.

The Case for RDM

There seemed to be consensus that there is a gap between ideology and practice, and that while there is plenty of talk around mandates from the Research Councils and journals, there hasn’t really been very much from the researcher perspective, and this is something that needs to be addressed. So making the case, not from a mandate perspective, but from a benefits to researchers perspective.

One issue here is the different demands on research data depending on whether the intent is compliance, validation, reuse, or engagement. To make data truly reusable requires more effort than simply dumping it in an archive, but also can yield more benefits to researchers.

Changing Culture

This was seen as probably a good thing longer-term, but it wasn’t clear exactly what it would involve, or what role Jisc would play. For example, the previous three items taken together might constitute actions leading towards culture change. This also encompassed areas such as treating RDM as a professional skill and providing support for developing its practice. Another practical area is information sharing between institutions.

Making data count

This idea was all to do with metrics and measures, though it wasn’t clear what those metrics might look like. There could be some progress by combining existing measures and sources, such as DataCite, and then seeing where that leads.

Simplifying Compliance

There was an amusing comparison between RDM compliance and Health and Safety. However, we have the current situation where compliance is not standardised between the Research Councils, or between the Councils and the journals that mandate RDM. Help and support on compliance is also outdated, or difficult to find.

Another topic we discussed was something I’ve dubbed (wearing my University of Bolton hat) as “barely adequate research infrastructure for institutions that only give half a toss” – basically, many Universities are not research intensive and do not have dedicated resource in either Library or IT Services to support RDM, or even Open Access. Instead, a simple hosted solution with a reasonable subscription rate would be absolutely fine.

What was interesting is that some of the research intensive universities were also keen on this idea – can we just have ePrints+CKAN+DataCite+etc all set up for us, hosted, Shibbolized, configured to meet whatever the Research Councils want, and ready to just bung on a University logo?

Simplifying Data Management Plans (DMP)

There seemed to be a general feeling that it isn’t clear who should be writing DMPs, or why they should be doing it. In some cases it seems that research support staff are producing these instead of researchers, which seems sensible. The general feeling is that creating a DMP is something you do for someone else’s benefit.

Some institutions have been customising DMPOnline. Interestingly, one area that gets explored is “model DMPs” or “copy and paste”. I somewhat cheekily suggested a button that, once pressed, generates a plausible-sounding DMP that doesn’t actually commit you to anything.

In any case, if compliance requirements are simplified and standardised (see above) then this would also in effect simplify the needs for DMPs.

Other ideas explored included being able to export a DMP as a “data paper” for publication and peer review, though I’m not sure exactly how that contributes to knowledge.

So again we have the issue of what’s in it for researchers, and the tension between treating RDM as a hoop to jump through, or something with intrinsic benefit for researchers.

Metadata

There was a case made for this by DCC (Correction – actually it was Neil Jacobs – thanks Rachel!), which is basically about standardising the metadata profile for archiving research data, working on DataCite, CRIS, PURE, ORCID, achieving consensus on a core schema and so on.

This sparked off a debate, my own contribution being “it may be important for some, but don’t start here” which seemed to resonate with a few people.

There was also the interesting area of improving the metadata within the data itself – for example making the labels within data tables more explanatory to support reuse – rather than just adding more citation or discovery metadata.

Storage as a service

This was the only major “techie” discussion, and it was interesting to see how much convergence there was between the Universities present at the event. So we had the issue of how we work with Dropbox (which many researchers really like), through to how we make best use of cloud storage services as infrastructure.

I asked whether Jisc had met with Dropbox to discuss potential collaboration, and apparently they have, though it seems not with great success. This is a pity, as one potential “win” would be for researchers to be able to make use of the Dropbox client tools, but synchronised with a UK data centre, or even institutional data centres.

Another interesting dimension was that several institutions have been looking into OwnCloud as a Dropbox replacement, and there was strong interest in collaborating to add any missing capabilities to OwnCloud (it’s open source) to bring it up to parity. Maybe that’s something Jisc could invest in.

Preservation

I hadn’t met Neil Grindley before, and was surprised to see he bore more than a passing resemblance to the late SF author Philip K Dick. But anyway, onto the topic.

Preservation (and managed destruction) is one of those topics that people are either passionate about, or that sends them into a kind of stupefied trance. I’m one of the latter, I’m afraid. It’s probably very important.

The only thing I can add to this is that the issue of preserving not just the data, but the software needed to process it, is not something that has been considered as part of the scope of this programme by Jisc.

It’s also nice that they are considering using hashes to verify data integrity.

The Voting

Using the ultra scientific method of putting numbered post-it notes onto sheets of paper, the ranking of ideas looked like this:

Activity area                          Raw data       Votes   1  2  3  4  5
Looking for the gaps                   224535343          9   0  2  3  2  2
What we know so far                    5245154            7   1  1  0  2  3
Case for sharing research data         1144221211        10   5  3  0  2  0
Changing the culture of research       4                  1   0  0  0  1  0
Measuring the impact                   215125             6   2  2  0  0  2
Simplifying compliance                 34232333411       11   2  2  5  2  0
Simplifying data management planning   255355213          9   1  2  2  0  4
Data about data                        35525              5   0  1  1  0  3
Sharing the costs of data storage      32444              5   0  1  1  3  0
Data for the future                    12541143           8   3  1  1  2  1

Interestingly enough, although “Storage” wasn’t ranked highly, it was the topic that seemed to spark the most discussion amongst the university representatives after the event closed, and several of us pledged to work together in future on our various approaches to solving these issues.

Funding?

Of course, it being a Jisc event, we wanted to know if there was going to be any funding!

The good news is that, as well as funding a number of larger projects already through capital funding (e.g. BrissKit), there are plans afoot for a “Research Data Spring” competition for innovation projects, I guess following a similar pattern to the successful Summer of Student Innovation competition but targeted at researchers and IT staff in universities.

More!

If you’d like to know more about this event, and read the “official” notes, then just get in touch with us at researchsupport@it.ox.ac.uk.
Posted in News

Jisc Summer of Student Innovation

This summer, Jisc ran its second Summer of Student Innovation. Via the Jisc Elevator, students can pitch their idea to improve the student experience using technology. Successful projects receive £5000 and mentorship over the summer to develop and realise their idea. Around 20 projects received funding this year, covering student induction, study tools, learner feedback, and open access. Some of the successful projects included Open Access Button, Lingoflow, and Vet-Revise.

For the second year running, members of Research Support were invited to attend the SOSI Summer School events, providing advice and guidance on technical implementation, legal issues and business models to the projects.  Advice provided this year included introducing version control, good software engineering practices, identifying potential commercial partners, exploring different sustainability options and business models, and assessing technical feasibility of software designs.  If your research group could use advice in these or related areas, please contact researchsupport@it.ox.ac.uk to discuss your needs.

Image Credit: Innovation Lab

Posted in News

Software Sustainability – Working with WWARN

Over the summer we defined several new Research Support specialisms, covering software selection, intellectual property, and software sustainability. These derive in part from our experiences in running OSS Watch, the national centre of expertise in open source software and open development.

Software Sustainability is all about delivering long-term value from investments in software, particularly where software is being developed as part of funded research activities. Researchers often need to build software tools as part of their work on projects, but what happens after the project ends?

Our advice is – don’t wait to find out! Instead, it’s important to invest in sustainability as early as possible – preferably right at the start of a project, or at least long enough before the project ends to ensure there are both the time and the resources to develop a credible sustainability plan.

Over the summer we were contacted by WWARN – the World Wide Antimalarial Resistance Network – to talk about the sustainability of several pieces of software they had developed.

The WorldWide Antimalarial Resistance Network (WWARN) is led by Dr. Philippe Guerin, and is based in Africa, Asia and the Americas, with a coordinating centre and informatics in Oxford at the Centre for Tropical Medicine and Global Health. A key part of WWARN is its platform for collecting and mapping reports of resistance to antimalarial drugs, and the web tools used for displaying this data on the WWARN website such as the WWARN Molecular Surveyor.

A screenshot of the WWARN Molecular Surveyor

The WWARN team had put a lot of effort into these tools, and were keen to see how they could continue to be developed and used, either to support researchers looking at other diseases, or for completely different fields of research where there may be similar needs.

To do this, they needed to develop a strategy for sharing the software, engaging with new contributors from outside WWARN, and a model for governing its future development.

Mark Johnson and I from Research Support have been working with WWARN for several months now laying the groundwork for the release of the software and supporting outreach activities. WWARN have selected a license (BSD), put their source code on Github, created documentation, and developed an advocacy plan for increasing awareness of the project and attracting users and developers.

We hope that this effort will provide value for the WWARN project in terms of driving further improvements to the software, and ensuring it is viable long into the future.

We’re also keen to see adoption and use across Oxford – we’re sure WWARN aren’t the only researchers needing this kind of visualisation software, and the more groups that use it, the more sustainable the software becomes.

If you’re interested in using the software, post a message to the Maps Surveyor Google Group.

If your research group is planning to develop software, or has already done so and wants to talk to us about sustainability, then contact us at researchsupport@it.ox.ac.uk.

Posted in News

3-month report: June to August 2014

Highlights

IT Services internship: Suzy and Adelina setting up for an interview

Congratulations to the ORDS project team! We can now offer a service for researchers to securely create, populate and share relational databases. This is the culmination of effort from across IT Services over several years, during which the team navigated a complicated funding environment. James W, Meriel and Mark will now promote and support researchers in using the service as part of an early-life support IT service project.

This has also been another great year for the Digital Humanities Summer School. I won’t say much here because James C has written a comprehensive DHoxSS 2014 report himself. Needless to say, congratulations to James, Sebastian, Scott, Kathryn W, and all the teachers involved in giving workshops.

We hosted Adelina Tomovo and Suzy Shepherd as part of the IT Services Internship programme. Suzy and Adelina worked with Rowan to create the new Open Spires website. The website is based on what we learnt on the Jisc Software Hub project last year and uses the same Drupal cataloguing admin interface. The new site is more focused on marketing open resources, and much of the effort went into making a compelling film and a visually appealing, well-written site. This is the first step for the Open Spires project, and Rowan will be taking it forwards over the coming months.

Progress against plans for last 3 months

Engagement statistics, June to August 2014

There’s a considerable drop in engagement this quarter compared with others, but this is to be expected during June to August when most researchers are away. We use this period to focus on projects, events and web resource development (and indeed go on holidays ourselves).

  1. With regards to project planning (1) we are not involved in the Matrix replacement and X5 updates, (2) the ORDS ELS project has been approved, (3) OxGarage project brief is about to be started, (4) Oxford BNC-web PID will be submitted to the next review, (5) we will take an updated Live Data PID to the RDM working group in November, (6) we are not involved in the researcher dashboard at this stage, (7) the datastage project has been renamed to ReDDs and the scope changed to just focus on deposit from ORDS to ORA:Data, and the project request has been approved.
  2. DHOxSS 2014 was a resounding success, including special recognition for the organisers from the Director.
  3. The first phase of the open Oxford portal, called OxSpires, went live at the end of August.
  4. CatCore, Diva and OxGame projects are closed.
  5. James has passed the details for another ‘What to do with data’ series to the ITLP team.
  6. ORDS is now live

Plans for next 3 months

  1. Continue to define and advertise our specialist services, and organise ourselves into teams according to the demand for each one.
  2. Grow the ORDS user community (and fix any bugs that surface)
  3. Successfully deliver our parts of current projects: VALS, Web CMS Service – Phase 1, DiXit
  4. Try to get new projects funded e.g. Live Data, OxGarage, Oxford BNC-web, Redds
  5. Contribute to the StaaS project by delivering the oxdropbox work package
  6. Initiate a new internal project (in first instance) to work out how to deliver a whole lab RDM solution such as the Hub (see Jisc Neurohub project)
  7. Deliver our communications plan
  8. Work with comms team to contribute to the new IT services discovery and engagement site
  9. Work with management to hone the focus on the research support team i.e. workshop on 28th October
Posted in News, Reports

3-month report: March to May 2014

Highlights

Magdalena Turska joins the research support team.

James Cummings has recruited the DiXiT project’s Oxford Marie Curie ITN Experienced Researcher: Magdalena Turska started work on 1 April 2014 for 20 months, investigating and implementing improvements to the requirements for publication of scholarly digital editions. During those 20 months she’ll also have secondments to King’s College London and to SyncRO Soft (who make the oXygen XML editor) in Romania.

The OSS Watch service team has been working with the VALS team to plan the Semester of Code – work experience on open source projects as part of undergraduate degree courses.  We are currently signing up projects willing to provide mentoring. Mark, Scott and Rowan will support the Semester of Code initiative throughout the next academic year.

James Cummings on the Digital Humanities summer school (click image to watch the video on YouTube)

We have recruited two interns to work on the ‘open Oxford’ website over the summer. Rowan Wilson will lead this project which aims to showcase University open resources that can be used in research and teaching.

Meriel is taking a lead on managing our communications plan. This means gathering newsworthy activities and promoting them to different channels. For instance, we had 3 items in the latest IT Services newsletter. Meriel has woven some magic: Google Analytics now claims the following headline figures for this team website:

  • Between 1st March and 29th May 2014, the Research Support team website received a total of 1208 page views.
  • This represents an increase of 59% over the previous 3-month period.
  • The most frequently visited page was the one advertising the ‘Things To Do With Data’ talk series, which received just over a quarter of all page views.
  • Next most popular were the About page (18%), the RDM courses page (11%), and the Blog (11%).

Team website stats March-May 2014

We submitted our final financial forecast for this academic year. Our IT Services and departmental recharges, along with externally-funded project work have meant that we significantly exceeded our target.

Progress against plans for last 3 months

Engagement statistics, March to May 2014

  1. The Things to do with data lunchtime talk series is underway. Most popular so far was the talk about securing data in the cloud. Meriel gave an excellent overview of the basics of research data management.
  2. (a) The Software Hub project is still open, but only until 1st October. We agreed with Jisc to use the underspend to fund 2 x 8-week internships over the summer. (b) The OxGAME project is nearing conclusion too: David and Howard will give a seminar to the Complex Systems group at York University, and attend an Understanding Risks event with Pablo Suarez in London.
  3. There are teething issues with the ORDS software which are delaying a soft launch.
  4. We are still using RT to manage support requests from researchers. The plan is to move to the new system over the summer.
  5. Our measures of engagement (advice and teaching) for the last 3 months are up in the Humanities and down for Medical Sciences, but otherwise look fairly healthy.

Plans for next 3 months

  1. Write/contribute to 7 x IT Services project initiation documents (PIDs): (1) Matrix replacement and X5 updates (2) ORDS early life support (3) OxGarage project to service (4) Oxford-BNC Web (5) Live Data (6) Researchers dashboard (7) DataStage development
  2. Deliver another successful Digital Humanities at Oxford Summer School (Directed by James Cummings).
  3. Create the open Oxford portal and manage two productive internships
  4. Finish the CatCor, DIVA, and OxGAME projects
  5. Plan the next series of Things to do with data talks to run first academic term (Oct-Dec).
  6. Launch ORDS
Posted in Reports

3-month report: Dec to Feb 2014

Highlights

Oxford University Research Data Support website

The University launched a Research Data Support website for researchers to bring together all the support researchers can currently get with respect to managing their data. Our team will work with colleagues in the libraries and admin departments to grow a comprehensive research data management service.

We have made new contacts with researchers in the Medical Sciences division who showed interest in agent-based modelling and research data management. Within just a couple of weeks we were included in 3 different research proposals (we should hear about the outcomes soon). We also plan to submit regular IT Services news to the medical division newsletter with Damion Young.

James Cummings is working with the Social Sciences research facilitator to increase awareness of our services within the division.

The graphics below show that we are making some headway with evening out the spread of our support engagements across the divisions, which is good news.

Progress against plans for last 3 months

How we engaged with researchers, Dec-Feb 2014

  1. James and Meriel have started finding speakers for a series of Research Data lunchtime talks that will be delivered through the IT Learning Programme in Trinity term. This effort is helped greatly by the launch of the new research data website. Meriel has given RDM workshops for MPLS, Social Sciences, and Medical Sciences, and is preparing one for Humanities. This equates to ~60 researchers (DPhils and postdocs) attending a 2- or 3-hour session delivered as a lecture with activities. Meriel is also making her RDM teaching materials available online.
  2. Howard has failed to close the OxGAME and Jisc Software Hub projects. Part of the delay has been uncertainty over when a second visit to Cameroon could happen, due to a lack of funds. This will now happen in April, so Howard has a concrete deadline to deliver the software. The Jisc Software Hub project will be closed soon, but changes at Jisc mean there is some uncertainty over how this will happen.
  3. The ORDS software is still under development, with the Sysdev team working on the authentication/authorization code. We are seeing new clients wanting the service, and importantly they want to use the software in very similar ways, i.e. a relational database view of a flat-file repository of data, e.g. survey results.
  4. The main development with respect to our ability to support researchers using specific tools is the knowledge management system feature of the replacement for RT. This will make it easier for us to research tools and document what we find as we go. For example, we had a query about fsQCA software last week.
  5. The 7-week ABM course went well with more students creating their own substantive models (writing code) than normal.

Plans for next 3 months

  1. This is an important financial reporting period where our priority will be to make sure we close projects and invoice for completed work. This particularly applies to the Jisc Software Hub, iSicily, Leap, Catcor and Torch projects.
  2. Focus on our web presence: (a) making sure we are linked to from relevant University sites (b) updating the SLDs for each aspect of our service (c) making more content available, e.g. teaching materials (d) using Google Analytics to monitor which content is ‘consumed’, and why. For example, the graphic below shows that we have a very low number of visits to our site, but this picks up when a link to course materials is included in a page sent to the ITLP mailing list (13th Feb).

Figure: RS website stats, Feb 2014

Posted in Reports

Four presentations to Medical Division departments due to a short message

In November we asked Alison Brindle to include the following in her division newsletter:

Ken Kahn and Howard Noble, who work for the Research Support team of Academic IT, a group within IT services, would like to offer their services to your department. They have done some interesting work on pandemics with agent-based modelling, and are building expertise in research data management, and would like to share this work with interested staff/students. For instance, they could present a talk on agent-based modelling, and its application in research, teaching and science outreach which might be useful for groups interested in exploring new tools, making games to help explain scientific concepts, and introducing students to this approach for doing research. If you would like Ken or Howard to give a talk in your department, please contact them directly at kenneth.kahn@it.ox.ac.uk and howard.noble@it.ox.ac.uk. You can find out more about the services this team provide here: http://blogs.it.ox.ac.uk/acit-rs-team/about/

This led to invitations to give presentations at:

  1. Weatherall Institute of Molecular Medicine
  2. Tropical Medicine Centre
  3. Diabetes Trial Unit
  4. Department of Public Health

Three proposals in which we are included in a work package were submitted in a very short time, and more proposals are under discussion. The turnout and interest at our presentations were very high, sometimes leading to long follow-up discussions.

Posted in Events