Is your software open or fauxpen?


Is your software project open or “fauxpen”? Are there barriers in place preventing external developers from contributing? Barriers to commercial uptake? Barriers to understanding how the software or the project itself works?

These are the kinds of questions the Openness Rating, developed by the OSS Watch team in partnership with Pia Waugh, is designed to help you answer.

Using a series of questions covering legal issues, governance, standards, knowledge sharing and market access, the tool helps you to identify potential problem areas for users, contributors and partners.

We’ve used the Openness Rating at OSS Watch for several years as a key part of our consultancy work, but this is the first time we’ve made the app itself open for anyone to use.

It requires a fair bit of knowledge to get the most out of it, but even at a basic level it’s useful for highlighting questions that a project needs to be able to answer. If you have a software project developed within your research group, you can use the app to get an idea of where the barriers might be. Likewise, you can use it when considering contributing to a software project, for example when evaluating a platform to use as the basis of work in a research project.

Some of the questions do require a bit more specialist knowledge, but you can contact our team via email at researchsupport@it.ox.ac.uk to get help.

Get started with the Openness Rating tool.

Photo by Alan Levine used under CC-BY-SA.

Posted in Software sustainability

3-month report: September to November 2014

Highlights 

As part of the DiXiT project our ER, Magdalena Turska, has been working with the TEI Consortium’s Mellon-funded TEI Simple project. Her work has involved migrating the reference corpora into the TEI Simple tagset, as well as prototyping an implementation of the TEI Simple Processing Model, which she presented at the TEI Conference in Chicago. The TEI Simple Processing Model aims to allow general specification of intended processing scenarios targeting multiple output formats, using extensions to the TEI ODD customisation language. Magdalena also travelled to Romania, spending a week at the headquarters of DiXiT partner SyncRo Soft SRL implementing additional features for the oXygen XML Editor’s TEI framework; she will return to Romania for a longer period in 2015. She also took the lead in organising and teaching an ‘Introduction to TEI’ workshop in Warsaw in October, assisted by James Cummings, on behalf of DiXiT; it was very well received and resulted in a number of potential future partnerships. Upcoming plans for Oxford’s contributions to the DiXiT project include the analysis and publication of the results of Magdalena’s survey of publication infrastructures, continued implementation of the TEI Simple Processing Model, and preparation for DiXiT Camp 3, to be held in February in Borås, Sweden, where she will again be doing some additional teaching.

Luke Norris, Ken Kahn and the fishing prototype created using the MIT App Inventor

ORDS ELS release 1.0.6 is out and fixes a number of software bugs. There are ten full research projects and fourteen trial projects in the system at the moment, which is good progress towards the target of 20 full projects by September 2015.

The ‘Things to do with data’ series is running again and we will soon release recordings of the talks online.

The Lecture capture project was funded, and we will deliver a work package evaluating current solutions for recording and sharing lectures, so that people can attend remotely and footage can be shared afterwards with the audio matched up to the presentation slides.

The Oxford Innovation platform was launched for IT Services, Libraries and Museums staff. Our team contributed many ideas and comments, and we look forward to finding out which are funded.

Luke Norris completed his one-week work experience placement from Woodgreen School in Witney. Luke has already decided he wants to be a game programmer. He investigated tools for creating a game that fishermen (and other stakeholders) would play to design a common pool resource institution (aka sustainable fishing in light of climate change and the coral reef bleaching that is happening very rapidly all around the world). Luke is 15 and in the last year of his GCSEs.

Progress against plans for last 3 months

Engagement statistics, September to November 2014

  1. Meriel is leading our communications plan and we have requested a series of changes to the research support page on the IT Services website.
  2. The ORDS early life support project is underway and the team have just submitted release 1.0.6. We have also initiated the process to hand over application ownership to the software solutions team. There are currently ten projects in the system.
  3. Current projects:
    1. VALS is a project that aims to provide “virtual placements” for computing students where they work with mentors on open source projects. So far 64 open source organisations have contributed 237 potential placements.
    2. The WebCMS project has been put on hold until January 2015, but we are supporting it by conducting requirements gathering and analysis exercises.
    3. DiXiT is a three-year Marie Curie ITN in which Oxford is employing Magdalena Turska for 20 months to look at scholarly digital edition publication infrastructure.
  4. We submitted the following project proposals to the Research Committee:
    1. OxLangCloud would provide online access, for research purposes, to the large and growing number of textual resources managed by the University, both for members of the University and for authenticated and authorised users from other HEIs.
    2. Live Data would create a pilot data visualisation service for the research community at Oxford. The project will demonstrate how data sets can be visualised to promote public understanding of research.
    3. Participant Data would investigate how we can support academic researchers who need to maintain a database of participant details, e.g. in order to conduct longitudinal social science studies, invite people in for psychology experiments, or conduct vaccination trials.
    4. Redds would scope a deposit process for archiving databases created in ORDS.
  5. We’re waiting to find out our role on the StaaS project, i.e. supporting the selection of a tool that would make it easy for researchers to store data.
  6. We decided not to look into a whole-lab RDM solution at this stage; instead we will focus on a project with software solutions that would deliver a coherent set of web services for supporting research requests, with a particular eye on more advanced requests, e.g. making research data sets available for search, browse and visualisation.
  7. The communications plan is set up and we are submitting articles regularly, e.g. to the medical sciences newsletter and IT Services communications.
  8. We have not been able to implement the changes we need to make to the IT Services website because of the recent severe security issues that have hit Drupal instances.
  9. We ran a three-hour meeting with service teams across IT Services who provide support for research, i.e. research support, ITLP, software solutions and the ARC team. The main outcomes are:
    1. Research support team to set up and implement a single point of contact for researchers and ensure that IT Services offers a high-quality advice, support and guidance service for researchers who request IT-related advice.
    2. To change the research support page on the IT Services website to reflect the full range of services we provide, i.e. ARC, ITLP, crowdsourcing and software selection.

Plans for next 3 months

  1. Update research support service reporting based on what is requested by the Research Committee
  2. Deliver or continue ongoing projects: ORDS ELS, VALS, WebCMS, DiXiT
  3. Start new projects if funded, i.e. OxLangCloud, Live Data, Participant Data and Redds
  4. Plan our work on the lecture capture project that has just received funding
  5. Create a new wall of faces page within the Openspires site to feature researchers interested in the openness agenda, and create a new documentary style video focused on research data at Oxford.
Posted in Reports

Where do Oxford researchers manage the source code for their software?

I’ve been taking a look around lately at the various places where researchers are keeping the source code for their software.

It’s not an exhaustive survey by any means (though maybe we should do one of those), but it seems that there are two common options.

Octocat, the mascot of GitHub

GitHub is, as you would expect, a very popular place to host source code. Here you can find the Micron Oxford Bioimaging Unit, for example, the Oxford Clinical Trials Unit, and the Oxford Internet Institute. It’s also where IT Services hosts its own open source projects. Even the New College JCR has its own space on GitHub!

GitHub is a good choice given that it’s well known, has good supporting services such as issue tracking and website hosting, and lets you register an organisation as the owner of multiple projects. It also allows a small number of private repositories for free, as well as unlimited public repositories.

The GitLab mascot

However, for research groups that need to manage private code repositories, or want to host the code locally, GitLab seems to be a popular option. GitLab provides many of the supporting services that you find on GitHub, such as issue tracking, but can be hosted locally with no limit on the number of private repositories, and can even be integrated with other services such as LDAP. You can find GitLab installations at Oxford in Mathematics, at the FMRIB, and at the Bodleian.

Subversion logo

There are also a few Subversion repositories around; we use one in IT Services for managing our websites (among other things), and there’s one in Computer Science. Given that these are primarily for internal use I suspect there are quite a few more out there we aren’t aware of.

If you’d like help choosing where to host software source code for your research group, send us an email at researchsupport@it.ox.ac.uk.

Posted in Software sustainability

How to: create a bubble chart from a Google Spreadsheet using D3.js

Earlier in this series I discussed how to get data out of a Google Spreadsheet in JSON format using an API call, and how to convert the JSON data into an array. Now I’m going to talk about how to visualise the data as a bubble chart on a web page, using the fantastically powerful JavaScript library D3.js, aka Data Driven Documents.

For this exercise I’ve created a Google Spreadsheet representing some information about a fictional group of people with a count of their interactions. You can see the spreadsheet here.

Following the instructions in the previous How To guides we can get this data using JSONP; you can see the result for yourself here.

So, having got the source data, how are we going to visualise it?

Well, the first step is to transform the data once again into a structure that is more suitable for the D3.js techniques we want to use. In this case we’re creating a bubble chart using a method called d3.layout.pack(). This takes a tree structure of objects, and fits them into a volume based on the value property of each leaf node. In our example, the value we’re interested in is the number of interactions – so team members with more interactions will be represented by larger bubbles within the visualisation.

So how do we do that? Well, the easiest approach is to iterate over each row in the data, and create an object for it with a name, a value and a group. (The group property in this case is the team the person belongs to.) These “leaf” objects can then be added to a “root” object to make a tree in JavaScript.

The code for this looks like so:

    var root = {};
    root.name = "Interactions";
    root.children = new Array();
    // Build one leaf object per row of the dataframe
    for (var i = 0; i < dataframe.length; i++) {
      var item = {};
      item.name = dataframe[i][0];           // first column: the person's name
      item.value = Number(dataframe[i][1]);  // second column: interaction count
      item.group = dataframe[i][2];          // third column: their team
      root.children.push(item);
    }

So, taking it one line at a time: we create a root object, give it a name, and create a new empty array inside it called children. We then go through each row in the dataframe and create an item object for each one, mapping the name, value and group properties to the correct columns in the spreadsheet. Each item is added to the children array.

We now have a tree of objects, each of which has a name, a value and a group.

How do we create a nice-looking bubble chart with them?

First we set up the d3.layout.pack function so it can calculate the size and position of the bubbles. We do this using:

var bubble = d3.layout.pack().sort(null).size([960,960]).padding(1.5);

If you were to now call …

bubble.nodes(root)

… and take a look at the output, you would see that each “leaf” object now has several new properties: “x”, “y” and “r”. The “x” and “y” properties are where within the chart to position the bubble for the object, and the “r” property is the radius of the bubble.

(How this is actually drawn is up to you – you could equally well take this information and draw the whole thing using hexagons or squares or spheres. But let’s stick to circles for now.)
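
For example, you could log the first leaf to see the layout in action (a quick sketch; x, y and r are the properties d3.layout.pack adds, but the variable name leaves is just illustrative):

    var leaves = bubble.nodes(root).filter(function(d) { return !d.children; });
    // Each leaf now carries layout properties alongside our own name/value/group
    console.log(leaves[0].x, leaves[0].y, leaves[0].r);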

Next we need to create a graphic for the chart in our HTML page. D3 can make this for us like so:

    var svg = d3.select("body")
                .append("svg")
                .attr("width",960)
                .attr("height", 960)
                .attr("class","bubble");

For each “leaf” we then need to create a graphical element. D3.js uses a very clever approach for this:


    var node = svg.selectAll(".node")
                  .data(bubble.nodes(root)
                  .filter(function(d){ return !d.children;}))
                  .enter()
                  .append("g")
                  .attr("class","node")
                  .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });

The key thing here is the data() method: we pass it the nodes computed by the bubble layout we created earlier from our root object. (We also filter out the root node itself, as we’re not interested in drawing that, just the individual leaf nodes.) For each entering leaf node, enter() then lets us append a <g> element to the <svg> element in our HTML document and apply a transform to place it at the correct x and y coordinates within the chart.

This still doesn’t draw anything interesting, so let’s make a circle for each node, and give it a label:

    var colour = d3.scale.category10();
    // One circle per node, sized by the computed radius and coloured by team
    node.append("circle")
        .attr("r", function(d) { return d.r; })
        .style("fill", function(d) { return colour(d.group); });
    // Centre each person's name on their bubble
    node.append("text")
        .attr("dy", ".3em")
        .style("text-anchor", "middle")
        .text(function(d) { return d.name; });

The result of all this is a nice diagram! Click to view it full size; you can also see the live version here.

A bubble chart

The complete source code for this How To guide can be found on GitHub.

If you’d like to know more about data visualisation, you can get in touch with us at researchsupport@it.ox.ac.uk.

Posted in Data modelling and migration

How to: convert Google Spreadsheet JSON data into a simple two-dimensional array

In a previous post I explained how to extract JSON data from a Google Spreadsheet via an API call.

However, when you actually get the data, the JSON isn’t really in the kind of structure you would imagine. Instead of a matrix of rows and columns, Google returns an RSS-style linear feed of “entries” for all of the cells!

So how to convert that into something that you can use in D3.js or R?

We need to iterate over each entry in the feed and push the values into an array, moving to a new “line” in the array each time we get to a cell that is at the beginning of a row in the spreadsheet. I’ve written a JavaScript function to do the necessary work; you can get the code on GitHub.
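
The core of it looks something like this (a minimal sketch rather than the exact code from the repository; the function name is illustrative, and it assumes the cells feed shape where each entry describes one cell via its gs$cell property):

    function cellFeedToArray(spreadsheetdata) {
      var rows = [];
      var entries = spreadsheetdata.feed.entry;
      for (var i = 0; i < entries.length; i++) {
        var cell = entries[i].gs$cell;
        // A cell in column 1 marks the start of a new row in the spreadsheet
        if (rows.length === 0 || Number(cell.col) === 1) {
          rows.push([]);
        }
        rows[rows.length - 1].push(cell.$t);
      }
      return rows;
    }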

Running this function we can then get the values from the resulting array using something like:

data[1][5]

Note that the function doesn’t differentiate the labels in a header row (which is something you’d commonly see, and which R would usually expect) from the data itself, so there is definitely room for improvement in the function.
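
If you need that, one simple refinement (hypothetical, not part of the posted function) is to peel off the first row as labels:

    var header = data[0];     // column labels from the spreadsheet's first row
    var rows = data.slice(1); // the remaining rows are the actual data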

Posted in Data modelling and migration

How to: get data out of a Google spreadsheet using JSONP

Google Drive spreadsheets are a great way to collect data using their handy forms, but the visualisation tools sometimes aren’t sufficient. What if you wanted to do a visualisation using d3.js for example?

Google has an API for obtaining the data using JSONP. This means the data is exposed in JSON format wrapped in a call to a function you name, which gets around the “Same Origin Policy” restriction on accessing data from a different web domain.
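
In other words, the response the browser receives is not bare JSON but a small script that invokes your callback (a schematic illustration, with the payload abbreviated):

    // What the JSONP response effectively looks like:
    sheetLoaded({"feed": {"entry": [ /* one entry per cell */ ]}});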

To do this, you need to:

  1. Make your spreadsheet public
  2. Get the ID of your spreadsheet and worksheet
  3. Include a script tag calling the API
  4. Write a callback function to use the data

1. Make your spreadsheet public

In Google Drive, go to File > Publish to the web … and click Publish. You can only obtain data from a sheet that is publicly readable.

2. Get the ID of your spreadsheet and worksheet

This isn’t as obvious as it sounds. Your spreadsheet URL will contain some sort of long identifier, but this isn’t the only information you need – you also need the shorter worksheet ID as well.

You can find the worksheet ID by calling a URL constructed like so:

https://spreadsheets.google.com/feeds/worksheets/your-spreadsheet-id/private/full

Note that you must be logged in to Google Drive to do this, or the URL will return nothing at all!

Calling this URL will return an RSS feed that will contain something like this:

<entry>
<id>https://spreadsheets.google.com/feeds/worksheets/your-spreadsheet-id/private/full/o10c0rt</id>
<updated>2014-10-08T11:35:31.493Z</updated>
<category scheme="http://schemas.google.com/spreadsheets/2006" term="http://schemas.google.com/spreadsheets/2006#worksheet"/>
<title type="text">Form Responses 1</title>

The information you need is in the <id> tag. The last part of the id is the worksheet identifier.

3. Include a script tag calling the API

In your HTML, include a script tag, like so:

<script src="https://spreadsheets.google.com/feeds/cells/your-spreadsheet-id/your-worksheet-id/public/values?alt=json-in-script&callback=sheetLoaded"></script>

Obviously you need to replace “your-spreadsheet-id” and “your-worksheet-id” with the values from the previous step.

4. Write a callback function to use the data

In your JavaScript code you need to implement the callback function named in the script tag; for the above example we need something like:

function sheetLoaded(spreadsheetdata) {
 // do something with spreadsheet data here
 console.log(spreadsheetdata);
}
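
To see what you’ve got, you can drill into the feed: each spreadsheet cell arrives as one entry whose gs$cell property holds its row, column and value (a sketch based on the cells feed format; worth verifying against your own data):

    function sheetLoaded(spreadsheetdata) {
      // Inspect the first cell of the feed
      var cell = spreadsheetdata.feed.entry[0].gs$cell;
      console.log("row " + cell.row + ", col " + cell.col + ": " + cell.$t);
    }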

Job done! Now you can actually start doing the clever D3 visualisation part…

Posted in Data modelling and migration

Research at Risk: Report from the Jisc Co-Design Workshop

On the 22nd of September, I was invited to a Jisc Co-Design event on “Research at Risk”, with participants from organisations such as UCISA, RLUK, RUGIT, DCC, and of course some universities, including yours truly representing both the University of Oxford, and also as a special bonus the University of Bolton.

What follows are my completely informal and unofficial notes of the event.

Looking for the Gaps

This was about the need to properly map the entire architecture for RDM to identify where the gaps and joins are to inform decision making at different levels.

One issue we face is that many research data management solutions are barely past the prototype stage. Rather than build completely new services, it would make more sense to look at the solutions that are closest to matching requirements, such as CKAN and HYDRA, and work together to make them complete. The OSS Watch report on RDM tools highlighted the fact that many of the tools developed had very poor sustainability prospects, linked to the fact that they were developed with a small local user base and without long term sustainability planning. The next step could be to focus on a few solutions and ensure they are fit for purpose and sustainable.

Likewise, on the storage side there is already OwnCloud, which several institutions are interested in developing further. As an open source project, we can work on this collaboratively to ensure we have a good solution, while Jisc can work on the matching service offering for institutions that don’t have their own data centre. Anyway, more on this later.

At a higher level, this whole area seems to be about taking stock of where we are now, which seems a pretty sensible thing to do.

What we know

Similar to the previous topic, but really about putting together the advice, guidance and lessons learned. UCISA were very keen on this one.

An interesting thing I learned about here was the “4Cs” cost exchange project that Jisc (or DCC, I wasn’t sure which) are engaged in, which seems to be principally about baselining IT costs against peers, including in areas such as RDM.

The Case for RDM

There seemed to be consensus that there is a gap between ideology and practice, and that while there is plenty of talk around mandates from the Research Councils and journals, there hasn’t really been very much from the researcher perspective, and this is something that needs to be addressed. So making the case, not from a mandate perspective, but from a benefits to researchers perspective.

One issue here is the different demands on research data depending on whether the intent is compliance, validation, reuse, or engagement. Making data truly reusable requires more effort than simply dumping it in an archive, but can also yield more benefits to researchers.

Changing Culture

This was seen as probably a good thing longer-term, but it wasn’t clear exactly what it would involve, or what role Jisc would play. For example, the previous three items taken together might constitute actions leading towards culture change. This also encompassed areas such as treating RDM as a professional skill and providing support for developing its practice. Another practical area is information sharing between institutions.

Making data count

This idea was all to do with metrics and measures, though it wasn’t clear what those metrics might look like. There could be some progress by combining existing measures and sources, such as DataCite, and then seeing where that leads.

Simplifying Compliance

There was an amusing comparison between RDM compliance and Health and Safety. However, we have the current situation where compliance is not standardised between the Research Councils, or between the Councils and the journals that mandate RDM. Help and support on compliance is also outdated, or difficult to find.

Another topic we discussed was something I’ve dubbed (wearing my University of Bolton hat) as “barely adequate research infrastructure for institutions that only give half a toss” – basically, many Universities are not research intensive and do not have dedicated resource in either Library or IT Services to support RDM, or even Open Access. Instead, a simple hosted solution with a reasonable subscription rate would be absolutely fine.

What was interesting is that some of the research intensive universities were also keen on this idea – can we just have ePrints+CKAN+DataCite+etc all set up for us, hosted, Shibbolized, configured to meet whatever the Research Councils want, and ready to just bung on a University logo?

Simplifying Data Management Plans (DMP)

There seemed to be a general feeling that it isn’t clear who should be writing DMPs, or why they should be doing it. In some cases it seems that research support staff are producing these instead of researchers, which seems sensible. The general feeling is that creating a DMP is something you do for someone else’s benefit.

Some institutions have been customising DMPOnline. Interestingly, one area that gets explored is “model DMPs” or “copy and paste”. I somewhat cheekily suggested a button that, once pressed, generates a plausible-sounding DMP that doesn’t actually commit you to anything.

In any case, if compliance requirements are simplified and standardised (see above) then this would also in effect simplify the needs for DMPs.

Other ideas explored included being able to export a DMP as a “data paper” for publication and peer review, though I’m not sure exactly how that contributes to knowledge.

So again we have the issue of what’s in it for researchers, and the tension between treating RDM as a hoop to jump through, or something with intrinsic benefit for researchers.

Metadata

There was a case made for this by DCC (Correction – actually it was Neil Jacobs – thanks Rachel!), which is basically around standardising the metadata profile for archiving research data: working on DataCite, CRIS, PURE and ORCID, achieving consensus on a core schema, and so on.

This sparked off a debate, my own contribution being “it may be important for some, but don’t start here” which seemed to resonate with a few people.

There was also the interesting area of improving the metadata within the data itself – for example making the labels within data tables more explanatory to support reuse – rather than just adding more citation or discovery metadata.

Storage as a service

This was the only major “techie” discussion, and it was interesting to see how much convergence there was between the Universities present at the event. So we had the issue of how we work with Dropbox (which many researchers really like), through to how we make best use of cloud storage services as infrastructure.

I asked whether Jisc had met with Dropbox to discuss potential collaboration, and apparently they have, though it seems not with great success. This is a pity, as one potential “win” would be for researchers to be able to use the Dropbox client tools but synchronised with a UK data centre, or even institutional data centres.

Another interesting dimension was that several institutions have been looking into OwnCloud as a Dropbox replacement, and there was strong interest in collaborating to add any missing capabilities to OwnCloud (it’s open source) to bring it up to parity. Maybe that’s something Jisc could invest in.

Preservation

I hadn’t met Neil Grindley before, and was surprised to see he bore more than a passing resemblance to the late SF author Philip K. Dick. But anyway, on to the topic.

Preservation (and managed destruction) is one of those topics that people are either passionate about or that sends them into a kind of stupefied trance. I’m one of the latter, I’m afraid. It’s probably very important.

The only thing I can add to this is that the issue of preserving not just the data, but the software needed to process it, is not something that has been considered as part of the scope of this programme by Jisc.

It’s also nice that they are considering using hashes to verify data integrity.

The Voting

Using the ultra scientific method of putting numbered post-it notes onto sheets of paper, the ranking of ideas looked like this:

Activity area                        | Raw data    | Votes | 1 | 2 | 3 | 4 | 5
Looking for the gaps                 | 224535343   | 9     | 0 | 2 | 3 | 2 | 2
What we know so far                  | 5245154     | 7     | 1 | 1 | 0 | 2 | 3
Case for sharing research data       | 1144221211  | 10    | 5 | 3 | 0 | 2 | 0
Changing the culture of research     | 4           | 1     | 0 | 0 | 0 | 1 | 0
Measuring the impact                 | 215125      | 6     | 2 | 2 | 0 | 0 | 2
Simplifying compliance               | 34232333411 | 11    | 2 | 2 | 5 | 2 | 0
Simplifying data management planning | 255355213   | 9     | 1 | 2 | 2 | 0 | 4
Data about data                      | 35525       | 5     | 0 | 1 | 1 | 0 | 3
Sharing the costs of data storage    | 32444       | 5     | 0 | 1 | 1 | 3 | 0
Data for the future                  | 12541143    | 8     | 3 | 1 | 1 | 2 | 1

Interestingly enough, although “Storage” wasn’t ranked highly, it was the topic that seemed to spark the most discussion amongst the university representatives after the event closed, and several of us pledged to work together on our various approaches to solving these issues.

Funding?

Of course, it being a Jisc event, we wanted to know if there was going to be any funding!
The good news is that, as well as a number of larger projects already funded through capital funding (e.g. BrissKit), there are plans afoot for a “Research Data Spring” competition for innovation projects, I guess following a similar pattern to the successful Summer of Student Innovation competition but targeted at researchers and IT staff in universities.

More!

If you’d like to know more about this event, and read the “official” notes, then just get in touch with us at researchsupport@it.ox.ac.uk.
Posted in News

Jisc Summer of Student Innovation

This summer, Jisc ran its second Summer of Student Innovation. Via the Jisc Elevator, students can pitch their idea to improve the student experience using technology. Successful projects receive £5000 and mentorship over the summer to develop and realise their idea. Around 20 projects received funding this year, covering student induction, study tools, learner feedback, and open access. Some of the successful projects included Open Access Button, Lingoflow, and Vet-Revise.


For the second year running, members of Research Support were invited to attend the SOSI Summer School events, providing advice and guidance on technical implementation, legal issues and business models to the projects. Advice provided this year included introducing version control and good software engineering practices, identifying potential commercial partners, exploring different sustainability options and business models, and assessing the technical feasibility of software designs. If your research group could use advice in these or related areas, please contact researchsupport@it.ox.ac.uk to discuss your needs.

Image Credit: Innovation Lab

Posted in News

Software Sustainability – Working with WWARN

Over the summer we defined several new Research Support specialisms, covering software selection, intellectual property, and software sustainability. These derive in part from our experiences in running OSS Watch, the national centre of expertise in open source software and open development.

Software Sustainability is all about delivering long-term value from investments in software, particularly where software is being developed as part of funded research activities. Researchers often need to build software tools as part of their work on projects, but what happens after the project ends?

Our advice is: don’t wait to find out! Instead, it’s important to invest in sustainability as early as possible, preferably right at the start of a project, or at least long enough before the project ends to ensure there are both the time and the resources to develop a credible sustainability plan.

Over the summer we were contacted by WWARN – the World Wide Antimalarial Resistance Network – to talk about the sustainability of several pieces of software they had developed.

The WorldWide Antimalarial Resistance Network (WWARN) is led by Dr Philippe Guerin and is based in Africa, Asia and the Americas, with its coordinating centre and informatics based in Oxford at the Centre for Tropical Medicine and Global Health. A key part of WWARN is its platform for collecting and mapping reports of resistance to antimalarial drugs, and the web tools used for displaying this data on the WWARN website, such as the WWARN Molecular Surveyor.

A screenshot of the WWARN Molecular Surveyor

The WWARN team had put a lot of effort into these tools, and were keen to see how they could continue to be developed and used, either to support researchers looking at other diseases, or for completely different fields of research where there may be similar needs.

To do this, they needed to develop a strategy for sharing the software, engaging with new contributors from outside WWARN, and a model for governing its future development.

Mark Johnson and I from Research Support have been working with WWARN for several months now, laying the groundwork for the release of the software and supporting outreach activities. WWARN have selected a license (BSD), put their source code on GitHub, created documentation, and developed an advocacy plan for increasing awareness of the project and attracting users and developers.

We hope that this effort will provide value for the WWARN project in terms of driving further improvements to the software, and ensuring it is viable long into the future.

We’re also keen to see adoption and use across Oxford – we’re sure WWARN aren’t the only researchers needing this kind of visualisation software, and the more groups that use it, the more sustainable the software becomes.

If you’re interested in using the software, post a message to the Maps Surveyor Google Group.

If your research group is planning to develop software, or has already done so and wants to talk to us about sustainability, then contact us at researchsupport@it.ox.ac.uk.

Posted in News

3-month report: June to August 2014

Highlights

IT Services internship: Suzy and Adelina setting up for an interview

Congratulations to the ORDS project team! We can now offer a service for researchers to securely create, populate and share relational databases. This is the culmination of effort from across IT Services over several years, during which the team navigated a complicated funding environment. James W, Meriel and Mark will now promote and support researchers in using the service as part of an early-life support IT service project.

This has also been another great year for the Digital Humanities Summer School. I won’t say much here because James C has written a comprehensive DHoxSS 2014 report himself. Needless to say, congratulations to James, Sebastian, Scott, Kathryn W, and all the teachers involved in giving workshops.

We hosted Adelina Tomovo and Suzy Shepherd as part of the IT Services internship programme. Suzy and Adelina worked with Rowan to create the new Open Spires website. The website builds on what we learnt on the Jisc Software Hub project last year and uses the same Drupal cataloguing admin interface. The new site is more focused on marketing open resources, and much of the effort went into making a compelling film and a visually appealing, well-written site. This is the first step for the Open Spires project, and Rowan will be taking it forwards over the coming months.

Progress against plans for last 3 months

Engagement statistics, June to August 2014

There’s a considerable drop in engagement this quarter compared with others, but this is to be expected during June to August when most researchers are away. We use this period to focus on projects, events and web resource development (and indeed go on holidays ourselves).

  1. With regards to project planning: (1) we are not involved in the Matrix replacement and X5 updates; (2) the ORDS ELS project has been approved; (3) the OxGarage project brief is about to be started; (4) the Oxford BNC-web PID will be submitted to the next review; (5) we will take an updated Live Data PID to the RDM working group in November; (6) we are not involved in the researcher dashboard at this stage; (7) the DataStage project has been renamed ReDDs, its scope narrowed to focus on deposit from ORDS to ORA:Data, and the project request has been approved.
  2. DHOxSS 2014 was a resounding success, including special recognition for the organisers from the Director.
  3. The first phase of the open Oxford portal, called OxSpires, went live at the end of August.
  4. The CatCore, Diva and OxGame projects are closed.
  5. James has passed the details for another ‘Things to do with data’ series to the ITLP team.
  6. ORDS is now live.

Plans for next 3 months

  1. Continue to define and advertise our specialist services, and organise ourselves into teams according to the demand for each one.
  2. Grow the ORDS user community (and fix any bugs that surface)
  3. Successfully deliver our parts of current projects: VALS, Web CMS Service – Phase 1, DiXiT
  4. Try to get new projects funded, e.g. Live Data, OxGarage, Oxford BNC-web, Redds
  5. Contribute to the StaaS project by delivering the oxdropbox work package
  6. Initiate a new internal project (in the first instance) to work out how to deliver a whole-lab RDM solution such as the Hub (see the Jisc Neurohub project)
  7. Deliver our communications plan
  8. Work with comms team to contribute to the new IT services discovery and engagement site
  9. Work with management to hone the focus of the research support team, i.e. at the workshop on 28th October
Posted in News, Reports