The Knowledge Exchange ‘Making Data Count’ Workshop – Metrics for Datasets

Last week I attended the ‘Making Data Count’ Workshop in Berlin, organised by the Knowledge Exchange. The objective of the workshop was to find ways to overcome the data-sharing reticence of researchers by ensuring that they were suitably rewarded for making their data publicly available. In some disciplines, where the mutual benefits of sharing research data are well understood, the culture and practice of data sharing have grown substantially in recent decades. In others, however, the benefits to researchers are less clear, and researchers are correspondingly less keen on data sharing. Geoffrey Boulton saw this lack of recognition of mutual benefit as the crucial barrier to data sharing – once people realize that there is more benefit to be had by collectively managing and sharing data than by working in isolation, that situation will change. He also warned that this cultural shift would be easier to achieve by selling the benefits of data sharing than by emphasizing the need to comply with funder mandates. It was easy, he thought, for researchers to follow the letter of the funders’ law by putting their data online, but if the data lacked sufficient context and documentation to enable others to re-use it, this hardly constituted meaningful sharing.

The workshop went on to gather the various thoughts of the great and the good and to hear about the Knowledge Exchange’s new report: ‘The Value of Research Data’. After that, the workshop split into five task groups to suggest particular activities that might help overcome the problems identified. I joined the ‘metrics for datasets’ break-out group. I must admit that this choice was motivated less by it being an issue that had been keeping me awake at night than by the suspicion that metrics will become an issue once we have DataFinder up and running at Oxford, and particularly its ‘DataReporter’ interface for administrators. Hopefully, others could solve the problems we were going to face before I even found out what those problems were.

Alas, nothing is that straightforward, but our group did have a good debate, and agreed with Jan Brase of DataCite that the first step might as well be to just start recording such metrics as could be recorded, whilst initiating a project or two to explore which metrics might actually prove most useful over the longer term. We also agreed that having one single indicator by which to evaluate data was a very bad idea, and that the effort that had gone into creating a dataset should probably be taken into account when evaluating it.
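
To make that ‘just start recording’ suggestion a little more concrete, here is a minimal sketch, in Python, of the sort of per-dataset record a repository could begin accumulating straight away. The field names and methods are entirely hypothetical – this is not DataCite’s or DataFinder’s actual data model – and they deliberately avoid boiling everything down to a single headline indicator:

```python
from dataclasses import dataclass, field
from datetime import date

# A speculative sketch of a per-dataset metrics record, not any repository's
# actual data model: the point is simply to start capturing what can be captured.
@dataclass
class DatasetMetrics:
    doi: str                       # persistent identifier for the dataset
    first_published: date
    downloads: int = 0             # raw download count
    landing_page_views: int = 0    # views of the dataset's record page
    formal_citations: int = 0      # citations traced in the published literature
    social_mentions: int = 0       # tweets, blog links, and the like
    known_reuses: list = field(default_factory=list)  # free-text notes on documented re-use

    def record_download(self) -> None:
        self.downloads += 1

    def record_reuse(self, note: str) -> None:
        # Re-use is the measure closest to 'value', so it gets its own log.
        self.known_reuses.append(note)


# Example: one dataset accumulating a few events.
metrics = DatasetMetrics(doi="10.5072/example-dataset", first_published=date(2012, 9, 1))
metrics.record_download()
metrics.record_reuse("Reported as the starting point for a follow-up survey")
print(metrics.downloads, len(metrics.known_reuses))
```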

One aspect of the conversation that did concern me a little was how to measure the long-term impact of a dataset. Whilst it is all very well counting the number of links to datasets via Twitter, or ‘likes’ on Facebook, such metrics are to my mind a poor indication of the transformative effects that data sharing can (in theory) have on research, and on society more generally. In the UK, the Research Excellence Framework now seeks evidence of the ‘impact’ of research rather than simply citation counts, a concern which I’m not sure looms so large elsewhere in Europe. And I know from my humanities research background that whilst some research can have a short-term ‘wow’ factor that gets it mentioned in the newspapers, a lot of research is of interest to relatively few people, yet can have a significant cumulative effect on a discipline over 10, 20, or even 50 years. To understand the value of such data, one needs to uncover the deep impact that it has, which is not so easy.

In the final coming-together session all of the break-out groups summarized their thoughts. The ‘Research Data Assessment’ group emphasized that the best measure of the quality of research data was the peer review mechanism, just as it was for articles, and that metrics should not be the basis of research assessment at all. This did not go entirely uncontested. Geoffrey Boulton (again) pointed out that peer review takes place not just at the publication stage, but rather that the most important ‘peer review’ was actually what one’s peers make of one’s research after it has been published. So perhaps metrics were back in the ring?

To my mind, a sensible synthesis of the views expressed would be to recognize that different forms of evaluation measure different aspects of data, just as they do for traditional research outputs. Peer review provides a measure of quality; metrics based on numbers of references provide a measure of interest in the research; and re-use provides a measure of value, especially where data is concerned. This latter element is arguably the most meaningful measure of impact, although simple references and citations are by no means worthless – they just measure something a little different. Even then, data can be re-used to different extents, and not all data re-use is easy to identify or measure.

One of the questions we asked our researchers in our recent benchmarking survey at Oxford was whether they had ever been “inspired to undertake new or additional research as a result of looking at data that has been shared by researchers in the past”. 37% said they had. It’s quite possible, however, that this inspirational data was not even cited in the publications arising from that new or additional research. A narrative means of understanding the impact of shared data may therefore be more meaningful than a numerical one. One possible way to capture such impact might be to email researchers a few months, or even a year or two, after they had downloaded or accessed data from a data repository and simply ask them whether it made any difference to their research, and, if so, how. Of course, this would not capture every dataset they viewed, and, as we already know, researchers are busy people. Why would they spend time describing how they have used other researchers’ data if there were no incentive for them to do so? And so we turn full circle. Suggestions?
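
For what it’s worth, here is a very rough sketch of how that follow-up enquiry idea might be wired up, working purely from a repository’s download log. Every name below is invented for illustration, and the hard part – persuading busy researchers to reply – is not something code can solve:

```python
from datetime import date, timedelta

# Hypothetical download log entries: (dataset DOI, downloader's email, access date).
download_log = [
    ("10.5072/example-dataset", "researcher@example.ac.uk", date(2012, 3, 15)),
    ("10.5072/another-dataset", "colleague@example.ac.uk", date(2012, 9, 1)),
]

FOLLOW_UP_DELAY = timedelta(days=180)  # ask roughly six months after access

def due_follow_ups(log, today):
    """Return the (doi, email) pairs whose follow-up enquiry is now due."""
    return [(doi, email) for doi, email, accessed in log
            if today - accessed >= FOLLOW_UP_DELAY]

def draft_enquiry(doi, email):
    # A narrative question rather than a tick-box: did the data change anything?
    return (f"To: {email}\n"
            f"Some months ago you downloaded the dataset {doi}. "
            "Did it make any difference to your research, and if so, how?")

for doi, email in due_follow_ups(download_log, date.today()):
    print(draft_enquiry(doi, email))
    print()
```

Even a modest response rate to enquiries like these would yield the sort of narrative evidence of impact that download counts and ‘likes’ cannot.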
