The return on investment and the impact of STI policies are crucial issues in most countries. This was the topic of a symposium at the 2013 AAAS meeting in Boston. The symposium made me think that behind these discussions on returns and impact there are three crucial questions that might be interesting to address: first, what are “return” and “impact” more concretely, and how have different approaches changed over time? Second, how can impact and return be measured? And third, what do policy-makers actually do with these measurements afterwards (the impact of impact measurement)?
“Return on investment” refers generally to the outputs, most typically the economic outputs, that result from the public investments that governments have made in R&D. These public investments include not only those devoted to specific science and technology programs and to research councils, but also the more basic and direct public expenditures on universities, public research organizations, and research infrastructures.
“Impact” refers more generally to the scientific, social, economic, and other effects of the outcomes of research activities. When discussing “impact” it is important to note that, on the one hand, there are the concrete outcomes of scientific research (e.g. the scientific results of biomedical research), and on the other hand there is the delivery of services and societal solutions derived from those research outcomes (e.g. the medical services and solutions that can be developed from them). These are two different things, both linked to “impact”, and both difficult to measure.
Regarding the historical approach to this issue, it is worth remembering that the voices and political requests for showing the “value for money” of public R&D investments started in the 1990s, along with what has been termed “the new social contract for science”, referring to the new demands that society places on scientists to demonstrate their value and come down from the ivory tower. The focus on returns on investment, and on impact more generally, has perhaps intensified with the financial and economic crisis since 2008 and the significant budgetary cuts in virtually all areas of public expenditure.
“Investment” is a key word here, and the AAAS panelists noted that in his recent State of the Union address, President Obama made an explicit reference to a program that returned more than 100 dollars for each single dollar invested in it.
Yet my second point is that by “return on investment” and “impact”, governments mean very different things. Some look at the scientific outcomes that result from R&D investment in universities and public research organizations. Typically, most countries measure the quantity of research publications and outcomes. More recently, however, countries have also sought to measure the quality of those outcomes, which is more difficult to do. Norway’s bibliometric classification system is a case in point, as it assigns quality ratings to specific scientific journals. Other countries, like the UK, have created sophisticated systems for peer-reviewing the quality of research outcomes.
At the AAAS meeting, Alison Allden gave an engaging and realistic presentation about the new approaches to measuring the impact of research and innovation in the UK, which will soon move from the RAE (Research Assessment Exercise) to the Research Excellence Framework (REF), from 2014 on. The new framework will assess research on the basis of three criteria, broader than before: 65% outputs (outputs per researcher), 20% impact (reach), and 15% environment (contextual data). This has naturally sparked a heated debate in the UK, where universities are the ones who “bear the burden of proof” to show the results of their research. Critical voices have pointed to the difficulties individual universities face in documenting these three dimensions of research assessment, and to the problems of measuring the long-term results of basic and collaborative research. Furthermore, the anti-impact movement in the UK has stressed that measuring impact can sweep aside curiosity-driven research activities.
This brings me to the third and last point today: what do policy-makers actually do with these measurements (the impact of impact measurement)? Whereas the UK systematically uses this assessment mechanism to allocate national public funds to research conducted at universities, in other countries these measurements are not used so extensively. In fact, they might be used very little, other than to stimulate cross-national university benchmarking and ranking. This reluctance of national governments to use these measurements proactively might be rooted in socio-political reasons and debates, but perhaps also in the implicit understanding that “impact” and “return on investment” are, at the end of the day, extremely difficult to determine.
Photo credits: http://www.flickr.com/photos/pyth0ns/4571657460/