Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk
The PDF of this article exists for purposes of readability and portability. Please see the 13 May 2015 edition of Research Professional for citation. Shortlink: http://wp.me/p1Bfg0-27U
Author’s Note:
This article originally appeared in the 13 May 2015 edition of Research Professional, a UK-based website associated with Research Fortnight, the main newsletter for British academic researchers. It is based upon work supported by the US National Science Foundation under Grant No. 1445121. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).
Image credit: Wolfram Burner, via flickr
In this article, I write as the UK partner of an exploratory project funded by the US National Science Foundation to critically evaluate current approaches to the broader ‘impacts’ of research. Our aim is to develop an agenda for understanding both the ‘how’ and ‘how much’ of the impact that humanistic, scientific and technical research has on societal well-being. By the time of our capstone Washington workshop in February 2016, we should be able to address systematically the bottom-line question of all research funding policy: What counts as ‘value for money’?
In this context, the UK is seen as the world’s laboratory for testing alternative approaches to research policy, most notably through iterations of what is now called the ‘Research Excellence Framework’.
To be sure, the UK’s readiness to experiment has been a mixed blessing. It reflects strong cross-party agreement on the centrality of universities to the emerging ‘knowledge economy’, combined with a failure of the academic community to come up with compelling alternative accounts of the value of their knowledge for society. The battles fought over the meaning of ‘impact’ epitomize this tension.
Seen Stateside, the prevalent understanding of ‘impact’ in the UK appears skewed and restricted, though it reflects the interests of the dominant voices in the discussion. Thus, citations in policy documents, business plans and civil society projects count as ‘impact’, but citations in the media – mass or social – and on academic course reading lists do not.
Academics often fail to contribute constructively to the discussion, repeatedly returning to the idea that research needs to pass the review of academic peers before being deemed worthy of having impact, even though the exact relationship between peer review and research impact remains murky.
UK government policy has effectively been a compromise, installing a schizoid research assessment system in which the most vocal external stakeholders and the most recalcitrant elite academics are each given their due. Thus, there is little expectation that research submitted as having had ‘impact’ will necessarily be research that peers have deemed most meritorious.
This situation strikes us as very strange – much better explained in terms of tribal politics than of a society’s collective intelligence. Consequently, we have argued that a broad remit for this project is necessary.
The major questions surrounding research impact that we plan to address include the following:
1. What are the mechanisms by which research comes to have ‘impact’ in some specified sense?
2. Can research impact be measured? If so, to what extent and with which instruments?
3. Does the very use of the term ‘impact’ for the consequences of research skew where to look for the consequences and how to think about them?
4. How does one assess the impact of research that aims to inform and enlighten the public but without a specific policy target?
5. Should ‘research impact’ include the impact that research has on the non-research functions of academia – especially the design of curricula?
6. Are adequate procedures in place to learn from the results of the various experiments in assessing research impact?
Cutting across all these questions is what may be called the ‘temporal horizons’ of research impact: How long should one wait before deciding whether research has had sufficient impact? There are radically polarized views on this matter amongst both academics and policy makers.
Some believe that research that fails to alter either the research frontier or the policy agenda within a normal assessment cycle – say, five years – has simply had no impact. Time, then, to cut losses and move on. Others take the exact opposite view, arguing that real research impact is marked by a very long half-life, which implies that most of the impact should not be felt in a given assessment cycle. On this view, research is more like an investment that returns a stream of benefits (however defined) than a product or service whose value is realised soon after delivery.
It is tempting to stereotype these contrasting attitudes in terms of, on the one hand, the harder or ‘proper’ sciences, and on the other, the softer or more humanistic disciplines. However, both attitudes can be found in both sets of fields.
What perhaps better explains the difference is whether one is talking about, on the one hand, the generation of data, and on the other, the validation of theories. While they are both necessary and complementary features of any form of research, they proceed according to quite different rhythms.
Data-driven research tends to have a short shelf life. It must prove its relevance to theory or policy quickly before it is superseded by better studies. In contrast, theories typically need a long lead time to fully mature, during which they generate data streams of their own.
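To make this contrast concrete, the following minimal sketch compares how much of a project’s benefit stream falls inside a single assessment cycle for a short versus a long half-life. The half-lives, discount rate and five-year window are hypothetical assumptions chosen purely for illustration; they are not figures drawn from our project or from UK policy.

```python
# Illustrative sketch only: the half-lives, discount rate and window length
# below are hypothetical assumptions, not empirical estimates.

def benefit_stream(total, half_life, years):
    """Annual benefits that decay exponentially with the given half-life (in years)."""
    decay = 0.5 ** (1.0 / half_life)
    return [total * (1 - decay) * decay ** t for t in range(years)]

def share_in_window(stream, window, discount_rate=0.03):
    """Fraction of the discounted benefit stream realised within the assessment window."""
    discounted = [b / (1 + discount_rate) ** t for t, b in enumerate(stream)]
    return sum(discounted[:window]) / sum(discounted)

horizon, cycle = 50, 5   # a 50-year benefit stream, assessed over a 5-year cycle
fast = benefit_stream(100, half_life=2, years=horizon)    # stylised data-driven project
slow = benefit_stream(100, half_life=20, years=horizon)   # stylised theory-driven project

print(f"short half-life: {share_in_window(fast, cycle):.0%} of benefit inside the cycle")
print(f"long half-life:  {share_in_window(slow, cycle):.0%} of benefit inside the cycle")
```

Under these illustrative assumptions, most of the long-half-life project’s benefit falls outside the five-year window, whereas most of the short-half-life project’s benefit falls within it – which is exactly why the choice of assessment window matters.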
In any case, as we already see, the temporal horizons of research impact need to be considered from several different angles:
As an investment portfolio issue. A funding agency may wish to operate with a diversified portfolio of research projects that yield varying rates of return over time. How do these anticipated rates of return influence the size of funding, especially where budgets are tight and highly accountable?
As a credit-blame assignment issue. Since ‘research impact’ presupposes that research is having an effect on other things (both inside and outside the research system), it should be possible to say that a line of research has had a positive or negative effect in particular spheres of activity. Is there an optimal time-frame for making such judgements?
As a public relations issue. The fact that research is increasingly being done in order to have impact generates new mediating fields of ‘anticipatory governance’ that have now absorbed the energies of many researchers who study the social impact of science and technology. However, the idea that even theoretical science should be orchestrated to have maximum impact goes back to Galileo and Newton.
As a legitimation issue. Once retrospective measures of impact are permitted, a competitive space is opened for claiming that generally recognized positive impacts in research and society constitute downstream effects of one’s own earlier research. This points to the role of temporal horizons in constraining the plausibility of such impact narratives.
Without expecting to have answered all the questions raised here, we nevertheless hope to provide a roadmap that both academics and policymakers might study as they plan their future attempts to support research with the ‘right’ sort of societal impact.
Categories: Comments
Science policy should take into account, to my mind, the specificity of current science. The boundary between academic science and its theories, on the one hand, and their application in practice, on the other, has become blurred, and the time between the two is close to zero. In sciences such as biomedicine, communication theory, ecology and some others, social problems are included in the process of research from the beginning, and they are solved (or not solved) together with scientific problems. Moreover, the results of research not only help to overcome difficulties in both science and society, but also determine their further development and set new goals. It would be more correct to say that science (in its new image) solves the problems of state policy, and not vice versa. Academic science of the former type is gradually rebuilding its structure.