On Epistemic Effects: A Reply to Castellani, Pontecorvo and Valente, Arie Rip

Author Information: Arie Rip, University of Twente, a.rip@utwente.nl

Rip, Arie. “On Epistemic Effects: A Reply to Castellani, Pontecorvo and Valente.” Social Epistemology Review and Reply Collective 4, no. 1 (2014): 47-51.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1Qo

Please refer to:

Castellani, Tommaso, Emanuele Pontecorvo and Adriana Valente. “Epistemological Consequences of Bibliometrics: Insights from the Scientific Community.” Social Epistemology Review and Reply Collective 3, no. 11 (2014): 1-20.

Image credit: Elliott Brown, via flickr

It is important to critically consider ongoing changes in scientific practices and institutions, and to do so on the basis of relevant data of sufficient scope and depth. Thus, Castellani, Pontecorvo and Valente’s piece on the epistemological consequences of bibliometrics is to be welcomed. However, I will also be critical in my commentary, in the spirit of organized scepticism. Doing so forced me to think through some of these issues again. After such an Auseinandersetzung (a critical engagement, as the Germans can phrase it), we will all at least be able to articulate the issues better.

In commenting on their text and thinking, I have to make a double detour. One detour is necessitated because the paper is not actually about the consequences of bibliometrics (or better, bibliometrics-based evaluation, as the authors themselves phrase it occasionally), but about the consequences of the proliferation of articles (cf. page 13, top). The other detour concerns the methodology followed by the authors: they interview scientists and scholars, eliciting experiences and opinions from their interviewees, but use these to articulate their own concerns about the topic. At the end, I have to come back to the challenging question of epistemic effects, or at least, epistemic changes linked to other changes.

Speaking of Bibliometrics

Let me start with the second detour first, to get it out of the way as it were. There is the terminological problem that ‘bibliometrics’ and sometimes ‘scientometrics’, names of scientific specialties in their own right, are used to refer to the practice of using quantitative publication-based indicators for evaluation and other decisions in the world of science and scholarship. Thus, the authors use curious phrases like: “This rapid growth of the impact of scientometrics in Italy may highlight some trends which may be hidden in countries where scientometrics represents a long standing habit” (3).

This sentence also reflects a methodological problem: evaluation of science is a recent phenomenon in Italy, so interviewing Italian scientists and scholars may elicit their concerns about ongoing changes in their national system, rather than offering insights into a global phenomenon, the proliferation of journal articles. I did not see much attention to this limitation, though.

The further, and general, problem is that the authors elicit a “common sense” about science, as they phrase it (page 3, bottom), without paying much attention to the quality of this “common sense”. They do allow themselves to pick and choose quotes from the interviews to offer the reader a diagnosis of the situation, which is essentially their diagnosis, but now mouthed by the interviewees: the authors as ventriloquists! I hasten to add that such ventriloquizing is part of the art of writing social science papers, called “letting the data speak for themselves.” How to do that, and do it well, is an art in itself.

We had to address this challenge when we studied the phenomenon of highly cited papers and asked their authors about their experiences and opinions. This is how we phrased it:

The scientists’ views may be regarded as expressions of “folk theories” concerning citations. They are based on their experience of scientific publishing, communication, recognition and rewards, and the stories that are told about citations, rather than on systematic study. There may be “citation myths”, and in some cases these could be identified by comparing the “folk theories” with findings from sociological and scientometric studies. On the other hand, valuable insights and knowledge concerning citations might be found in the “folk theories” which could be the starting point for further studies. This is particularly important because of the lacunae in our understanding of citations and their role in the world of science […] Taken together, the respondents’ answers and comments offer an informal (and fragmented) sociology of citations and their role. This is what we will present and also comment upon (Aksnes and Rip 2009).

Castellani, Pontecorvo and Valente add their own “folk theory” about quantitative evaluation in science and its effects when they report the views of their interviewees. While there is no problem in principle in doing that, they could have been more self-critical. Actually, many of the phenomena they point out derive from the fact of competition in science, and from its particular and curious form, competition between colleagues in which the users of your product are also producers of competing products. Quantitative evaluation in the world of science adds to this competition, and may modify it, but does not drive the proliferation of publications. The term ‘least publishable unit’, and the practice it refers to, were already visible in the 1960s, long before evaluation of science with the help of quantitative publication measures emerged. The authors may thus have led their respondents on to discuss effects of “bibliometrics” while they were actually talking about competition. (Cf. the quote on page 9: “It must be said clearly that one of the aspects of bibliometrics is the speed. I have to get out before my competitors who are working on the same thing.”)

Proliferating Articles and Journals

Here, I’m already moving into my other detour, about the proliferation of publications, and in particular the proliferation of journal articles. As I said, I find this to be the actual topic addressed in Castellani, Pontecorvo and Valente’s paper. If one draws a flow diagram of their argument, the proliferation of journal articles is the block in the middle (cf. also page 2, asking about causes of the exponential growth of scientific production). To the right are its characteristics and to some extent its effects, which are mapped on the basis of the interviews and Castellani, Pontecorvo and Valente’s own considerations. What is missing is a discussion of the function of journal articles as constituting knowledge reservoirs (or archives), other than the remarks on scientific documentation and retrieval (12-13). To the left are the causes and drivers, as well as the enabling conditions and confounding factors. Given this complexity, it is risky to single out one possible cause, and subsequently assume that one can identify consequences of the proliferation and attribute them to that cause (bibliometrics-based evaluation, cf. page 2, bottom). A better approach is to look at the situation as one of intersecting games played by scientists in interaction with other actors (cf. Sindermann 2001). The rules of the games and the strategies of the players determine outcomes (“consequences”).

Looking at the ongoing games that are played, I find it problematic that the emphasis is on quantitative evaluation of science as the driver; cf. the overall thrust of the paper, and explicitly on page 14: “Summing up, the bibliometrics based evaluation has an extremely strong normative function on scientific practices […]: this artificially produces a speed and competition based framework […].”

However, speed and competition were part and parcel of science already in the 17th century, when science emerged as an institution in the Western world (cf. also Hagstrom 1965). Competition for resources, of course, but also for visibility and reputation, with the curious feature of reputation being accorded by competitors, the so-called peers (cf. the analysis of scientific disciplines as reputational organizations by Whitley 1984/2000). It’s not only the scientists who compete and the institutions of science which are geared to competition. Journals (their editors, their publishers) are also competing (this is visible in items on page 8). And by now, nation states are competing, wanting to have excellent universities with high scores on the Shanghai ranking lists (see Rip 2011 for a view on the future of this game of struggling for excellence).

In the tangle of drivers and conditions at the left side of the flow diagram, competition must surely be an important, perhaps the most important, element. One can also think of secular changes, like the emergence of research funding agencies after 1945 and the attendant move towards project funding. Thus, the “cultivation of science” is split up into manageable (and publishable) bits. Quantitative publication-based evaluation of science may be part of another secular change, linked to the rise of new public management in general.

In these two detours, I have shifted the explicit and implicit argument as presented by Castellani, Pontecorvo and Valente, but their main question, about effects on knowledge production, in particular epistemic effects (my preferred term), remains. Castellani, Pontecorvo and Valente are not very explicit here; they just use four categories to cluster respondents’ comments and their own observations (‘Publish or perish’, ‘Choosing the Theme’, ‘Replicating the results’, ‘Lost in the Ocean of Scientific Information’). Assorted effects are mentioned, some more important than others, but there is no overall view. Readers can just take their pick. This allows me to pick up, in my conclusion, on the aspects I find important, in particular the “struggle for excellence”, the shapes in which it occurs, and the tensions at play.

On “Excellence” and Epistemic Effects

I start with a quote from an interview with Paula Stephan, author of an important book, How Economics Shapes Science (Anderson 2012): “[…] institutions and the scientists who work there can become overly reliant on metrics of prestige rather than actually assessing the quality of the research.” In the quote, it is clear that this is the scientists’ and their institutions’ own doing, not the nefarious effect of external evaluation in terms of publications. It’s important to press that point, because Castellani, Pontecorvo and Valente appear to portray the scientists and scholars as hapless victims (their folk theory). That’s why my comments were more critical than appreciative: I wanted to create openings for having another look at their materials and what could be learned from them.

For the question of epistemic effects, the stratification of journals (as good, average, not worthwhile) is an important phenomenon (it’s visible on page 5, bottom). It is unavoidable in this world, but it transforms the question of recognized quality into a question about which journal a paper was published in. There were always informal assessments of journals, cf. the importance of having a paper in Science or Nature. In some disciplines, lists of A, B and C journals were agreed upon, and tenure-track criteria were formulated in those terms, for example having at least one paper published in an A-level journal every two years. Stratification of journals is now proceduralized in terms of bibliometrically derived impact factors. Scientists submitting research funding proposals specify the impact factors of the journals they have published in. Journals try to increase their impact factor so as to attract more (and hopefully better) submissions.

The epistemic effects derive from how scientists anticipate where and how they can publish their work, as Castellani, Pontecorvo and Valente emphasize in the introduction of their paper, combined with the criteria that high-impact-factor journals use, or are perceived to use. They also derive from the related stratification of the knowledge reservoirs of science, so that published findings and insights have differential status depending on the journals in which they were published.

In short, the struggle for excellence, in itself a good thing, is transformed into a struggle to score on certain indicators, like publishing in A-level journals. Scientists are doing that to themselves (competition!), but they are helped by science administrators who like indicators (this is also the conclusion of Aksnes and Rip 2009).

I can imagine that scientists and scholars in Italy, used to relatively protected careers once they were in the system, feel concerned about the administrative violence of publication-based evaluations intruding on them. But as Castellani, Pontecorvo and Valente’s interview data show (already in the selection they offer us in their paper), much occurs anyhow, independently of the recent advent of publication-based evaluations. Thus, their exercise is interesting, but not necessarily as a demonstration of the epistemological consequences of bibliometrics-based evaluations. There’s more work to do.

References

Aksnes, Dag W. and Arie Rip. “Researchers’ Perceptions of Citations.” Research Policy 38, no. 6 (2009): 895-905.

Anderson, Kent. “Interview with Paula Stephan — Economics, Science, and Doing Better.” The Scholarly Kitchen, 11 April 2012. http://scholarlykitchen.sspnet.org/2012/04/11/interview-with-paula-stephan-economics-science-and-doing-better/

Castellani, Tommaso, Emanuele Pontecorvo and Adriana Valente. “Epistemological Consequences of Bibliometrics: Insights from the Scientific Community.” Social Epistemology Review and Reply Collective 3, no. 11 (2014): 1-20.

Hagstrom, Warren O. The Scientific Community. New York: Basic Books, 1965.

Rip, Arie. “Science Institutions and Grand Challenges of Society: A Scenario.” Asian Research Policy 2, no. 1 (2011): 1-9.

Sindermann, Carl J. Winning the Games Scientists Play (Revised Edition). New York: Basic Books, 2001.

Whitley, Richard. The Intellectual and Social Organisation of the Sciences, 2nd edition. Oxford: Oxford University Press, 2000 [1984].



