Author Information: Tereza Stöckelová, Institute of Sociology, Czech Academy of Sciences, firstname.lastname@example.org
Stöckelová, Tereza. “Unspoken Complicity: Further Comments on Castellani, Pontecorvo and Valente and Rip.” Social Epistemology Review and Reply Collective 4, no. 2 (2015): 17-20.
The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1TK
Please refer to:
- Castellani, Tommaso, Emanuele Pontecorvo and Adriana Valente. “Epistemological Consequences of Bibliometrics: Insights from the Scientific Community.” Social Epistemology Review and Reply Collective 3, no. 11 (2014): 1-20.
- Rip, Arie. “On Epistemic Effects: A Reply to Castellani, Pontecorvo and Valente.” Social Epistemology Review and Reply Collective 4, no. 1 (2014): 47-51.
Image credit: Grufnik, via flickr
As academia changes, it is vitally important to reflect on and study these changes empirically. While bibliometrics, research assessment exercises and, more generally, modes of publishing constitute a major aspect of these changes—and many of the unsettling insights offered by the paper under discussion resonate with my own research—the tendencies, as I will argue, are more differentiated, varied and ambiguous than Castellani, Pontecorvo and Valente’s paper (2014) might suggest. In my commentary, I will further develop selected points made by Rip (2014), concerning methodology, the active role of scientists in the proliferation of measurements in contemporary research systems, and the situated nature of practices of valuing academic performance. I will draw upon my and my colleagues’ research in the Czech Republic (Linková and Stöckelová 2012; Stöckelová 2012, 2014; Felt and Stöckelová 2009; Dvořáčková et al. 2014), where—similarly to Italy—bibliometrics and quantitative research evaluation have recently come to play a major role.
Castellani, Pontecorvo and Valente spoke with a “multi-disciplinary panel” of respondents, six men and six women, including “only experienced scientists, at different stages of their career” (4). Whom are these six men and six women meant to represent in the study? On the one hand, the disciplinary diversity of the panel suggests that the study could point to variability across disciplinary contexts. In practice, however, this diversity only seems to be mobilized to support a unified picture of effects and trends; differences in views, however subtle, are rarely mentioned (7), much less inquired into and explained. The same is true for gender. Why bother ensuring gender parity in the sample if no gender aspects are then highlighted? Or do the authors want to suggest that gender does not matter at all? If so, such a conclusion would merit at least a brief mention, as it would contradict the findings of a number of studies arguing otherwise (see, for example, EC 2004; Felt 2009).
On the other hand, there is a relative uniformity to the sample, which includes “only experienced scientists”—the epistemic consequences of which are, I believe, insufficiently considered. Established researchers cannot simply be taken as un-situated interlocutors of their disciplines, or of academia as such, by the sheer fact of their experience. As Linková (2014), for example, empirically elaborated in her ethnography of a Czech research institute in transition, positions in the research system (as well as gender) matter significantly (though, again, not uniformly) for researchers’ attitudes and their actual manoeuvring vis-à-vis publishing and research performance criteria. Researchers play their own games in and with bibliometric evaluations, drawing upon them or distancing themselves from them in order to defend and legitimize their own research and working modes. Yet researchers do not constitute a homogeneous group. This brings me to the second point I want to stress.
In scientists’ “folk theories” (Rip 2011), bibliometric-based assessment often figures as an external force imposed upon academia by politicians, bureaucrats or “society”. However, as we show for the Czech Republic in Linková and Stöckelová (2012), it was precisely a group of established senior natural scientists who took the initiative in the 1990s to introduce bibliometric research evaluation into the Czech research system—at first in some of the natural science institutes of the Czech Academy of Sciences and later on at the national level. They argued in terms of depoliticizing science (freeing it from the Communist legacy), increasing the quality of Czech science and distributing scarce resources more effectively within and between academic disciplines. Bibliometrics was mobilized as an impartial arbiter of domestic disputes over quality. It was only later—when the industrial lobby started to successfully influence the parameters of research evaluation at the national level—that the original advocates of bibliometric assessment became critical.
This is not to deny that, in many contexts and for many researchers, bibliometrics and the new expectations and conditions of publishing create an environment they have to adjust to, in spite of their doubts and criticism (regarding premature publication, “salami publishing” or the biased logic of topic selection) (Linková 2014). However, as academics, we should be much more reflective about our own role in producing and reproducing the logics of performance and “audit cultures” (Strathern 2000) in current research systems (and, beyond them, in other sectors of public policy). Such an approach would highlight our agency, for it is we who—in our different capacities—structure the research landscape for others and for ourselves. Most of us are not only researchers, authors and grant applicants, but also editors, reviewers, funding agency panel members, research managers, policy advisors, public intellectuals and, last but not least, readers who cite others’ work. In all of these capacities, we must assume responsibility for contributing to the reasonable development of contemporary science.
I would like to conclude on a personal note and share a perhaps idiosyncratic but, I believe, rather telling experience—at least for the Czech academic environment. In scientists’ “folk theories”, as well as in some of the STS literature, bibliometrics, with its emphasis on quantified research performance, occupies the place of a new hegemony. As I noted in the opening paragraph, bibliometrics is a major logic (re)ordering today’s science. However, it is not the only one. I believe that other values and modes of ordering, often associated with the “good old days” of slow, peer-based and autonomous science, are strategically interwoven with this performance logic and mobilized for the governance of today’s academics.
While analysing this situation as a sociologist of science (Stöckelová 2014), I experienced it first-hand when undergoing the process of becoming an associate professor at my faculty (the so-called “habilitation”). Candidates submit a file containing the habilitation thesis, curriculum vitae, publications, citations and teaching record, which is evaluated by a habilitation committee of five members nominated by the Scientific Council of the faculty. On the basis of the submitted file and three external reviews of the habilitation thesis, the committee issues a statement supporting or rejecting the candidature. The candidate then gives a 20-minute lecture before the Scientific Council, followed by a debate, and the Council votes on awarding the habilitation (the associate professorship). And here comes the surprise: the vote is secret and requires no justification. In my case, although all the bibliometric criteria were amply met, the habilitation committee gave me its unconditional support and my lecture was positively evaluated, I did not obtain the absolute majority of votes needed for the habilitation to be awarded (surprisingly, absent members of the Council count, in effect, as votes against the candidate). I was left only to speculate about the reasons.
This process is completely legal under the 1998 University Law. At first glance, this might look like a residue of the “pre-bibliometric” age, but I believe that the unaccountable judgement of the Scientific Council actually works as a welcome instrument for the current academic establishment, conveniently supplementing bibliometric and evaluation tools. It gives the establishment a wider register of possibilities for legally managing academic staff, in which bibliometric performance can sometimes be mobilized as an ultimate, objective measure of performance but, at other times, may play no role at all. Notably, the current amendment to the University Law submitted to Parliament does not change the habilitation procedure. The Czech case may be extreme, but I believe it points to a more widespread mechanism of governance in which “trust in numbers” (Porter 1995) constitutes but one element that can be mobilized or demobilized to achieve particular goals. We should attend to bibliometrics and its effects on knowledge production; but our attention should also remain on the other logics and tools governing contemporary academia, to which bibliometrics might easily lend a hand.
Castellani, Tommaso, Emanuele Pontecorvo and Adriana Valente. “Epistemological Consequences of Bibliometrics: Insights from the Scientific Community.” Social Epistemology Review and Reply Collective 3, no. 11 (2014): 1-20.
Dvořáčková, Jana, Petr Pabian, Simon Smith, Tereza Stöckelová, Karel Šima and Tereza Virtová. Politika a každodennost na českých vysokých školách: Etnografické pohledy na vzdělávání a výzkum [Politics and everyday life in Czech universities: Ethnographic perspectives on teaching, learning and research]. Praha: Sociologické nakladatelství, 2014.
European Commission. Gender and Excellence in the Making. Luxemburg: Office for Official Publications of the European Communities, 2004.
Felt, Ulrike and Tereza Stöckelová. “Modes of Ordering and Boundaries that Matter in Academic Knowledge Production.” In Knowing and Living in Academic Research: Convergence and Heterogeneity in Research Cultures in the European Context, edited by Ulrike Felt, 41-124. Prague: Institute of Sociology of the Academy of Sciences of the Czech Republic, 2009.
Linková, Marcela. Disciplining Science: The impacts of shifting governmentality regimes on academic research in the natural sciences in the Czech Republic. Doctoral dissertation. Prague: Faculty of Social Sciences, Charles University, 2014.
Linková, Marcela. “Unable to Resist: Researchers’ Responses to Research Assessment in the Czech Republic.” Human Affairs: Postdisciplinary Humanities & Social Sciences Quarterly 24, no. 1 (2014): 78-88.
Linková, Marcela and Tereza Stöckelová. “Public Accountability and the Politicization of Science: The Peculiar Journey of Czech Research Assessment.” Science & Public Policy 39, no. 5 (2012): 618-629.
Porter, Theodore M. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press, 1995.
Rip, Arie. “Science Institutions and Grand Challenges of Society: A Scenario.” Asian Research Policy 2, no. 1 (2011): 1-9.
Stöckelová, Tereza. “Immutable Mobiles Derailed: STS and the Epistemic Geopolitics of Research Assessment.” Science, Technology & Human Values 37, no. 2 (2012): 286-311.
Stöckelová, Tereza. “Power at the Interfaces: The Contested Orderings of Academic Presents and Futures in a Social Science Department.” Higher Education Policy 27 (2014): 435-451.
Strathern, Marilyn, ed. Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy. London, New York: Routledge, 2000.