Feldbacher-Escamilla, Christian J. 2020. “Overflow, Expertise, and the L’Aquila Case.” Social Epistemology Review and Reply Collective 9 (3): 25-33. https://wp.me/p1Bfg0-4TD.
The PDF of the article gives specific page numbers.
- DeVasto, Danielle. 2020. “Exigency and Overflow in the L’Aquila Case.” Social Epistemology Review and Reply Collective 9 (1): 8–11.
- Feldbacher-Escamilla, Christian J. 2019. “A Rational Reconstruction of the L’Aquila Case. How Non-Denial Turns into Acceptance.” Social Epistemology 33 (6): 503–513.
Abstract
In her “Exigency and Overflow in the L’Aquila Case” (2020), Danielle DeVasto comments on my (2019) rational reconstruction of the circumstances of the 2009 L’Aquila earthquake. In earlier work, DeVasto et al. (2016) described the case as one of so-called “overflow”, where matters of fact turn into matters of concern. In her comment, she points out that her understanding of the role of overflow—in the L’Aquila case but also more generally—differs from the way I address it in my reconstruction. In this reply, I provide more details about this difference and argue that it is due to different background assumptions regarding the value-neutrality or value-ladenness of science. However, I will also argue that strong versions of these background assumptions are hard to maintain and that weakening them brings our accounts of overflow closer to each other.
General Reply to DeVasto’s (2020) Comment
To begin with, I would like to thank Danielle DeVasto for her careful discussion of my (2019) reconstruction and for putting it into the broader context of research on the 2009 L’Aquila case. Before turning to the above-outlined discussion of overflow, its role, and how we should account for it, let me briefly reply to two other important points made in her comments.
First, there is of course the question of why one should offer a rational reconstruction of a case that took place ten years earlier, in 2009 (cf. DeVasto 2020, 8). This includes, in particular, the question of how such a reconstruction contributes to the multitude of publications on the topic that have appeared over the years. Regarding the latter, I think that my (2019) reconstruction puts more focus on the details of the minutes and documents of the trials, and that it also links the case more closely to the general discussion of value-neutrality in the philosophy of science than most other contributions have done so far. This is not to say that this is a shortcoming of the other contributions. Rather, it simply means that they focused on other important aspects, such as overall bad policy making (cf. Alexander 2018), scientific citizenship (cf. Pietrucci and Ceccarelli 2019), overflow (cf. DeVasto et al. 2016), etc.
As outlined in the paper—and as will be discussed a bit more below—the debate about the value-neutrality/value-ladenness of science has a very long tradition in the philosophy of science, and only quite recently was it extended by investigations of the communicative role of scientists in science-related public contexts (cf. John 2015). Since this communicative role is generally considered to be the underlying problem of the scientists’ involvement in the L’Aquila case as well, it seems natural and, as the paper argues, also rewarding to link the case to this debate.
Regarding the former question, why still investigate the case ten years after the earthquake took place, I take it that the case is of historical importance and in this sense timeless. After all, scientists were convicted of manslaughter (in the first trial) for their supposed failure to provide an adequate scientific assessment of the situation and to communicate it properly.
The case has been compared to another historical case (“From Galileo to the L’Aquila Earthquake: Italian Science on Trial” by Stuart Clark, The Guardian, October 24, 2012, section “Science”—note that the number of publications on Galileo’s trial of 1633 is still growing). And, what is more, the case has recently served (see references above), currently serves (cf. Rossi et al. 2020), and will go on to serve (a prediction) as a paradigm case for studying the interaction between science, policy making, and society. Besides these general and quite magniloquent reasons, I should also mention more profane ones: the last trial (the appeal) ended only in November 2015; it took some time until I was able to discuss the case in detail with a member of the Italian Istituto Nazionale di Geofisica e Vulcanologia (some of its members were directly involved in the trial) as part of a workshop on this topic; and, of course, many of the materials (particularly those of the trial) are only available in Italian, which slowed down my access to them considerably.
Second, DeVasto rightly points out that claims of mine such as “the scientists’ non-denial of false or unproven hypotheses […] misled the public to read a non-denial as acceptance” are problematic in the sense that they might suggest an active involvement of the scientists in public misinformation (cf. DeVasto 2020, 9). The aim of the paper is to show that the scientists’ involvement in the interaction with the public was never active. However, I also think that their role in the interaction with the authorities is harder to judge.
Here are more details: First, I disagree with DeVasto’s claim (9) that the “public never saw statements of ‘non-denial’.” As she acknowledges (10), two scientists were present at the press conference given by the authorities for the media and the general public. And the authorities (to be more specific: De Bernardinis) made statements of reassurance and low seismic risk during this press conference. I take the scientists’ “silence, their failure to voice a correction” as a passive involvement in the situation “which also contributed to the conversion of non-denial into acceptance” (10). However, I have a hard time seeing how this can be characterised as “nothing else, [… than] certainly missed opportunities to engage as fellow citizens, as Pietrucci and Ceccarelli (2019) have argued” (10).
Given just the facts about the setup, it seems clear to me that the role of the scientists was that of scientists, and hence that there was a scientific duty to correct scientifically unsupported claims. That this role can be ascribed to the scientists only in hindsight, and that the duty is undermined by the fact that the situation was not transparent to the scientists during the press conference, brings in scientist citizenship as an alternative instrument for avoiding such cases; very much agreed. Still, it was a case of scientific duty (to the best of our knowledge and given all the evidence, although only in hindsight), which makes it a case of faultlessly failing a general duty on the part of the scientists qua scientists.
To describe the difference in terms of the communication situations discussed in the paper, one might say that I consider the press conference to be of the communication type authority-science-public (DPC-CGR-PUBLIC), whereas DeVasto (2020) as well as Pietrucci and Ceccarelli (2019) seem to reconstruct it as one of the type authority-public (DPC-PUBLIC, with the involved members of the CGR participating in their role as fellow citizens, i.e. as part of the PUBLIC).
I do not want to split hairs about this (it seems to me to be basically a question of which perspective counts in describing a duty as scientific or civic), particularly because in the end we fully agree that the scientists are blameless—we do so, however, on different grounds. What is more important is that in the case of the meeting of the authorities with the scientists that took place before the press conference (communication type: DPC-CGR), the scientists’ involvement in turning a non-denial into acceptance was more active. Although in the appeal trial the scientists’ statements during this meeting were assessed as neutral, their communicative behaviour was less than optimal for expressing such neutrality. One might phrase the situation with the help of the following scheme:
DPC (repeatedly) asks CGR or broaches the issue: A?
CGR replies: B, we can only show B
It is clear that if A (the energy-discharge hypothesis) is not related to, or no consequence of, B (which is about seismic hazard assessment in general)—as was the case in the meeting—then the answer “We can only show B.” pragmatically implies “We cannot show A.” (and also: “We cannot show non-A.”). However, why not simply state it this way? As Cartlidge (2014) reports, this was also the question stressed by the prosecutor in the appeal trial, Romolo Como, who
pointed out that during the meeting Barberi had asked the other experts what they thought of the energy-discharge idea, but that none of them replied. […] “Why on 31 March did no one dissent, no one jump up out of their seat, no one explain to the other people present of the scientific consensus that that was nonsense and not a positive signal?”
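To make the pragmatic point a bit more explicit, here is a rough formalisation; it is merely an illustrative sketch of my own, not part of the original reconstruction, and “Show” is simply shorthand for “can be scientifically established by the CGR”:

```latex
% Reading of the reply "B, we can only show B":
% Show(B) holds, and for any X that is not a consequence of B, Show(X) fails.
\[
  \mathrm{Show}(B) \;\wedge\; \forall X\,\bigl(B \nvDash X \;\rightarrow\; \neg\mathrm{Show}(X)\bigr)
\]
% Since neither A (the energy-discharge hypothesis) nor its negation is a
% consequence of B (general seismic hazard assessment), the reply pragmatically conveys:
\[
  \neg\mathrm{Show}(A) \;\wedge\; \neg\mathrm{Show}(\neg A)
\]
```

On this reading, the exchange leaves the energy-discharge hypothesis explicitly undecided, which is why a plain statement to this effect would have removed the ambiguity.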
Clearly, scientists interacting with the public and with policy makers know about these kinds of problems, and in general they are careful. It also seems that they typically act according to a maxim of being careful. There is, however, a general lesson one can draw from this case. Roughly put, the maxim needs to be strengthened: when interacting with policy makers and the public, be even more careful!
Overflow and the Value-Neutrality or Value-Ladenness of Science
I think that the most important point in DeVasto’s (2020) comment concerns our different understanding of the role of overflow—in the L’Aquila case but also more generally. In what follows, I provide more details about this difference and put it into the bigger context of the debate about the value-neutrality and value-ladenness of science.
DeVasto et al. (2016) convincingly argued that the case of the L’Aquila earthquake can be very well described as one of so-called “overflow”. In her comment, DeVasto characterises “overflow” as “moments of increasing regularity in which unanticipated events cause issues that had been circumscribed as technical concerns to escape those boundaries and extend into the public sphere” (DeVasto 2020, 10). The source of the notion is Callon et al. (2001), who introduce it as follows:
GMOs [i.e. genetically modified organisms], BSE, nuclear waste, mobile phones, the treatment of household waste, asbestos, tobacco, gene therapy, genetic diagnosis – each day the list grows longer. It is no good treating each issue separately, as if it is always a case of exceptional events. The opposite is true. These debates are becoming the rule. Everywhere science and technology overflow the bounds of existing frameworks. The wave breaks. Unforeseen effects multiply. They cannot be prevented by markets, any more than by the scientific and political institutions (9).
More generally, an overflow can also be characterised as a situation in which a matter of fact is transformed into a matter of concern (cf. DeVasto et al. 2016, 140). Traditionally, these two matters were kept separate—in the quote above this is marked by the reference to scientific and political institutions. As Callon et al. (2001) stress, our tradition of “delegative democracy” delegates matters of fact or knowledge production to science, and matters of concern or values or preferences to society, politics, and policy making (cf. 134f). If, however, due to overflow these domains can no longer be separated, then our traditional ways of approaching them will also fail. As Callon et al. (2001) claim:
science and technology cannot be managed by the political institutions currently available to us. [Rather,] they must be enriched, expanded, extended, and improved so as to bring about what some call technical democracy, or more precisely in order to make our democracies more able to absorb the debates and controversies aroused by science and technology (9).
So, instead of a delegative and separated treatment of knowledge production on the one hand and of taking concerns seriously into account, e.g. via different forms of preference aggregation, on the other, they suggest a more holistic approach that interrelates both matters more closely with each other. The aim is to achieve a trans-institutional democratic setup of broadly informed, but also actively participating, scientists, policy makers, and the public in general. The idea is to replace the “delegative democratic” arrangement by one of “technical democracy”, also called a “dialogic democracy” (cf. 134), or a broad and open “hybrid forum” (cf. DeVasto et al. 2016, 135). In such a forum, scientists, policy makers, and the public contribute to all areas, those of matters of fact as well as those of matters of concern.
The distinction between matters of fact and matters of concern, and the question of whether we should treat them separately or rather let the borders between them be transgressed, is reminiscent of the traditional discussion of value-neutrality or value-ladenness in science. This debate was triggered by Max Weber in the early 20th century with his so-called “value-neutrality postulate” for science, which states that a scientist qua scientist is not legitimated to make (non-epistemic categorical) value judgements; if she does so nevertheless, she needs to make explicit that this transgresses her area of expertise (which is not an exception to the postulate but simply means that she does not make such a statement in her capacity as a scientist).
Weber’s take on values in the social sciences quickly became the predominant position for science in general and remained dominant for about half a century, until Richard Rudner’s (1953) and Carl Gustav Hempel’s (1965) arguments from inductive risk posed a serious challenge to the separating model of delegating matters of knowledge to science and matters of concern to the public and policy making.
Roughly put, the idea is that given a decision problem with a goal G and different means M1,…,Mn that might achieve this goal, science provides expertise (oftentimes probabilistic information) about which of the M1,…,Mn increases the chances of bringing about G, whereas the public (or politics in the interest of the public) puts forward G, and policy makers make the decision to choose a particular Mi in light of the goal G.
The problem, however, is that even such a highly simplified decision situation as the one described here, and its solution, is underdetermined in the following sense: G alone does (most of the time) not single out a particular class of means M1,…,Mn, so already in formulating the set of alternatives, decisions take place which can hardly be described as purely scientific. Furthermore, also in evaluating which means suits the goal best, scientists need to make decisions that are not completely value-neutral (e.g., the decision of which outcomes of an experiment count as significant and which do not might have a relevant impact on the final decision)—this is basically Rudner’s argument from inductive risk. Also, given the (probabilistic) information about the Mi and their relevance for G, it remains scientifically undetermined which particular decision procedure one can and should use—this is particularly stressed in Hempel’s argument from inductive risk. Rudner and Hempel mark an important cornerstone in the debate. The partly heated debate about the value-neutrality of science in the so-called “positivism dispute” of the 1960s marks another.
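To make the structure of the argument a bit more tangible, here is a schematic sketch; it is my own illustration rather than Rudner’s or Hempel’s original formulation, and the cost terms are assumed purely for the sake of the example:

```latex
% Delegative scheme: society/politics puts forward the goal G, science estimates
% the probabilities, and policy making selects a means, e.g. the one that
% maximises the chance of bringing about G:
\[
  M^{*} \;=\; \operatorname*{arg\,max}_{i \in \{1,\dots,n\}} \Pr(G \mid M_i)
\]
% Inductive risk: accepting a hypothesis H (e.g. "M_i raises the chance of G")
% on evidence E is itself a decision under uncertainty. With (non-epistemic)
% costs c_FP for wrongly accepting H and c_FN for wrongly rejecting it, the
% acceptance threshold depends on these costs:
\[
  \text{accept } H \quad\text{iff}\quad \Pr(H \mid E) \;\geq\; \frac{c_{FP}}{c_{FP} + c_{FN}}
\]
```

On this reading, fixing the acceptance threshold (or, equivalently, the significance level) already requires weighing the costs of the two kinds of error, which is the sense in which the scientist qua scientist makes value judgements.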
The positivism dispute was between critical rationalists like Karl Popper and adherents of the Frankfurt School like Theodor Adorno and Jürgen Habermas. Whereas Popper argued in favour of value-neutrality, Habermas in particular argued for the value-ladenness of science, due to the different interests that enter the scientific enterprise already at the fundamental stage of formulating an empirical basis. Finally, the developments of the 1980s and 1990s, which gave the debate a new spin from a feminist angle, mark another very important cornerstone. The feminist account, influentially represented by Helen Longino, stresses the problem of biases in science due to a lack of diversity. It suggests, e.g., institutionally entrenching a multitude of non-epistemic values in science in order to overcome problems of bias (cf. Longino 1990).
It seems to me that the discussion of overflow can be placed in this wider tradition. It stresses that particularly new technological developments and their substantial societal impacts challenge the traditional model of delegative democracy, which separates the tasks of knowledge production, value aggregation, and decision making. Given this background, the question is how best to account for it.
How to Account for Overflow and the Role of Expertise
Different accounts within the discussion of the value-neutrality and value-ladenness of science will give different answers to the question of how to account for overflow, i.e. a case in which the boundary between matters of fact and matters of concern is transcended. Note that overflow is here considered to be descriptive, whereas the question of how to deal with it is a normative one. The more traditional account of value-neutrality will tend to counter this transcending of boundaries and will aim to make the different roles of the agents in the discussion explicit, whereas accounts of value-ladenness will typically have less of a problem with this, because they suggest transcending these boundaries anyhow (this need not be a strict correlation, however, because defenders of value-ladenness can still distinguish which kinds of values can and should enter the scientific enterprise, and which ones cannot and should not).
In the debate about overflow, most participants seem to take a value-ladenness stance. So, e.g., Callon et al. (2001) write:
It would be pointless to erect barriers to contain these overflows; they would quickly give way one after the other. First of all we should recognize that these overflows are destructive only if we stubbornly seek to prevent them (9).
and:
Once the overflows are brought out and made explicit, the question is no longer whether or not a solution is good; it is a question of how to integrate the different dimensions of the debate in order to arrive at a “robust” solution. The opposition between experts and laypersons, between science and politics, is replaced by socio-technical arguments, by scenarios that articulate different kinds of considerations (45).
Related to the case of the L’Aquila earthquake, DeVasto et al. (2016) take a very similar stance. They highlight that in uncertain situations, requests for information by the public and also by policy makers will almost always exceed what can be said on the basis of the available scientific evidence. The main problem is that:
In case of overflow, when dealing with matters of concern, “the scientists’ ‘objective’ statements cannot necessarily be divided from value, as much as they may try to do so. […] Even if scientists attempt to restrict their discourse to the objective stases, these statements of fact carry the power of implication that encourages listeners to ‘hear scientists making implicit value and policy claims’. For example, toward the end of the meeting, Barberi stated, ‘This sequence of seismic events doesn’t predict anything’ (Commission, 2009, p. 4). It does not take much effort to sense the value- and policy-level implications nested in such a statement (i.e., If the swarms do not announce anything, then they are not a bad sign; an earthquake is unlikely, therefore action is not necessary at this time)” (157).
Normatively speaking, they draw the following consequence:
L’Aquila highlights the sort of conflict that occurs when matters of concern are construed solely as matters of fact. By handling the situation as a matter of fact and not recognizing the overflow into matters of concern, the CGR failed to tell the Aquilani what they wanted to know or include them (158).
and:
Unfortunately, while what the situation in L’Aquila may have required was a hybrid forum, what the Aquilani got was the CGR, and a corrupted version of the CGR at that (140).
and:
Hybrid forums take up issues from different domains, overriding the traditional purification of fact and value, and reconfigure the division between technical and public spheres (158).
Now, I completely agree with the description of the case as one of overflow. However, regarding the normative assessment, I disagree, as DeVasto (2020, 10) correctly recognises: as I argue in the paper, overflow took place and caused the fatal misunderstanding, but I also consider the overflow to be an “illegitimate values/fact overflow” (cf. Feldbacher-Escamilla 2019, 508). This, of course, outs me as someone who stands more in the tradition of the value-neutrality than of the value-ladenness of science. I think that the project of “delegative democracy” is worth upholding, but I also agree, of course, that the problems put forward already in the early debate about the value-neutrality/ladenness of science, their later developments, as well as the problems discussed in relation to overflow call for a great deal of modification and institutional redesign.
I should, however, also mention that I consider these positions to be adequate only as a matter of degree: given the problems of value-neutrality, to the best of my knowledge nobody upholds the strong form of excluding all non-epistemic values from science. Also, regarding the other end of the spectrum, the value-ladenness of science, there are hardly any positions out there propagating the extreme view that science can be equipped with any kind of non-epistemic values whatsoever. Rather, value-neutrality and value-ladenness come in degrees, and whether one ends up closer to the one or the other end of the spectrum depends on whether one considers a more delegative or a more integrative model of science and public decision making as best serving the aims of science, policy making, and society.
In general, what makes me lean more towards a delegative than an integrative model is the role that expertise plays in decision making. If we want to make the best-informed decisions, we need specialists and experts for particular fields, and hence we need to delegate tasks to different subgroups. Note that this does not exclude that the subgroups themselves need to allow for sufficiently high diversity—on the contrary, I take it to be one of the most important lessons of social epistemological investigations of recent years that diversity is of high epistemic significance. However, I also think that reliance on experts considerably restricts the possibility of designing forums that are dialogical or hybrid (which does not mean that I think such forums are excluded; in recent work I have, e.g., explored the role of EU agencies in this respect, cf. Feldbacher-Escamilla 2020).
To briefly summarise: it speaks in favour of value-neutrality that if one is interested in an accurate estimation, one needs to reduce sources of noise (lack of expertise). However, I should also mention that it speaks in favour of value-ladenness that noise (lack of expertise) plays an important role in addressing biases—it is a truism in modelling that noise-production in the form of randomisation is the gold standard for resolving deadlocks (biased outcomes).
Now, particularly with respect to the L’Aquila earthquake case, what makes me think more in value-neutral than in value-laden terms is that the remedy, in my view, does not consist simply in allowing for the highest degree of hybridity. Rather, I think that highlighting the different roles (and thereby making explicit the way we delegate tasks in our society) is more important. I agree with DeVasto (2020, 10) that, as Pietrucci and Ceccarelli (2019) demonstrate, the authorities were “not equipped to detect Barberi’s subtle irony nor process the very technical assessment of the seismic swarm that followed Barberi’s question [as mentioned in the quote above]” (114). However, I also think that simply opening up the forum to non-experts does not solve this problem.
DeVasto et al. (2016) suggest, e.g., that an integrative procedure “might have also made it possible to include, or at least consider including, Giuliani and address the appropriateness of his predictions” (159). Discussing Giuliani might have put more emphasis on the topic of the energy-discharge hypothesis. However, it could also have just added more noise to the discussion, with an even greater potential for conflicts of understanding. Also, Giuliani’s actions prompted the authorities’ idea of starting the media operation in the first place (something he is clearly not to be blamed for; still, he was involved); and, furthermore, drawing more attention to Giuliani’s predictions regarding the neighbouring city of Sulmona might even have caused more harm, as briefly described in my investigation (cf. 507).
Overall, it seems to me that hybridity is not the right remedy for this and many other cases, particularly if it comes at the cost of expertise (again, I think one can uphold this claim and nonetheless value the epistemic significance of diversity). What seems to me more relevant is the role of science communication: more training in avoiding communicative and argumentative traps would have been more helpful. I think, however, that in one important respect the delegative and the integrative approach overlap, namely in asking for increased communicative competencies on all sides, i.e. of scientists, policy makers, and the public, in the sense that we should aim at making “our democracies more able to absorb the debates and controversies aroused by science and technology” (Callon et al. 2001, 9).
Contact details: Christian J. Feldbacher-Escamilla, Duesseldorf Center for Logic and Philosophy of Science, cj.feldbacher.escamilla@gmail.com
References
Callon, Michel, Pierre Lascoumes, and Yannick Barthe. 2001. Acting in an Uncertain World. An Essay on Technical Democracy. Cambridge, MA: The MIT Press.
Cartlidge, Edwin. 2014. “Appeals Court Overturns Manslaughter Convictions of Six Earthquake Scientists.” Science AAAS. <http://news.sciencemag.org/earth/2014/11/updated-appeals-court-overturns-manslaughter-convictions-six-earthquake-scientists>. Accessed: 2020-03-05.
DeVasto, Danielle. 2020. “Exigency and Overflow in the L’Aquila Case.” Social Epistemology Review and Reply Collective 9 (1): 8–11.
DeVasto, Danielle, S. Scott Graham, and Louise Zamparutti. 2016. “Stasis and Matters of Concern: The Conviction of the L’Aquila Seven.” Journal of Business and Technical Communication 30 (2): 131–164.
Feldbacher-Escamilla, Christian J. 2020. “Knowledge and Values: A Re-Entanglement in Epistemic Regimes.” Science and Public Policy 47 (1): 67–77.
Feldbacher-Escamilla, Christian J. 2019. “A Rational Reconstruction of the L’Aquila Case. How Non-Denial Turns into Acceptance.” Social Epistemology 33 (6): 503–513.
Hempel, Carl G. 1965. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: Free Press.
John, Stephen. 2015. “Inductive Risk and the Contexts of Communication.” Synthese 192 (1): 79–96.
Longino, Helen E. 1990. Science as Social Knowledge. Princeton: Princeton University Press.
Pietrucci, Pamela and Leah Ceccarelli. 2019. “Scientist Citizens: Rhetoric and Responsibility in L’Aquila.” Rhetoric & Public Affairs 22 (1): 95–128.
Rossi, Rodolfo, Valentina Socci, Eleonora Gregori, Dalila Talevi, Alberto Collazzoni, Francesca Pacitti, Paolo Stratta, Alessandro Rossi, and Giorgio Di Lorenzo. 2020. “ResilienCity: Resilience and Psychotic-Like Experiences 10 Years After L’Aquila Earthquake.” Frontiers in Psychiatry 11: 77. doi: 10.3389/fpsyt.2020.00077.
Rudner, Richard. 1953. “The Scientist Qua Scientist Makes Value Judgments.” Philosophy of Science 20 (1): 1–6.