Author Information: Nadja El Kassar, Swiss Federal Institute of Technology, nadja.elkassar@gess.ethz.ch

El Kassar, Nadja. “The Irreducibility of Ignorance: A Reply to Peels.” Social Epistemology Review and Reply Collective 8, no. 2 (2019): 31-38.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-46K


This article responds to critiques of El Kassar, Nadja (2018). “What Ignorance Really Is: Examining the Foundations of Epistemology of Ignorance.” Social Epistemology. DOI: 10.1080/02691728.2018.1518498.

Including Peels, Rik. “Exploring the Boundaries of Ignorance: Its Nature and Accidental Features.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 10-18.

Thanks to Rik Peels for his thought-provoking comments, which give me the opportunity to say more about the arguments and rationale of my article and the integrated conception.

General Remarks About Two Approaches to Ignorance

Rik Peels’ and Patrick Bondy’s replies allow me to highlight and distinguish two approaches to ignorance: one that focuses on ignorance as a simple doxastic (propositional) phenomenon, and another that regards ignorance as a complex epistemological phenomenon constituted by a doxastic component and by other epistemic components. The distinction can be illustrated by Peels’ conception and by the integrated conception of ignorance proposed in my article: Peels’ conception belongs with the first approach; mine belongs with the second.

As Peels’ reply also evinces, the two approaches come with different assumptions and consequences. For example, the first approach presupposes that accounts of knowledge and ignorance are symmetrical and/or mirror each other and, consequently, it will expect that an account of ignorance has the same features as an account of knowledge.

In contrast, the second approach takes ignorance to be a topic in its own right and is therefore untroubled by criticism pointing out that its account of ignorance makes claims that an account of knowledge does not make, or that it does not fit with an account of knowledge in other ways. I will return to this distinction later. A major concern of the second approach (and my conception shares this concern) is to develop an account of ignorance sui generis, not an account of ignorance in the light of knowledge (or accounts of knowledge).

Why Peels’ Attempt at Reducing the Integrated Conception to His View Fails

Peels argues that the integrated conception of ignorance boils down to the conception of ignorance that he endorses. However, if I understand his considerations and arguments correctly, the observations can either be accommodated by my conception or my conception can give reasons for rejecting Peels’ assumptions. Let me discuss the three central steps in turn.

Doxastic Attitudes in the Second Conjunct

As a first step Peels notes that “the reference to doxastic attitudes in the second conjunct is … redundant” (Peels 2019, 11) since holding a false belief or holding no belief, the manifestations of ignorance, just are doxastic attitudes. But the doxastic attitudes in the second conjunct are not redundant because they capture second-order (and in general higher-order) attitudes towards ignorance, e.g. I am ignorant of the rules of Japanese grammar and I (truly) believe that I do not know these rules.

Socratic ignorance also includes more doxastic attitudes than those at the first level of ignorance. Those doxastic attitudes can also constitute ignorance. Peels’ observation indicates that it might be advisable for me to talk of meta-attitudes rather than doxastic attitudes to avoid confusion about the double appearance of doxastic attitudes.

Epistemic Virtues and Vices and the Nature of Ignorance

Peels’ second step concerns the other two components of the second conjunct. He claims that epistemic virtues and vices “[do] not belong to the essence of being ignorant” (Peels 2019, 11). But I do not see what reasons Peels has for this claim. My arguments from the original article for saying that they do belong to the nature of being ignorant still stand. One does not capture ignorance by focusing only on the doxastic component.

This is what my example of Kate and Hannah is meant to show: both are ignorant of the fact that cruise ships produce high emissions of carbon and sulfur dioxides, but they have different epistemic attitudes towards not knowing this fact and are thus ignorant in different ways. Their being ignorant is not just determined by the doxastic component but also by their attitudes.

This does not mean that all ignorance comes with closed-mindedness or open-mindedness; it just means that all states of ignorance are constituted by a doxastic component and an attitudinal component (whichever attitude fills that spot, and whether it is implicit or explicit, is an open question and depends on the relevant instance of ignorance). I am interested to hear which additional reasons Peels has for cutting epistemic virtues and vices from the second conjunct and delineating the nature of ignorance in the way that he does.

Ignorance as a Disposition?

Peels’ third step consists in a number of questions about ignorance as a disposition. He writes:

“[O]n the El Kassar synthesis, ignorance is a disposition that manifests itself in a number of dispositions (beliefs, lack of belief, virtues, vices). What sort of thing is ignorance if it is a disposition to manifest certain dispositions? It seems if one is disposed to manifest certain dispositions, one simply has those dispositions and will, therefore, manifest them in the relevant circumstances.” (Peels 2019, 12, emphasis in original).

These questions seem to indicate to Peels that the dispositional character of ignorance on the integrated conception is unclear and that the dispositional element may therefore be removed from the integrated conception: for Peels, it does not make sense to say that ignorance is a disposition.

But Peels’ questions and conclusion themselves invite a number of questions and, therefore, I do not see how anything problematic follows for my conception. It is not clear to me whether Peels is worried because my conception implies that a disposition is manifested in another disposition that may be manifested or not, or whether he is concerned because my conception implies that one disposition (in the present context: ignorance) may have different stimulus conditions and different manifestations.

In reply to the first worry I can confirm that I think it is possible for a disposition to be manifested in other dispositions. But I do not see why this is a problem. An example may help undergird my claim. Think, e.g., of the disposition to act courageously: it is constituted, at minimum, by the disposition to take action when necessary and the disposition to feel as is appropriate. Aristotle’s description of the courageous person reveals how complicated the virtue is and that it consists in a number of dispositions:

Now the brave man is as dauntless as man may be. Therefore, while he will fear even the things that are not beyond human strength, he will fear them as he ought and as reason directs, and he will face them for the sake of what is noble; for this is the end of excellence. But it is possible to fear these more, or less, and again to fear things that are not terrible as if they were.

Of the faults that are committed one consists in fearing what one should not, another in fearing as we should not, another in fearing when we should not, and so on; and so too with respect to the things that inspire confidence. The man, then, who faces and who fears the right things and with the right aim, in the right way and at the right time, and who feels confidence under the corresponding conditions, is brave; for the brave man feels and acts according to the merits of the case and in whatever way reason directs. (Nicomachean Ethics, 1115b 17-22)

The fact that courage consists in other dispositions also explains why there are many ways to not be virtuously courageous. For the present context all that matters is that a disposition can consist in other dispositions that can be manifested or not.

The second worry might be alleviated by introducing the notion of multi-track dispositions into my argument. A multi-track disposition, a notion widely acknowledged in philosophical work on dispositions, is individuated by several pairs of stimulus conditions and manifestations (Vetter 2015, 34). Thus, ignorance as a disposition may be spelled out as a multi-track disposition that has different stimulus conditions and different manifestations.

Peels also argues against the view that epistemic virtues themselves are manifestations of ignorance. But I do not hold that epistemic virtues simpliciter are manifestations of ignorance; rather, I submit that epistemic virtues (or vices) necessarily appear in manifestations of ignorance: they co-constitute ignorance.

Embedded in Peels’ argument is another objection, namely, that epistemic virtues cannot appear in manifestations of ignorance: only epistemic vices can be manifestations of ignorance – or, as I would say, can appear in manifestations of ignorance. Peels claims that “open-mindedness, thoroughness, and intellectual perseverance are clearly not manifestations of ignorance. If anything, they are the opposite: manifestations of knowledge, insight and understanding.” (Peels 2019, 12, emphasis in original)

Let me address this concern by explaining how ignorance can be related to epistemic virtues. Being open-mindedly ignorant, and being ignorant in an intellectually persevering way, become more plausible forms and instantiations of ignorance once one recognizes the significance of ignorance in scientific research. Think, e.g., of a scientist who wants to find out how Earth was formed: she does not know how Earth was formed, she may dedicate her whole life to answering that question, and she will persist in the face of challenges and setbacks.

Similarly for a scientist who wants to improve existing therapies for cancer and sets out to develop nanotechnological devices to support clinicians: she can be open-mindedly ignorant about the details of the new device. In fact, most scientists are probably open-mindedly ignorant; they want to know more about what it is they do not know in their field and are after more evidence and insights. That is also one reason for conducting experiments. Firestein (2012) and several contributions in Gross and McGoey’s Routledge International Handbook of Ignorance Studies (2015) discuss this connection in more detail.

Thus, Peels’ third step also does not succeed, and as it stands the integrated conception does not reduce to Peels’ view. But I’d be interested to hear more about why such a revision of the integrated conception suggests itself.

Correction Concerning “How One Is Ignorant”

Let me address a cause of confusion about the integrated conception. When I call for an account of ignorance to explain “how one is ignorant”, I do not want the account to explain how one has become ignorant, i.e. to provide a genetic or causal story of a particular state of ignorance. This mistaken assumption leads Peels and Bondy to their objections concerning causal components in my conception of the nature of ignorance.

Instead, what I require is for an account of ignorance to capture what one’s ignorance is like: what epistemic attitudes the subject has towards the doxastic component of her ignorance. The confusion and the fundamental objections to the integrated conception may be explained by the different approaches to ignorance that I mentioned at the start of my reply.

No Mirroring Nor Symmetry Required

Peels notes that theories of knowledge include neither a causal story of how the subject became knowledgeable nor an account of the quality of the subject’s knowledge, and from this he concludes that the integrated conception of ignorance, which he takes to provide such a causal story, must be rejected. However, as it stands, his argument is not conclusive.

First, it builds on confusion about the claims of the integrated conception that I have addressed in the previous section (3): the integrated conception does not provide a causal story for how the subject became ignorant, nor does it claim that such a causal story should be part of an account of the nature of ignorance. Rather, it spells out which additional features of ignorance are also constitutive – namely, an epistemic attitude – in addition to the doxastic component accepted by everyone.

Second, it is unclear why theories of knowledge and theories of ignorance have to presuppose a current-time-slice approach, as Peels effectively does. Some theories of knowledge want to distinguish lucky true belief from knowledge; they therefore look at the causal history of the subject’s coming to her true belief and thus reject current-time-slice approaches (e.g. Goldman 2012).

Third, Peels’ objection presupposes that theories of knowledge and theories of ignorance have to contain the same constituents and features or have to be symmetrical or have to mirror each other in some way, but I do not see why these presuppositions hold. Knowledge and ignorance are obviously intimately connected but I am curious to hear further arguments for why their accounts have to be unified or symmetrical or mirrored.

The Distinction Between Necessary and Contingent or Accidental Features of Ignorance

Peels argues that my conception confuses necessary and contingent or accidental features of ignorance, but it is not clear what reasons Peels can give to support his diagnosis. My conception specifically distinguishes necessary components of ignorance from contingent/accidental instantiations of a necessary component of ignorance.

Peels’ discussion of my example of Kate and Hannah, who both do not know that cruise ships have bad effects on the environment, seems to jumble necessary features of ignorance whose instantiation is contingent (e.g. open-mindedness instantiates the epistemic attitude-component in open-minded ignorance) with contingent features of ignorance that trace the causal history of an instance of ignorance. Peels writes:

Hannah is deeply and willingly ignorant about the high emissions of both carbon and sulfur dioxides of cruise ships (I recently found out that a single cruise trip has roughly the same amount of emission as seven million cars in an average year combined). Kate is much more open-minded, but has simply never considered the issue in any detail. She is in a state of suspending ignorance regarding the emission of cruise ships.

I reply that they are both ignorant, at least propositionally ignorant, but that their ignorance has different, contingent features: Hannah’s ignorance is deep ignorance, Kate’s ignorance is suspending ignorance, Hannah’s ignorance is willing or intentional, Kate’s ignorance is not. These are among the contingent features of ignorance; both are ignorant and, therefore, meet the criteria that I laid out for the nature of ignorance. (Peels 2019, 16-17)

Hannah’s and Kate’s particular epistemic attitudes are (to some extent) contingent but the fact that ignorance consists in a doxastic component and an attitudinal component is not contingent but necessary. In other words: which epistemic attitude is instantiated is accidental, but that there is an epistemic attitude present is not accidental but necessary. That is what the integrated conception holds. I’m interested to hear more about Peels’ argument for the opposing claim in the light of these clarifications.

Being Constitutive and Being Causal

Peels’ argumentation seems to presuppose that something that is constitutive of a state or disposition cannot also be causal, but it is not clear why that should be the case. E.g. Elzinga (2019) argues that epistemic self-confidence is constitutive of intellectual autonomy and at the same time may causally contribute to intellectual autonomy.

And note also that a constitutive relation between dispositions does not have to entail a causal relation in the sense of an efficient cause. Some authors in action theory argue that a disposition is not the cause of an action; rather, a decision, motivation, desire (etc.) is the cause of the action (cf. Löwenstein 2017, 85-86). I do not want to take sides on this issue; this is just to point out that Peels’ approach to something being constitutive and being a cause is not straightforward. (See also Section 4 in my upcoming reply to Patrick Bondy.)

Other Forms of Ignorance

Peels notes that my approach does not capture objectual and procedural ignorance as spelled out by Nottelmann (e.g. Nottelmann 2016). He tries to show that the integrated conception works neither for lack of know-how – “not knowing how to ride a bike does not seem to come with certain intellectual virtues or vices” (Peels 2019, 13) – nor for objectual ignorance: “if I am not familiar with the smell of fresh raspberries, that does not imply any false beliefs or absence of beliefs, nor does it come with intellectual virtues or vices” (Peels 2019, 13).

I am glad that Peels picks out this gap in the article, as does Bondy. It is an important and stimulating open question how the integrated conception fits with such other forms of ignorance – I am open-mindedly ignorant with respect to its answers. But the article did not set out to give an all-encompassing account of ignorance. Nor is it clear whether one account will work for all forms of ignorance (viz. propositional ignorance, objectual ignorance, technical/procedural ignorance). Peels’ observation thus highlights an important open question for all theories of ignorance, not a particular objection against my integrated conception.

At the same time, I am skeptical whether Peels’ proposed account, the Threefold Synthesis, succeeds at capturing objectual and procedural ignorance. I do not see how the Threefold Synthesis is informative regarding objectual and procedural ignorance, since it just states that objectual ignorance is “lack of objectual knowledge” and procedural ignorance is “lack of procedural knowledge”. Peels formulates the Threefold Synthesis as follows, with an additional footnote:

Threefold Synthesis: Ignorance is an epistemic agent’s lack of propositional knowledge or lack of true belief, lack of objectual knowledge, or lack of procedural knowledge.[9]

[9] If the Standard View on Ignorance is correct, then one could simply replace this with: Ignorance is a disposition of an epistemic agent that manifests itself in lack of (propositional, objectual, or procedural) knowledge. (Peels 2019, 13)

I do not see how these statements go toward capturing lack of competence, e.g. not possessing the competence to ski, or lack of objectual knowledge, e.g. not knowing Paris. I guess that philosophers interested in ignorance and in this issue will have to carefully study the phenomena that they want to capture and their interrelations – as Bondy starts to do in his reply (Bondy 2018, 12-14) – in order to adequately capture what Peels calls lack of objectual knowledge or lack of procedural knowledge.

What Does One Want From an Account of Ignorance?

Peels’ reply evinces that anyone who wants to develop an account of ignorance needs to answer a number of fundamental questions, including: What is it that we want from an account of ignorance? Do we want a unified account of knowledge and ignorance? Do we want a simple account? Or do we want to adequately capture the phenomenon and be able to explain its significance in the epistemic practices of epistemic agents? I want the account to be able to do the latter and have therefore put forward the integrated conception.

Two Clarificatory Remarks

In closing, I would like to add two clarificatory remarks. Peels suggests that the structural conception and agnotology are identical conceptions or approaches (Peels 2019, 15-16). But even though there are significant connections between the structural conception and agnotology, they are distinct.

The examples for the structural conception in my article are from feminist epistemology of ignorance, not from agnotology. I do not want to engage in labelling and including or excluding authors and their works from fields and disciplines, but there are differences between works in epistemology of ignorance and agnotology since agnotology is often taken to belong with history of science. I would not want to simply identify them.

I do not see how Peels’ observations that the examples for agential conceptions of ignorance include causal language and that the conception of ignorance that he finds in critical race theory does not fit with someone being ignorant “of the fact that Antarctica is the largest desert on earth” (Peels 2019, 14) present objections to the integrated conception.

If there are claims about the causes of ignorance in these theories, that does not mean that my conception, which is distinct from these conceptions, makes the same claims. I specifically develop a new conception because of the advantages and disadvantages of the different conceptions that I discuss in the article.[1]

Contact details: nadja.elkassar@gess.ethz.ch

References

Aristotle. 1995. The Complete Works of Aristotle: The Revised Oxford Translation. Volume Two. Edited by Jonathan Barnes. Princeton, NJ: Princeton University Press.

Bondy, Patrick. 2018. “Knowledge and Ignorance, Theoretical and Practical.” Social Epistemology Review and Reply Collective 7 (12): 9–14.

Elzinga, Benjamin. 2019. “A Relational Account of Intellectual Autonomy.” Canadian Journal of Philosophy 49 (1): 22–47. https://doi.org/10.1080/00455091.2018.1533369.

Firestein, Stuart. 2012. Ignorance: How It Drives Science. Oxford, New York: Oxford University Press.

Goldman, Alvin I. 2012. Reliabilism and Contemporary Epistemology: Essays. New York: Oxford University Press.

Gross, Matthias, and Linsey McGoey, eds. 2015. Routledge International Handbook of Ignorance Studies. Routledge International Handbooks. London; New York: Routledge.

Löwenstein, David. 2017. Know-How as Competence: A Rylean Responsibilist Account. Studies in Theoretical Philosophy, vol. 4. Frankfurt am Main: Vittorio Klostermann.

Nottelmann, Nikolaj. 2016. “The Varieties of Ignorance.” In The Epistemic Dimensions of Ignorance, edited by Rik Peels and Martijn Blaauw, 33–56. Cambridge: Cambridge University Press. https://doi.org/10.1017/9780511820076.003.

Peels, Rik. 2019. “Exploring the Boundaries of Ignorance: Its Nature and Accidental Features.” Social Epistemology Review and Reply Collective 8 (1): 10–18.

Vetter, Barbara. 2015. Potentiality: From Dispositions to Modality. Oxford: Oxford University Press.

[1] Thanks to David Löwenstein and Lutz Wingert for helpful discussions.

Author Information: Eric Kerr, National University of Singapore, eric.kerr@nus.edu.sg.

Kerr, Eric. “On Thinking With Catastrophic Times.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 46-49.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-45Q


Reprinted with permission from the Singapore Review of Books, where the original review first appeared.

• • • •

On Thinking With – Scientists, Sciences, and Isabelle Stengers is the transcription of a talk read by Jeremy Fernando at the Centre for Science & Innovation Studies at UC Davis in 2015. The text certainly has the character of a reading: through closely attending to Stengers’ similarly transcribed talk (2012), Fernando traverses far-reaching themes – testimony, the gift, naming, listening – drawing them into a world made strange again through Stengers’ idea of “thinking with” – as opposed to analyzing or evaluating – notions of scientific progress, justice, and responsibility.

All this will make this review rather different from convention. I’ll attempt a response, using the text as an opportunity to pause, regroup, and divert, which, I hope, will allow us to see some of the connections between the two scholars and the value of this book. I read this text as a philosopher within Science and Technology Studies (STS) and through these lenses I’ll aim to draw out some of the ideas elaborated in Fernando’s essay and in Stengers’ In Catastrophic Times.

Elusive Knowledge

Towards the end of the essay, Fernando muses on the elusive nature of knowledge: “[T]he moment the community of scientists knows – or thinks it knows – what Science is, the community itself dissolves” (p.35). He consequently ties epistemological certainty to the stagnation, or even the collapse, of a scientific community.

In this sense, Fernando suggests that the scientific community should be thought of as a myth, but a necessary one. He implies that any scientific community is a “dream community… a dream in the sense of something unknown, something slightly beyond the boundaries, binds, of what is known.” (pp. 35-36) Further, he agrees with Stengers: “I vitally need such a dream, such a story which never happened.” So why? What is this dream that is needed?

Stengers suggests that we are now in a situation where there are “many manners of dying” (2015, p. 9). Any attempt on “our” part to resolve the growing crisis seems merely to entrench and legislate the same processes that produced the very problems we were trying to overcome. International agreements are framed within the problematic capitalocene rather than challenging it. Problems arrive with the overwhelming sense that our current situation is permanent, that political change is inertial or even immovable, and that the only available remedy is more of the poison. Crucially, for Stengers, this sense is deliberately manufactured – an induced ignorance (ibid. p. 118).

Stengers’ concern, which Fernando endorses, is to reframe the manner in which problems are presented: to remove us from the false binary choices presented to us, as precaution or pro-action, as self-denial of consumer products or geoengineering, as deforestation for profit or financialization of forests. For his part, Fernando does not offer more solutions. Instead, he encourages us to sit in the mire of the problem, to revisit it, to rethink it, to re-view it – not as an act of idle pontification but as what Stengers calls “paying attention” (ibid. p. 100).

Paying Attention to Catastrophic Times

In order to pay attention, Fernando begins with a parental metaphor: Gaia as mother, scientific authority as father. For him, there is an important distinction between power and authority. Whereas power can be found in all relations, authority “is mystical, divine, outside the realm of human consciousness – it is the order of the sovereign. One either has authority or one doesn’t” (p.21).

Consequently, there is something unattainable about any claim to scientific expertise. The idea that authority depends on a mystical or theological grounding chimes with core epistemological commitments in STS, most forcefully advocated by David Bloor (2007), who argued that the absolutist about knowledge would require “epistemic grace”.

Alongside Fernando’s words, Burdock’s illustrations detail gooey, veiny appendages emerging from pipes and valves, tumours and pustules evoking the diseased body. Science and engineering are productive of vulnerable bodies. Here we might want to return to Stengers’ treatment of the pharmakon, the remedy/poison duality.

For Stengers, following Nietzsche’s gay scientist (whom Fernando also evokes), skepticism and doubt are pharmakon (Nietzsche 1924, p. 159). She details how warnings as to the dangers of potential responses are presented as objections. STS scholars will note that this uncertainty can be activated by both your enemies and your friends, not least when it comes to the challenges of climate change. This is the realization that prompted Bruno Latour to issue what Steve Fuller has called a “mea culpa on behalf of STS” for embracing too much uncertainty (Latour 2004; Fuller 2018, p. 44).

Data and Gaia

Although there is little mention of any specific sciences, scientific instruments, theories or texts, Fernando instead focuses on what is perhaps the primary object of contemporary science – data – especially its relation to memory. It is perhaps not a coincidence that he repeatedly asks us to remember not to forget: e.g. “we should try not to forget that…” (p. 11, and similar on pp. 17, 21, 22, and 37). He notes that testimony occurs through memory but that memory is, generally speaking, unreliable and incomplete. His conclusion is Cartesian: perhaps the only thing we can know for sure is that we are testifying (p. 16).

Stengers picks up the question of memory in her dismissal of an interventionist Gaia (to paraphrase Nick Cave), denying that Gaia could remember, could be offended, or could care who is responsible (2015, p. 46 and fn. 2). She criticizes James Lovelock, the author of the Gaia hypothesis, for speaking of Gaia’s “revenge”. While Fernando begins his text with Stengers’ controversial allusion to Gaia, his discussion of data also has a curious connection to a living, self-regulating (and consequently also possibly vulnerable) globe.

Riffing on Stewart Brand’s infamous phrase, “information wants to be free,” Fernando writes, “[D]ata and sharing have always been in relation with each other, data has always already been open source. Which also means that data – sharing, transference – always entail an openness to the possibility of another; along with the potentiality for disruption, infection, viruses, distortion” (p.22). Coincidentally, Brand, an internet pioneer who founded one of the oldest virtual (and certainly mythological) communities, is an old friend of Lovelock.

Considering these words in relation to impending ecological disaster, I’m inclined to think that perhaps the central myth that we should try to escape is that we don’t easily forget. Bernard Stiegler has suggested that we are in a period of realignment in our relationship to memory in which external memory supports are the primary means by which we understand our temporality (2011, 2013).

Similarly, we might think that it is no coincidence that when Andy Clark and David Chalmers proposed their hypothesis of extended cognition, the idea that our cognitive and memorial processes extend into artefacts, they reached for the Alzheimer’s sufferer as “Patient Zero” (1998). In truth, we do forget, often. And this is despite, and sometimes even because of, our best efforts to record and archive and remember.

Fernando’s writing is, at root, a call to re-call. It regenerates other texts and seems to live with them such that they both thrive. The “tales” he calls for spiral out into new mutations like Burdock’s tentacular images. But to reduce Fernando’s scope to simply a call for other perspectives would be to sell it short. Read alongside In Catastrophic Times, the call to embrace uncertainty and to reckon with it becomes more urgent.

Fernando reminds us of our own forgetfulness and the unreliability of our testimony about ourselves and our communities. For those of us wrestling with the post-truth world, Fernando’s essay is a palliative and, potentially, charts a way out of no-alternative thinking.

Contact details: eric.kerr@nus.edu.sg

References

Bloor, D. 2007. Epistemic grace: Antirelativism as theology in disguise. Common Knowledge 13: 250-280.

Clark, A. and D. Chalmers. 1998. The extended mind. Analysis 58: 7–19.

Fuller, S. 2018. Post-Truth: Knowledge as a Power Game. Anthem Press.

Latour, B. 2004. Why has critique run out of steam? From matters of fact to matters of concern. Critical Inquiry 30 (2).

Nietzsche, F. 1924. The Joyful Wisdom (trans. T. Common) New York: The MacMillan Company. Accessed 10 June 2018. https://ia600300.us.archive.org/9/items/completenietasch10nietuoft/completenietasch10nietuoft.pdf.

Stengers, I. 2012. “Cosmopolitics: Learning to Think with Sciences, Peoples and Natures.” Public lecture. Situating Science Knowledge Cluster. St. Marys, Halifax, Canada, 5 March 2012. Accessed 10 June 2018. http://www.youtube.com/watch?v=-ASGwo02rh8.

Stengers, I. 2015. In Catastrophic Times: Resisting the Coming Barbarism. Open Humanities Press/Meson Press.

Stiegler, B. 2011. Technics and Time, 3: Cinematic Time and the Question of Malaise (trans. R. Beardsworth and G. Collins). Stanford: Stanford University Press.

Stiegler, B. 2013. For a New Critique of Political Economy (trans. D. Ross). Cambridge: Polity.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “On Political Culpability: The Unconscious?” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 26-29.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-45p


In the post-truth age where Trump’s presidency looms large because of its irresponsible conduct, domestically and abroad, it’s refreshing to have another helping in the epistemic buffet of well-meaning philosophical texts. What can academics do? How can they help, if at all?

Anna Elisabetta Galeotti, in her Political Self-Deception (2018), is convinced that her (analytic) philosophical approach to political self-deception (SD) is crucial for three reasons: first, because of the importance of conceptual clarity about the topic; second, because of how one can attribute responsibility to those engaged in SD; and third, in order to identify circumstances that are conducive to SD. (6-7)

For her, “SD is the distortion of reality against the available evidence and according to one’s wishes.” (1) The distortion, according to Galeotti, is motivated by wishful thinking, the kind that licenses someone to ignore facts or distort them in a fashion suitable to one’s (political) needs and interests. The question of “one’s wishes,” whether they are conscious or not, remains open.

What Is Deception?

Galeotti surveys the different views of deception that “range from the realist position, holding that deception, secrecy, and manipulation are intrinsic to politics, to the ‘dirty hands’ position, justifying certain political lies under well-defined circumstances, to the deontological stance denouncing political deception as a serious pathology of democratic systems.” (2)

But she follows none of these views; instead, her contribution to the philosophical and psychological debates over deception, lies, self-deception, and mistakes is to argue that “political deception might partly be induced unintentionally by SD” and that it is also sometimes “the by-product of government officials’ (honest) mistakes.” (2) The consequences of SD, though, can be monumental, since “the deception of the public goes hand in hand with faulty decision,” (3) and faulty decisions eventually affect the country.

Her three examples are President Kennedy and Cuba (Ch. 4), President Johnson and Vietnam (Ch. 5), and President Bush and Iraq (Ch. 6). In all cases, the devastating consequences of “political deception” (and for Galeotti it is based on SD) were obviously due to “faulty” decision making processes. Why else would presidents end up in untenable political binds? Who would deliberately make mistakes whose political and human price is high?

Why Self-Deception?

So, why SD? What is it about self-deception, especially the unintended kind presented here, that differentiates it from garden-variety deceptions and mistakes? Galeotti’s preference for SD is explained in this way: SD “enables the analyst to account for (a) why the decision was bad, given that it was grounded on self-deceptive, hence false beliefs; (b) why the beliefs were not just false but self-serving, as in the result of the motivated processing of data; and (c) why the people were deceived, as the by-product of the leaders’ SD.” (4)

But how would one know that a “bad” decision is “grounded on self-decepti[on]” rather than on false information given by intelligence agents, for example, who were misled by local informants who in turn were misinformed by others, deliberately or innocently? With this question in mind, a “false belief” can be based on false information, false interpretation of true information, wishful thinking, an unconscious self-destructive streak, or SD.

In short, one’s SD can be either externally or internally induced, and in each case, there are multiple explanations that could be deployed. Why stick with SD? What is the attraction it holds for analytical purposes?

Different answers are given to these questions at different times. In one case, Galeotti suggests the following:

“Only self-deceptive beliefs are, however, false by definition, being counterevidential [sic], prompted by an emotional reaction to data that contradicts one’s desires. If this is the specific nature of SD . . . then self-deceptive beliefs are distinctly dangerous, for no false belief can ground a wise decision.” (5)

In this answer, Galeotti claims that an “emotional reaction” to “one’s desires” is what characterizes SD and makes it “dangerous.” It is unclear why this is more dangerous a ground for false beliefs than a deliberate deceptive scheme that is self-serving; likewise, how does one truly know one’s true desires? Perhaps the logician is at a loss to counter emotive reaction with cold deduction, or perhaps there is a presumption here that logical and empirical arguments are by definition open to critiques but emotions are immune to such strategies, and therefore analytic philosophy is superior to other methods of analysis.

Defending Your Own Beliefs

If the first argument for seeing SD as an emotional “reaction” that conflicts with “one’s desires” is a form of self-defense, the second argument is more focused on the threat of the evidence one wishes to ignore or subvert. In Galeotti’s words, SD is:

“the unintended outcome of intentional steps of the agent. . . according to my invisible hand model, SD is the emotionally loaded response of a subject confronting threatening evidence relative to some crucial wish that P. . . Unable to counteract the threat, the subject . . . become prey to cognitive biases. . . unintentionally com[ing] to believe that P which is false.” (79; 234ff)

To be clear, the “invisible hand” model invoked here is related to the infamous one associated with Adam Smith and his unregulated markets where order is maintained, fairness upheld, and freedom of choice guaranteed. Just like Smith, Galeotti appeals to individual agents, in her case the political leaders, as if SD happens to them, as if their conduct leads to an “unintended outcome.”

But the whole point of SD is to ward off the threat of unwelcomed evidence so that some intention is always afoot. Since agents undertake “intentional steps,” is it unreasonable for them to anticipate the consequences of their conduct? Are they still unconscious of their “cognitive biases” and their management of their reactions?

Galeotti confronts this question head on when she says: “This work is confined to analyzing the working of SD in crucial instances of governmental decision making and to drawing the normative implications related both to responsibility ascription and to devising prophylactic measures.” (14) So, the moral dimension, the question of responsibility, does come into play here, unlike the neoliberal argument that pretends to follow Smith’s model of the invisible hand but ends with no one being responsible for any exogenous liabilities to the environment, for example.

Moreover, Galeotti’s most intriguing claim is that her approach is intertwined with a strategic hope for “prophylactic measures” to ensure dangerous consequences are not repeated. She believes this could be achieved by paying close attention to “(a) the typical circumstances in which SD may take place; (b) the ability of external observers to identify other people’s SD, a strategy of precommitment [sic] can be devised. Precommitment is a precautionary strategy, aimed at creating constraints to prevent people from falling prey to SD.” (5)

But this strategy, as promising as it sounds, has a weakness: if people could be prevented from “falling prey to SD,” then SD is preventable, or at least it seems to be less of an emotional threat than earlier suggested. In other words, either humans cannot help themselves from falling prey to SD or they can; if they cannot, then highlighting SD’s danger is important; if they can, then the ubiquity of SD is no threat at all, as simply pointing out their SD would make them realize how to overcome it.

A Limited Hypothesis

Perhaps one clue to Galeotti’s own self-doubt (or perhaps it is a form of self-deception as well) is in the following statement: “my interpretation is a purely speculative hypothesis, as I will never be in the position to prove that SD was the case.” (82) If this is the case, why bother with SD at all? For Galeotti, the advantage of using SD as the “analytic tool” with which to view political conduct and policy decisions is twofold: allowing “proper attribution of responsibility to self-deceivers” and “the possibility of preventive measures against SD” (234).

In her concluding chapter, she offers a caveat, even a self-critique that undermines the very use of SD as an analytic tool (no self-doubt or self-deception here, after all): “Usually, the circumstances of political decision making, when momentous foreign policy choices are at issue, are blurred and confused both epistemically and motivationally. Sorting out simple miscalculations from genuine uncertainty, and dishonesty and duplicity from SD is often a difficult task, for, as I have shown when analyzing the cases, all these elements are present and entangled.” (240) So, SD is one of many relevant variables, but being both emotional and in one’s subconscious, it remains opaque at best, and unidentifiable at worst.

In case you are confused about SD and one’s ability to isolate it as an explanatory model with which to approach post-hoc bad political choices with grave consequences, this statement might help clarify the usefulness of SD: “if SD is to play its role as a fundamental explanation, as I contend, it cannot be conceived of as deceiving oneself, but it must be understood as an unintended outcome of mental steps elsewhere directed.” (240)

So, logically speaking, SD (self-deception) is not “deceiving oneself.” So, what is it? What are “mental steps elsewhere directed”? Of course, it is quite true, as Galeotti says, that “if lessons are to be learned from past failures, the question of SD must in any case be raised. . . Political SD is a collective product”, which is even more difficult to analyze (given its “opacity”); so how would responsibility be attributed? (244-5)

Perhaps what is missing from this careful analysis is a cold calculation of who is responsible for what and under what circumstances, regardless of SD or any other kind of subconscious desire. Would a psychoanalyst help usher in such an analysis?

Contact details: rsassowe@uccs.edu

References

Galeotti, Anna Elisabetta. Political Self-Deception. Cambridge: Cambridge University Press, 2018.

Author Information: Valerie Joly Chock & Jonathan Matheson, University of North Florida, n01051115@ospreys.unf.edu & j.matheson@unf.edu.

Matheson, Jonathan, and Valerie Joly Chock. “Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 1-9.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44H


Epistemic injustice occurs when someone is wronged in their capacity as a knower.[1] More and more attention is being paid to the epistemic injustices that exist in our scientific practices. In a recent paper, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. In what follows we briefly explain his argument before raising several challenges to it.

Overview

In “Fairness in Knowing: Science Communication and Epistemic Injustice”, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. First, let’s get clear on the target. According to Medvecky, science communication is in the business of distributing knowledge – scientific knowledge.

As Medvecky uses the term, ‘science communication’ is an “umbrella term for the research into and the practice of increasing public understanding of and public engagement with science.” (1394) Science communication is thus both a field and a practice, and consists of:

institutionalized science communication; institutionalized in government policies on the public understanding of and public engagement with the sciences; in the growing numbers of academic journals and departments committed to further the enterprise through research and teaching; in requirements set by funding bodies; and in the growing numbers of associations clustering under the umbrella of science communication across the globe. (1395)

Science communication involves the distribution of scientific knowledge from experts to non-experts, so science communication is in the distribution game. As such, Medvecky claims that issues of fair and just distribution arise. According to Medvecky, these issues concern both what knowledge is dispersed, as well as who it is dispersed to.

In examining the fairness of science communication, Medvecky connects his discussion to the literature on epistemic injustice (Anderson, Fricker, Medina). While exploring epistemic injustices in science is not novel, Medvecky’s focus on science communication is. To argue that science communication is epistemically unjust, Medvecky relies on Medina’s (2011) claim that credibility excesses can result in epistemic injustice. Here is José Medina,

[b]y assigning a level of credibility that is not proportionate to the epistemic credentials shown by the speaker, the excessive attribution does a disservice to everybody involved: to the speaker by letting him get away with things; and to everybody else by leaving out of the interaction a crucial aspect of the process of knowledge acquisition: namely, opposing critical resistance and not giving credibility or epistemic authority that has not been earned. (18-19)

Since credibility is comparative, credibility excesses given to members of some group can create epistemic injustice, testimonial injustice in particular, toward members of other groups. Medvecky makes the connection to science communication as follows:

While there are many well-argued reasons for communicating, popularizing, and engaging with science, these are not necessarily reasons for communicating, popularizing, and engaging only with science. Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialized treatment.

This uniqueness creates a credibility excess for science as a field. And since science communication creates credibility excess by implying that concerted efforts to communicate non-science disciplines as fields of reliable knowledge is not needed, then science communication, as a practice and as a discipline, is epistemically unjust. (1400)

While the principal target here is the field of science communication, any credibility excesses enjoyed by the field will trickle down to the practitioners within it. If science is being given a credibility excess, then those engaged in scientific practice and communication are also receiving such a comparative advantage over non-scientists.

So, according to Medvecky, science communication is epistemically unjust to knowers – knowers in non-scientific fields. Since these non-scientific knowers are given a comparative credibility deficit (in contrast to scientific knowers), they are wronged in their capacity as knowers.

The Argument

Medvecky’s argument can be formally put as follows:

  1. Science is not a unique and privileged field.
  2. If (1), then science communication creates a credibility excess for science.
  3. Science communication creates a credibility excess for science.
  4. If (3), then science communication is epistemically unjust.
  5. Science communication is epistemically unjust.

Premise (1) is motivated by claiming that there are fields other than science that are equally important to communicate, to popularize, and to have non-specialists engage with. Medvecky claims that not only does non-scientific knowledge exist, but such knowledge can be just as reliable as scientific knowledge, just as important to our lives, and just as in need of translation into layman’s terms. So, while scientific knowledge is surely important, it is not alone in these respects.

Premise (2) is motivated by claiming that science communication falsely represents science as a unique and privileged field since the concerns of science communication lie solely within the domain of science. By only communicating scientific knowledge, and failing to note that there are other worthy domains of knowledge, science communication falsely presents itself as a privileged field.

As Medvecky puts it, “Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialised treatment.” (1400) So, science communication falsely represents science as special. Falsely representing a field as special in contrast to other fields creates a comparative credibility excess for that field and the members of it.

So, science communication implies that other fields are not as worthy of such engagement by falsely treating science as a unique and privileged field. This gives science and scientists a comparative credibility excess to these other disciplines and their practitioners.

(3) follows validly from (1) and (2). If (1) and (2) are true, science communication creates a credibility excess for science.

Premise (4) is motivated by Medina’s (2011) work on epistemic injustice. Epistemic injustice occurs when someone is harmed in their capacity as a knower. While Fricker limited epistemic injustice (and testimonial justice in particular) to cases where someone was given a credibility deficit, Medina has forcefully argued that credibility excesses are equally problematic since credibility assessments are often comparative.

Given the comparative nature of credibility assessments, parties can be epistemically harmed even if they are not given a credibility deficit. If other parties are given credibility excesses, a similar epistemic harm can be brought about due to comparative assessments of credibility. So, if science communication gives science a credibility excess, science communication will be epistemically unjust.

(5) follows validly from (3) and (4). If (3) and (4) are true, science communication is epistemically unjust.
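
To make the logical skeleton explicit, the argument can be rendered schematically; this rendering is ours, not Medvecky’s. Let $U$ stand for premise (1), $C$ for “science communication creates a credibility excess for science,” and $E$ for “science communication is epistemically unjust”:

$$U, \quad U \rightarrow C \;\vdash\; C \qquad \text{and then} \qquad C, \quad C \rightarrow E \;\vdash\; E.$$

Each step is an instance of modus ponens, so the argument’s validity is not in question; everything turns on the truth of premises (1), (2), and (4).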

The Problems

While Medvecky’s argument is provocative, we believe that it is also problematic. In what follows we motivate a series of objections to his argument. Our focus here will be on the premises that most directly relate to epistemic injustice. So, for our purposes, we are willing to grant premise (1). Even granting (1), there are significant problems with both (2) and (4). Highlighting these issues will be our focus.

We begin with our principal concerns regarding (2). These concerns are best seen by first granting that (1) is true – granting that science is not a unique and privileged field. Even granting that (1) is true, science communication would not create a credibility excess. First, it is important to try to locate the source of the alleged credibility excess. Science communicators do deserve a higher degree of credibility in distributing scientific knowledge than non-scientists. When it comes to scientific matters, we should trust the scientists more. So, the claim cannot be that non-scientists should be afforded the same amount of credibility on scientific matters as scientists.

The problem might be thought to be that scientists enjoy a credibility excess in virtue of their scientific credibility somehow carrying over to non-scientific fields where they are less credible. While Medvecky does briefly consider such an issue, this too is not his primary concern in this paper.[2] Medvecky’s fundamental concern is that science communication represents scientific questions and knowledge as more valuable than questions and knowledge in other domains. According to Medvecky, science communication does this by only distributing scientific knowledge when this is not unique and privileged (premise (1)).

But do you represent a domain as more important or valuable just because you don’t talk about other domains? Perhaps an individual who only discussed science in every context would imply that scientific information is the only information worth communicating, but such a situation is quite different than the one we are considering.

For one thing, science communication occurs within a given context, not across all contexts. Further, since that context is expressly about communicating science, it is hard to see how one could reasonably infer that knowledge in other domains is less valuable. Let’s consider an analogy.

Philosophy professors tend to only talk about philosophy during class (or at least let’s suppose). Should students in a philosophy class conclude that other domains of knowledge are less valuable since the philosophy professor hasn’t talked about developments in economics, history, biology, and so forth during class? Given that the professor is only talking about philosophy in one given context, and this context is expressly about communicating philosophy, such inferences would be unreasonable.

A Problem of Overreach

We can further see that there is an issue with (2) because it both overgeneralizes and is overly demanding. Let’s consider these in turn. If (2) is true, then the problem of creating credibility excesses is not unique to science communication. When it comes to knowledge distribution, science communication is far from the only practice/field to have a narrow and limited focus regarding which knowledge it distributes.

So, if there are multiple fields worthy of such engagement (granting (1)), any practice/field that is not concerned with distributing all such knowledge will be guilty of generating a similar credibility excess (or at least trying to). For instance, the American Philosophical Association (APA) is concerned with distributing philosophical knowledge and knowledge related to the discipline of philosophy. They exclusively fund endeavors related to philosophy and public initiatives with a philosophical focus. If doing so is sufficient for creating a credibility excess, given that other fields are equally worthy of such attention, then the APA is creating a credibility excess for the discipline of philosophy. This doesn’t seem right.

Alternatively, consider a local newspaper. This paper is focused on distributing knowledge about local issues. Suppose that it also is involved in the community, both sponsoring local events and initiatives that make the local news more engaging. Supposing that there is nothing unique or privileged about this town, Medvecky’s argument for (2) would have us believe that the paper is creating a credibility excess for the issues of this town. This too is the wrong result.

This overgeneralization problem can also be seen by considering a practical analogy. Suppose that a bakery only sells and distributes baked goods. If there is nothing unique and privileged about baked goods – if there are other equally important goods out there (the parallel of premise (1)) – then Medvecky’s reasoning would have it that the bakery is guilty of a kind of injustice by virtue of not being in the business of distributing those other (equally valuable) goods.

The problem is that omissions in distribution don’t have the implications that Medvecky supposes. The fact that an individual or group is not in the business of distributing some kind of good does not imply that those goods are less valuable.

There are numerous legitimate reasons why one may employ limitations regarding which goods one chooses to distribute, and these limitations do not imply that the other goods are somehow less valuable. Returning to the good of knowledge, focusing on distributing some knowledge (while not distributing other knowledge), does not imply that the other knowledge is less valuable.

This overgeneralization problem leads to an overdemanding problem with (2). The overdemanding problem concerns what all would be required of distributors (whether of knowledge or more tangible goods) in order to avoid committing injustice. If omissions in distribution had the implications that Medvecky supposes, then distributors, in order to avoid injustice, would have to refrain from limiting the goods they distribute.

If (2) is true, then science communication must fairly and equally distribute all knowledge in order to avoid injustice. And, as the problem of creating credibility excesses is not unique to science communication, this would apply to all other fields that involve knowledge distribution as well. The problem here is that avoiding injustice requires far too much of distributors.

An Analogy to Understand Avoiding Injustice

Let’s consider the practical analogy again to see how avoiding injustice is overdemanding. To avoid injustice, the bakery must sell and distribute much more than just baked goods. It must sell and distribute all the other goods that are as important as the baked ones it offers. The bakery would, then, have to become a supermarket or perhaps even a superstore in order to avoid injustice.

Requiring the bakery to offer far more than baked goods is not only overly demanding but also unfair. The bakery does not stock the other goods it would be required to offer in order to avoid injustice. It may not even have the means to acquire these goods, which may itself be part of its reason for limiting the goods it offers.

As it is overdemanding and unfair to require the bakery to sell and distribute all goods in order to avoid injustice, it is overdemanding and unfair to require knowledge distributors to distribute all knowledge. Just as the bakery does not have non-baked goods to offer, those involved in science communication likely do not have the relevant knowledge in the other fields.

Thus, if they are required to distribute that knowledge as well, they have a lot of homework to do. They would have to learn about everything in order to justly distribute all knowledge. This is an unreasonable expectation. Even if they could manage it, they would not be able to distribute all knowledge in a timely manner. Requiring this much of distributors would slow down the distribution of knowledge.

Furthermore, just as the bakery may not have the means needed to distribute all the other goods, distributors may not have the time or other means to distribute all the knowledge that they are required to distribute in order to avoid injustice. It is reasonable to utilize an epistemic division of labor (including in knowledge distribution), much like there are divisions of labor more generally.

Credibility Excess

A final issue with Medvecky’s argument concerns premise (4). Premise (4) claims that the credibility excess in question results in epistemic injustice. While it is true that a credibility excess can result in epistemic injustice, it need not. So, we need reasons to believe that this particular kind of credibility excess results in epistemic injustice. One reason to think that it does not has to do with the meaning of the term ‘epistemic injustice’ itself.

As it was introduced to the literature by Fricker, and as it has been used since, ‘epistemic injustice’ does not refer simply to any harm to a knower but rather to a particular kind of harm that involves identity prejudice, i.e. prejudice related to one’s social identity. Fricker claims that “the speaker sustains a testimonial injustice if and only if she receives a credibility deficit owing to identity prejudice in the hearer” (Fricker 2007, 28).

At the core of both Fricker’s and Medina’s accounts of epistemic injustice is the relation between unfair credibility assessments and prejudices that distort the hearer’s perception of the speaker’s credibility. Prejudices about particular groups are what unfairly affect (positively or negatively) the epistemic authority and credibility hearers grant to the members of such groups.

Mere epistemic errors in credibility assessments, however, do not create epistemic injustice. While a credibility excess may result in an epistemic harm, whether this is a case of epistemic injustice depends upon the reason why that credibility excess is given. Fricker and Medina both argue that in order for an epistemic harm to be an instance of epistemic injustice, it must be systematic. That is, the epistemic harm must be connected to an identity prejudice that renders the subject at the receiving end of the harm susceptible to other types of injustice besides the testimonial.

Fricker argues that epistemic injustice is the product of prejudices that “track” the subject through different dimensions of social activity (e.g. economic, professional, political, religious, etc.). She calls these “tracker prejudices” (Fricker 2007, 27). When tracker prejudices lead to epistemic injustice, this injustice is systematic because it is systematically connected to other kinds of injustice.

Thus, a prejudice is systematic when it persistently affects the subject’s credibility in various social directions. Medina accepts this and argues that credibility excess results in epistemic injustice when it is caused by a pattern of wrongful differential treatment that stems in part from mismatches between reality and the social imaginary, which he defines as the collectively shared pool of information that provides the social perceptions against which people assess each other’s credibility (Medina 2011).

He claims that a prejudiced social imaginary is what establishes and sustains epistemic injustices. As such, prejudices are crucial in determining whether credibility excesses result in epistemic injustice. If the credibility excess stems from a systematically prejudiced social imaginary, then epistemic injustice results. If systematic prejudices are absent, then, even if there is credibility excess, there is no epistemic injustice.

Systemic Prejudice

For there to be epistemic injustice, then, the credibility excess must carry over across contexts and must be produced and sustained by systematic identity prejudices. This does not happen in Medvecky’s account given that the kind of credibility excess that he is concerned with is limited to the context in which science communication occurs.

Thus, even if there were credibility excess, and this credibility excess led to epistemic harms, such harms would not amount to epistemic injustice given that the credibility excess does not extend across contexts. Further, the kind of credibility excess that Medvecky is concerned with is not linked to systematic identity prejudices.

In his argument, Medvecky does not consider prejudices. Rather than credibility excesses being granted due to a prejudiced social imaginary, Medvecky argues that the credibility excess attributed to science communicators stems from omission. According to him, science communication as a practice and as a discipline is epistemically unjust because it creates credibility excess by implying (through omission) that science is the only reliable field worthy of engagement.

On Medvecky’s account, the reason for the attribution of credibility excess is not prejudice but rather the limited focus of science communication. Thus, he argues that merely by not distributing knowledge from fields other than science, science communication creates a credibility excess for science that is worthy of the label of ‘epistemic injustice’. Medvecky acknowledges that Fricker would not agree that this credibility assessment results in injustice given that it is based on credibility excess rather than credibility deficits, which is itself why he bases his argument on Medina’s account of epistemic injustice.

However, given that Medvecky ignores the kind of systematic prejudice that is necessary for epistemic injustice under Medina’s account, it seems that Medina would not agree, either, that these cases are of the kind that result in epistemic injustice.[3] Even if omissions in the distribution of knowledge had the implications that Medvecky supposes, and science communication indeed created a credibility excess for science in this way, this kind of credibility excess would still not be sufficient for epistemic injustice as it is understood in the literature.

Thus, it is not the case that science communication is, as Medvecky argues, fundamentally epistemically unjust: the reasons why the credibility excess is attributed have nothing to do with prejudice, and the excess does not extend across contexts. While there may be epistemic harms that have nothing to do with prejudice, such harms would not amount to epistemic injustice, at least as it is traditionally understood.

Conclusion

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that epistemic injustice lies at the very foundation of science communication. While we agree that there are numerous ways that scientific practices are epistemically unjust, the fact that science communication involves only communicating science does not have the consequences that Medvecky maintains.

We have seen several reasons to deny that failing to distribute other kinds of knowledge implies that they are less valuable than the knowledge one does distribute, as well as reasons to believe that the term ‘epistemic injustice’ wouldn’t apply to such harms even if they did occur. So, while thought provoking and bold, Medvecky’s argument should be resisted.

Contact details: j.matheson@unf.edu, n01051115@ospreys.unf.edu

References

Dotson, K. (2011). Tracking epistemic violence, tracking patterns of silencing. Hypatia 26(2): 236–257.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Medina, J. (2011). The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology, 25(1), 15–35.

Medvecky, F. (2018). Fairness in Knowing: Science Communication and Epistemic Justice. Science and Engineering Ethics 24: 1393-1408.

[1] This is Fricker’s description, See Fricker (2007, p. 1).

[2] Medvecky considers Richard Dawkins being given more credibility than he deserves on matters of religion due to his credibility as a scientist.

[3] A potential response to this point could be to consider scientism as a kind of prejudice akin to sexism or racism. Perhaps an argument can be made where an individual has the identity of ‘science communicator’ and receives credibility excess in virtue of an identity prejudice that favors science communicators. Even so, to count as epistemic injustice this excess must track the individual across contexts, as the identities targeted by sexism and racism do. For that to be the case, a successful argument must be given for there being a ‘pro science communicator’ prejudice that is similar in effect to ‘pro male’ and ‘pro white’ prejudices. If this is what Medvecky has in mind, then we need to hear much more about why we should buy the analogy here.

Author Information: Adam Morton, University of British Columbia, adam.morton@ubc.ca.

Morton, Adam. “Could It Be a Conditional?” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 28-30.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-41M

Image by Squiddles via Flickr / Creative Commons

 

Chris Tweedt proposes that there is no independent concept of contrastive knowledge. He allows that we can meaningfully and in fact helpfully say that a person knows that p rather than q. But this is shorthand for something that can be said in a more traditional way: that the person knows that if p or q then p. I have two worries about this line. First, I do not know how to understand the conditional here. And second, I suspect that the suggested interpretation takes away the motive for using a contrastive idiom in the first place.

What Kind of Conditional?

So, could “Sophia knows that it is a goldfinch rather than a canary” mean “Sophia knows that if it is a goldfinch or a canary then it is a goldfinch”? What could “if” mean for this to be plausible? The simplest possibility is that it is a material conditional. But this cannot be right.

Sophia, who knows very little about small birds, sees an eagle land on a nearby high branch. From its size and distinctive shape she can tell immediately that it is a large raptor and not a little seed-eater such as a goldfinch or canary. That means she will know that “(Goldfinch ∨ Canary) ⊃ Goldfinch” is true, because she knows that the antecedent is false. For the same reason she will know that “(Goldfinch ∨ Canary) ⊃ Canary” is true. But surely she knows neither that it is a goldfinch rather than a canary nor that it is a canary rather than a goldfinch, and certainly not both.
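The failure of the material reading can be checked mechanically. Here is a minimal Boolean sketch (my own illustration, not Tweedt’s or Morton’s formalism), in which `implies` stands for the horseshoe; with both atomic claims false, both conditionals come out vacuously true:

```python
def implies(a: bool, b: bool) -> bool:
    """Material conditional: false only when the antecedent is true and the consequent false."""
    return (not a) or b

# Sophia sees an eagle, so "it is a goldfinch" and "it is a canary" are both false.
goldfinch, canary = False, False

# Both horseshoe conditionals are vacuously true...
print(implies(goldfinch or canary, goldfinch))  # True
print(implies(goldfinch or canary, canary))     # True
# ...yet Sophia plainly knows neither "goldfinch rather than canary"
# nor "canary rather than goldfinch", so the material reading overgenerates.
```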

Perhaps then it is a subjunctive (counterfactual) conditional: if it had been a goldfinch or a canary then it would have been a canary. I suppose there conceivably are circumstances where a high-tech procedure could transform a bird embryo into one of a different species. It might be that the only such procedure possible can transform bird embryos into canaries but never into goldfinches. Suppose this is so.

Now suppose that Sophia’s cousin Sonia is an expert ornithologist and knows at a glance what species the blue tit a metre away is. But she also knows about the embryo-transforming procedure, so she knows that if it had been a goldfinch or a canary then it would have been a canary. So she knows that it is a canary rather than a goldfinch? Of course not.

The remaining possibility is that it is an indicative conditional. For many philosophers these are just material conditionals, so that won’t do. But for others they are a distinct kind. One way of paraphrasing the resulting interpretation is as “if it turns out to be a goldfinch or a canary, it will turn out to be a goldfinch”. This is still not suitable. Suppose Sonia knows immediately that it is a blue tit but is dealing with an ignorant person who doubts her judgement. She admits that there are other things it could, on closer examination (which in fact is not necessary), turn out to be.

And then goldfinch would be more likely to result than canary. So she accepts this particular indicative conditional (if goldfinch or canary then goldfinch). But she too does not know that it is a goldfinch rather than a canary, because she knows it is a blue tit. (For the differences between kinds of conditionals see Jonathan Bennett, A Philosophical Guide to Conditionals. Oxford: Clarendon Press, 2003.)

Understanding the Contrastive Idiom

These may be problems about formulating the claim rather than about the underlying intention. However I do not think that any version of the idea that all uses of “knows that p rather than q” can be represented as choosing the least wrong from a list of alternatives will work. For one use of the contrastive idiom is to describe limitations in a person’s ability to distinguish possibilities.

Consider four people with varying degrees of red/green colour blindness but with otherwise normal human colour-distinguishing capacities. (Sorry, it has to be four. For the distinctions see https://en.wikipedia.org/wiki/Color_blindness.)

Alyosha has normal r/g vision;

Boris partial capacity (say 70% of normal);

Yekaterina limited capacity (say 40% of normal);

Zenaida no r/g discrimination at all.

They are each presented with one of those familiar colour charts, one in which the most salient figure 3 in vivid green is completed to 8 in dull orange against a background of orangy murkiness. Alyosha knows that it is a 3, so that it is 3 rather than 7 and that it is 3 rather than 8. As a result he knows both that if it is 3 or 8 it is 3 and that if it is 3 or 7 it is 3. Boris can see that it is either 3 or 8; he is not sure which but thinks it is 3.

So he knows that it is 3-or-8 rather than 7 but not that it is 3 rather than 7 (since for all he knows it might be 8 rather than 3). He also knows that if it is 3-or-8 or 7 then it is 3-or-8, and that if it is 3 or 7 then it is 3 (since the antecedent of the conditional rules out 8). Yekaterina thinks that it is 3 or 8, but she has no idea which. She knows that if it is 3 or 7 then it is 3, and that if it is 8 or 7 then it is 8, but nothing more from these possibilities.

Finally Zenaida. She hasn’t a clue about anything needing r/g discrimination and has none of this knowledge. I am assuming that all factors except for r/g discrimination are favourable to knowledge for all four people.

All of these descriptions are natural applications of the “knows rather than” construction in English. They show a fine-grained transition from full contrast to none and in particular that the “if p or q then p” versions appear and disappear at different stages in the transition than the “p rather than q” versions do. That is the point of the contrastive construction, to allow us to make these distinctions.
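The divergence can also be set out in a toy model. The sketch below is my own gloss, not Morton’s formalism: each person’s epistemic state is represented as the set of figures they cannot rule out, and a conditional counts as known when its consequent holds in every compatible figure where its antecedent holds. It reproduces the conditional-knowledge pattern described above:

```python
# Figures the chart could show are 3, 7, and 8.
# Each agent's epistemic state: the figures they cannot rule out.
compatible = {
    "Alyosha": {3},        # normal r/g vision: sees that it is 3
    "Boris": {3, 8},       # partial capacity: has narrowed it to 3 or 8
    "Yekaterina": {3, 8},  # limited capacity: also 3 or 8, with no leaning
    "Zenaida": {3, 7, 8},  # no r/g discrimination: cannot rule anything out
}

def knows_conditional(agent: str, antecedent: set, consequent: set) -> bool:
    """The agent knows 'if antecedent then consequent' iff the consequent holds
    in every figure the agent cannot rule out in which the antecedent holds."""
    return all(w in consequent for w in compatible[agent] if w in antecedent)

print(knows_conditional("Boris", {3, 7}, {3}))        # True: if 3 or 7 then 3
print(knows_conditional("Boris", {3, 7, 8}, {3, 8}))  # True: if 3-or-8 or 7 then 3-or-8
print(knows_conditional("Yekaterina", {7, 8}, {8}))   # True: if 8 or 7 then 8
print(knows_conditional("Zenaida", {3, 7}, {3}))      # False: nothing left to discriminate
```

Note that nothing in this table settles whether Boris knows that it is 3 rather than 7; that is precisely the further fact the contrastive idiom is needed to express.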

Contact details: adam.morton@ubc.ca

References

Bennett, Jonathan. A Philosophical Guide to Conditionals. Oxford: Clarendon Press, 2003.

Tweedt, Chris. “Solving the Problem of Nearly Convergent Knowledge.” Social Epistemology 32 (2018): 219-227.

Author Information: Claus-Christian Carbon, University of Bamberg, ccc@experimental-psychology.com

Carbon, Claus-Christian. “A Conspiracy Theory is Not a Theory About a Conspiracy.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 22-25.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yb

See also:

  • Dentith, Matthew R. X. “Expertise and Conspiracy Theories.” Social Epistemology 32, no. 3 (2018), 196-208.

The power, creation, imagery, and proliferation of conspiracy theories are fascinating avenues to explore in the construction of public knowledge and the manipulation of the public for nefarious purposes. Their role in constituting our pop cultural imaginary and as central images in political propaganda are fertile ground for research.
Image by Neil Moralee via Flickr / Creative Commons

 

The simplest and most natural definition of a conspiracy theory is a theory about a conspiracy. Although this definition seems appealing due to its simplicity and straightforwardness, the problem is that most narratives about conspiracies do not fulfill the necessary requirements of being a theory. In everyday speech, mere descriptions, explanations, or even beliefs are often termed “theories”—such loose usage of this technical term is not useful in the context of scientific activities.

Here, a theory does not aim to explain one specific event in time, e.g. the moon landing of 1969 or the assassination of President Kennedy in 1963, but aims at explaining a phenomenon on a very general level; e.g. that things with mass as such gravitate toward one another—independently of the specific natures of such entities. Such an epistemological status is rarely achieved by conspiracy theories, especially the ones about specific events in time. Even the more general claim that so-called chemtrails (i.e. long-lasting condensation trails) are initiated by omnipotent organizations across the planet, across time zones and altitudes, is at most a hypothesis – a rather narrow one – that specifically addresses one phenomenon but lacks the capability to make predictions about other phenomena.

Narratives that Shape Our Minds

So-called conspiracy theories have had a great impact on human history, on the social interaction between groups, the attitude towards minorities, and the trust in state institutions. There is very good reason to include “conspiracy theories” in the canon of influential narratives, and so it is only logical to direct scientific effort toward explaining and understanding how they operate, how people come to believe in them, and how humans build up knowledge on the basis of these narratives.

A brief look at publications indexed by Clarivate Analytics’ Web of Science documents 605 records with “conspiracy theories” as the topic (as of 7 May 2018). These contributions come mostly from psychology (n=91) and political science (n=70), with a steep increase from about 2013 on, probably due to a special issue (“Research Topic”) in the journal Frontiers in Psychology organized in 2012 and 2013 by Viren Swami and Christopher Charles French.

As we have repeatedly argued (e.g., Raab, Carbon, & Muth, 2017), conspiracy theories are a very common phenomenon. Most people believe in at least some of them (Goertzel, 1994), which already indicates that believers in them do not belong to a minority group, but that it is more or less the conditio humana to include such narratives in the everyday belief system.

So first of all, we can state that most such beliefs are neither pathological nor rare (see Raab, Ortlieb, Guthmann, Auer, & Carbon, 2013), but are largely caused by “good”[1] narratives triggered by context factors (Sapountzis & Condor, 2013) such as a society marked by distrust. The wide acceptance of many conspiracy theories can further be explained by adaptation effects that bias the standard beliefs (Raab, Auer, Ortlieb, & Carbon, 2013). This view is not undisputed, as many authors identify specific pathological personality traits such as paranoia (Grzesiak-Feldman & Ejsmont, 2008; Pipes, 1997) which cause, enable, or at least promote the belief in conspiracy theories.

In fact, in science we mostly encounter the pathological and pejorative view of conspiracy theories and their believers. This negative connotation, and hence the prejudice toward conspiracy theories, makes it hard to solidly test the stated facts, ideas, or relationships proposed by such explanatory structures (Rankin, 2017). Since this holds especially for conspiracy theories of the so-called “type I”, in which authorities (“the system”) are accused of conspiracies (Wagner-Egger & Bangerter, 2007), such a prejudice can potentially jeopardize the democratic system (Bale, 2007).

Some of the conspiracies described in conspiracy theories, namely those alleged to take place at top state levels, could indeed threaten people’s freedom, democracy, and even their lives, especially if they turned out to be “true” (e.g. the case of the whistleblower and previously alleged conspiracist Edward Snowden; see Van Puyvelde, Coulthart, & Hossain, 2017).

Understanding What a Theory Genuinely Is

In the present paper, I will focus on another, yet highly important, point which is hardly addressed at all: Is the term “conspiracy theory” an adequate term at all? In fact, the suggestion of a conspiracy theory being a “theory about a conspiracy” (Dentith, 2014, p. 30) is indeed the simplest and seemingly most straightforward definition of “conspiracy theory”. Although appealing and allegedly logical, the term as such is ill-defined. Actually, a “conspiracy theory” refers to a narrative which attributes an event to a group of conspirators. As such, it is clearly justified to associate such a narrative with the term “conspiracy”, but does a conspiracy theory have the epistemological status of a theory?

The simplest definition of a “theory” is that it represents a bundle of hypotheses which can explain a wide range of phenomena. Theories have to integrate the contained hypotheses in a concise, coherent, and systematic way. They have to go beyond the mere piling up of several statements or unlinked hypotheses. The application of theories allows events or entities which are not explicitly described in the sum of the hypotheses to be generalized and hence to be predicted.

For instance, one of the most influential physical theories, the theory of special relativity (German original title “Zur Elektrodynamik bewegter Körper”), contains two hypotheses (Einstein, 1905) on whose basis, in addition to already existing theories, we can predict important things which are not explicitly stated in the theory itself. Most are well aware that mass and energy are equivalent. Whether we are analyzing the energy of a tossed ball or a static car, we can use the very same theory. Whether the ball is red or whether it is a blue ball thrown by Napoleon Bonaparte does not matter—we just need to refer to the mass of the ball; in fact we are only interested in the mass as such, and the ball does not play a role anymore. Other theories show similar predictive power: for instance, they can predict (more or less precisely) events in the future, the location of various types of material in a magnetic field, or the trajectory of objects of different speed due to gravitational power.
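To make that predictive generality concrete, here is the equivalence applied to an arbitrary one-kilogram mass (a standard textbook illustration added here, not an example from the original text):

```latex
\[
  E \;=\; mc^{2}
    \;=\; 1\,\mathrm{kg} \times \left(3 \times 10^{8}\,\mathrm{m/s}\right)^{2}
    \;=\; 9 \times 10^{16}\,\mathrm{J}
\]
```

The formula is indifferent to what the kilogram belongs to, whether ball, car, or planet; it is exactly this indifference to the particular case that singular conspiracy narratives lack.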

Most conspiracy theories, however, refer to one single historical event. Looking through the “most enduring conspiracy theories” compiled in 2009 by TIME magazine on the 40th anniversary of the moon landing, it is instantly clear that they have explanatory power for just the specific events on which they are based, e.g. the “JFK assassination” in 1963, the “9/11 cover-up” in 2001, the “moon landings were faked” idea from 1969 or the “Paul is dead” storyline about Paul McCartney’s alleged secret death in 1966. In fact, such theories are just singular explanations, mostly ignoring counter-facts, alternative explanations and already given replies (Votsis, 2004).

But what, then, is the epistemological status of such narratives? Clearly, they aim to explain – and sometimes the explanations are indeed compelling, even coherent. What they mostly cannot demonstrate, though, is the ability to predict other events in other contexts. If these narratives belong to this class of explanatory stories, we should be less liberal in calling them “theories”. Unfortunately, it was Karl Popper himself who coined the term “conspiracy theory” in the 1940s (Popper, 1949)—the same Popper who advocated very strict criteria for scientific theories and in so doing became one of the most influential philosophers of science (Suppe, 1977). This imprecise terminology diluted the genuine meaning of (scientific) theories.

Stay Rigorous

From a language pragmatics perspective, it seems odd to abandon the term conspiracy theory, as it is a widely established and frequently used term in everyday language around the globe. Substitutions like conspiracy narratives, conspiracy stories or conspiracy explanations would fit much better, but acceptance of such terms might be quite low. Nevertheless, we should at least bear in mind that most narratives of this kind cannot qualify as theories and so cannot ground a wider research program, although their contents and implications are often far-reaching, potentially important for society, and hence, in some cases, also worth checking.

Contact details: ccc@experimental-psychology.com

References

Bale, J. M. (2007). Political paranoia v. political realism: on distinguishing between bogus conspiracy theories and genuine conspiratorial politics. Patterns of Prejudice, 41(1), 45-60. doi:10.1080/00313220601118751

Dentith, M. R. X. (2014). The philosophy of conspiracy theories. New York: Palgrave.

Einstein, A. (1905). Zur Elektrodynamik bewegter Körper [On the electrodynamics of moving bodies]. Annalen der Physik und Chemie, 17, 891-921.

Goertzel, T. (1994). Belief in conspiracy theories. Political Psychology, 15(4), 731-742.

Grzesiak-Feldman, M., & Ejsmont, A. (2008). Paranoia and conspiracy thinking of Jews, Arabs, Germans and Russians in a Polish sample. Psychological Reports, 102(3), 884.

Pipes, D. (1997). Conspiracy: How the paranoid style flourishes and where it comes from. New York: Simon & Schuster.

Popper, K. R. (1949). Prediction and prophecy and their significance for social theory. Paper presented at the Proceedings of the Tenth International Congress of Philosophy, Amsterdam.

Raab, M. H., Auer, N., Ortlieb, S. A., & Carbon, C. C. (2013). The Sarrazin effect: The presence of absurd statements in conspiracy theories makes canonical information less plausible. Frontiers in Personality Science and Individual Differences, 4(453), 1-8.

Raab, M. H., Carbon, C. C., & Muth, C. (2017). Am Anfang war die Verschwörungstheorie [In the beginning, there was the conspiracy theory]. Berlin: Springer.

Raab, M. H., Ortlieb, S. A., Guthmann, K., Auer, N., & Carbon, C. C. (2013). Thirty shades of truth: conspiracy theories as stories of individuation, not of pathological delusion. Frontiers in Personality Science and Individual Differences, 4(406).

Rankin, J. E. (2017). The conspiracy theory meme as a tool of cultural hegemony: A critical discourse analysis. (PhD), Fielding Graduate University, Santa Barbara, CA.

Sapountzis, A., & Condor, S. (2013). Conspiracy accounts as intergroup theories: Challenging dominant understandings of social power and political legitimacy. Political Psychology. doi:10.1111/pops.12015

Suppe, F. (Ed.) (1977). The structure of scientific theories (2nd ed.). Urbana: University of Illinois Press.

Van Puyvelde, D., Coulthart, S., & Hossain, M. S. (2017). Beyond the buzzword: Big data and national security decision-making. International Affairs, 93(6), 1397-1416. doi:10.1093/ia/iix184

Votsis, I. (2004). The epistemological status of scientific theories: An investigation of the structural realist account. (PhD), London School of Economics and Political Science, London.

Wagner-Egger, P., & Bangerter, A. (2007). The truth lies elsewhere: Correlates of belief in conspiracy theories. Revue Internationale De Psychologie Sociale-International Review of Social Psychology, 20(4), 31-61.

[1] It is important to stress that a “good narrative” in this context means “an appealing story” in which people are interested; by no means does the author want to allow confusion by suggesting the meaning as being “positive”, “proper”, “adequate” or “true”.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

Please refer to:

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

 

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, he compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in his case, by pension funds) involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, he claims, produces worse outcomes than monitoring only results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather “denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency” (Epstein 1996: 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

 

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to judge whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort, with the skills and resources to do this sort of monitoring, developing competence but with interests more closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to more deeply engage with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review, 111, no. 2 (2017) pp. 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author information: Kjartan Koch Mikalsen, Norwegian University of Science and Technology, kjartan.mikalsen@ntnu.no.

Mikalsen, Kjartan Koch. “An Ideal Case for Accountability Mechanisms, the Unity of Epistemic and Democratic Concerns, and Skepticism About Moral Expertise.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 1-5.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3S2

Please refer to:

Image from Birdman Photos, via Flickr / Creative Commons

 

How do we square democracy with pervasive dependency on experts and expert arrangements? This is the basic question of Cathrine Holst and Anders Molander’s article “Public deliberation and the fact of expertise: making experts accountable.” Holst and Molander approach the question as a challenge internal to a democratic political order. Their concern is not whether expert rule might be an alternative to democratic government.

Rather than ask if the existence of expertise raises an “epistocratic challenge” to democracy, they “ask how science could be integrated into politics in a way that is consistent with democratic requirements as well as epistemic standards” (236).[1] Given commitment to a normative conception of deliberative democracy, what qualifies as a legitimate expert arrangement?

Against the backdrop of epistemic asymmetry between experts and laypersons, Holst and Molander present this question as a problem of accountability. When experts play a political role, we need to ensure that they really are experts and that they practice their expert role properly. I believe this is a compelling challenge, not least in view of expert disagreement and contestation. In a context where we lack sufficient knowledge and training to assess directly the reasoning behind contested advice, we face a non-trivial problem of deciding which expert to trust. I also agree that the problem calls for institutional measures.

However, I do not think such measures simply answer to a non-ideal problem related to untrustworthy experts. The need for institutionalized accountability mechanisms runs deeper. Nor am I convinced by the idea that introducing such measures involves balancing “the potential rewards from expertise against potential deliberative costs” (236). Finally, I find it problematic to place moral expertise side-by-side with scientific expertise in the way Holst and Molander do.

Accountability Mechanisms: More than Non-ideal Remedies

To meet the challenge of epistemic asymmetry combined with expert disagreement, Holst and Molander propose three sets of institutional mechanisms for scrutinizing the work of expert bodies (242-43). First, in order to secure compliance with basic epistemic norms, they propose laws and guidelines that specify investigation procedures in some detail, procedures for reviewing expert performance and for excluding experts with a bad track record, as well as sanctions against sloppy work.

Second, in order to review expert judgements, they propose checks in the form of fora comprising peers, experts in other fields, bureaucrats and stakeholders, legislators, or the public sphere. Third, in order to assure that expert groups work under good conditions for inquiry and judgment, they propose organizing the work of such groups in a way that fosters cognitive diversity.

According to Holst and Molander, these measures have a remedial function. Their purpose is to counter the misbehavior of non-ideal experts, that is, experts whose behavior and judgements are biased or influenced by private interests. The measures concern unreasonable disagreement rooted in experts’ over-confidence or partiality, as opposed to reasonable disagreement rooted in “burdens of judgement” (Rawls 1993, 54). By targeting objectionable conduct and reasoning, they reduce the risk of fallacies and the “intrusion of non-epistemic interests and preferences” (242). In this way, they increase the trustworthiness of experts.

As I see it, this is to attribute a too limited role to the proposed accountability mechanisms. While they might certainly work in the way Holst and Molander suggest, it is doubtful whether they would be superfluous if all experts were ideal experts without biases or conflicting interests.

Even ideal experts are fallible and have partial perspectives on reality. The ideal expert is not omniscient, but a finite being who perceives the world from a certain perspective, depending on a range of contingent factors, such as training in a particular scientific field, basic theoretical assumptions, methodological ideals, subjective expectations, and so on. The ideal expert is aware that she is fallible and that her own point of view is just one among many others. We might therefore expect that she does not easily become a victim of overconfidence or confirmation bias. Yet, given the unavoidable limits of an individual’s knowledge and intellectual capacity, no expert can know what the world looks like from all other perspectives and no expert can be safe from misjudgments.

Accordingly, subjecting expert judgements to review and organizing diverse expert groups is important no matter how ideal the expert. There seems to be no other way to test the soundness of expert opinions than to check them against the judgements of other experts, other forms of expertise, or the public at large. Similarly, organizing diverse expert groups seems like a sensible way of bringing out all relevant facts about an issue even in the case of ideal experts. We do not have to suspect anyone of bias or pursuance of self-serving interests in order to justify these kinds of institutional measures.

Image by Birdman Photos via Flickr / Creative Commons

 

No Trade-off Between Democratic and Epistemic Concerns

An important aspect of Holst and Molander’s discussion of how to make experts accountable is the idea that we need to balance the epistemic value of expert arrangements against democratic concerns about inclusive deliberation. While they point out that the mechanisms for holding experts to account can democratize expertise in ways that leads to epistemic enrichment, they also warn that inclusion of lay testimony or knowledge “can result in undue and disproportional consideration of arguments that are irrelevant, obviously invalid or fleshed out more precisely in expert contributions” (244).

There is of course always the danger that things go wrong, and that the wrong voices win through. Yet, the question is whether this risk forces us to make trade-offs between epistemic soundness and democratic participation. Holst and Molander quote Stephen Turner (2003, 5) on the supposed dilemma that “something has to give: either the idea of government by generally intelligible discussion, or the idea that there is genuine knowledge that is known to few, but not generally intelligible” (236). To my mind, this formulation rests on an ideal picture of public deliberation that is not only excessively demanding, but also normatively problematic.

It is a mistake to assume that political deliberation cannot include “esoteric” expert knowledge if it is to be inclusive and open to everyone. If democracy is rule by public discussion, then every citizen should have an equal chance to contribute to political deliberation and will-formation, but this is not to say that all aspects of every contribution should be comprehensible to everyone. Integration of expert opinions based on knowledge fully accessible only to a few does not clash with democratic ideals of equal respect and inclusion of all voices.

Because of specialization and differentiation, all experts are laypersons with respect to many areas where others are experts. Disregarding individual variation of minor importance, we are all equals in ignorance, lacking sufficient knowledge and training to assess the relevant evidence in most fields.[2] Besides, and more fundamentally, deferring to expert advice in a political context does not imply some form of political status hierarchy between persons.

To acknowledge expert judgments as authoritative in an epistemic sense is simply to acknowledge that there is evidence supporting certain views, and that this evidence is accessible to everyone who has time and skill to investigate the matter. For this reason, it is unclear how the observation that political expert arrangements do not always harmonize with democratic ideals warrants talk of a need for trade-offs or a balancing of diverging concerns. In principle, there seems to be no reason why there has to be divergence between epistemic and democratic concerns.

To put the point even more sharply, I would like to suggest that allowing alleged democratic concerns to trump sound expert advice is democratic in name only. With Jacob Weinrib (2016, 57-65), I consider democratic lawmaking essential to a just legal system because all non-democratic forms of legislation are defective arrangements that arbitrarily exclude someone from contributing to the enactment of the laws that regulate their interaction with others. Yet an inclusive legislative procedure that disregards the best available reasons is hardly a case of democratic self-legislation.

It is more like raving blind drunk. Legislators that ignore state-of-the-art knowledge are not only deeply irrational, but also disrespectful of those bound by the laws that they enact. Need I mention the climate crisis? Understanding democracy as a process of discursive rationalization (Habermas 1996), the question is not what trade-offs we have to make, but how inclusive legislative procedures can be made sufficiently truth sensitive (Christiano 2012). We can only approximate a defensible democratic order by making democratic and epistemic concerns pull in the same direction.

Moral vs Scientific and Technical Expertise

Before introducing the accountability problem, Holst and Molander consider two ideal objections against giving experts an important political role: ‘(1) that one cannot know decisively who the knowers or experts are’ and ‘(2) that all political decisions have moral dimensions and that there is no moral expertise’ (237). They reject both objections. With respect to (1), they convincingly argue that there are indirect ways of identifying experts without oneself being an expert. With respect to (2), they pursue two strategies.

First, they argue that even if facts and values are intertwined in policy-making, descriptive and normative aspects of an issue are still distinguishable. Second, they argue that unless strong moral non-cognitivism is correct, it is possible to speak of moral expertise in the form of ‘competence to state and clarify moral questions and to provide justified answers’ (241). To my mind, the first of these two strategies is promising, whereas the second seems to play down important differences between distinct forms of expertise.

There are of course various types of democratic expert arrangements. Sometimes experts are embedded in public bodies making collectively binding decisions. On other occasions, experts serve an advisory function. Holst and Molander tend to use “expertise” and “expert” as unspecified, generic terms, and they refer to both categories side by side (235, 237). However, by framing their argument as an argument concerning epistemic asymmetry and the novice/expert-problem, they indicate that they have in mind moral experts in advisory capacities, in possession of insights known to few yet of importance for political decision-making.

I agree that some people are better informed about moral theory and more skilled in moral argumentation than others are, but such expertise still seems different in kind from technical expertise or expertise within empirical sciences. Although moral experts, like other experts, provide action-guiding advice, their public role is not analogous to the public role of technical or scientific experts.

For the public, the value of scientific and technical expertise lies in information about empirical constraints and the (lack of) effectiveness of alternative solutions to problems. If someone is an expert in good standing within a certain field, then it is reasonable to regard her claims related to this field as authoritative, and to consider them when making political decisions. As argued in the previous section, it would be disrespectful and contrary to basic democratic norms to ignore or bracket such claims, even if one does not fully grasp the evidence and reasoning supporting them.

Things look quite different when it comes to moral expertise. While there can be good reasons for paying attention to what specialists in moral theory and practical reasoning have to say, we rarely, if ever, accept their claims about justified norms, values and ends as authoritative or valid without considering the reasoning supporting the claims, and rightly so. Unlike Holst and Molander, I do not think we should accept the arguments of moral experts as defined here simply based on indirect evidence that they are trustworthy (cf. 241).

For one thing, the value of moral expertise seems to lie in the practical reasoning itself just as much as in the moral ideals underpinned by reasons. An important part of what the moral expert has to offer is thoroughly worked out arguments worth considering before making a decision on an issue. However, an argument is not something we can take at face value, because an argument is of value to us only insofar as we think it through ourselves. Moreover, the appeal to moral cognitivism is of limited value for elevating someone to the status of moral expert. Even if we might reach agreement on basic principles to govern society, there will still be reasonable disagreement as to how we should translate the principles into general rules and how we should apply the rules to particular cases.

Accordingly, we should not expect acceptance of the conclusions of moral experts in the same way we should expect acceptance of the conclusions of scientific and technical expertise. To the contrary, we should scrutinize such conclusions critically and try to make up our own mind. This is, after all, more in line with the enlightenment motto at the core of modern democracy, understood as government by discussion: “Have courage to make use of your own understanding!” (Kant 1996 [1784], 17).

Contact details: kjartan.mikalsen@ntnu.no

References

Christiano, Thomas. “Rational Deliberation among Experts and Citizens.” In Deliberative Systems: Deliberative Democracy at the Large Scale, ed. John Parkinson and Jane Mansbridge. Cambridge: Cambridge University Press, 2012.

Habermas, Jürgen. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Translated by William Rehg. Cambridge, MA: MIT Press, 1996.

Holst, Cathrine, and Anders Molander. “Public Deliberation and the Fact of Expertise: Making Experts Accountable.” Social Epistemology 31, no. 3 (2017): 235-250.

Kant, Immanuel. Practical Philosophy, ed. Mary Gregor. Cambridge: Cambridge University Press, 1996.

Kant, Immanuel. Anthropology, History, and Education, ed. Günther Zöller and Robert B. Louden. Cambridge: Cambridge University Press, 2007.

Rawls, John. Political Liberalism. New York: Columbia University Press, 1993.

Turner, Stephen. Liberal Democracy 3.0: Civil Society in an Age of Experts. London: Sage Publications Ltd, 2003.

Weinrib, Jacob. Dimensions of Dignity. Cambridge: Cambridge University Press, 2016.

[1] All bracketed numbers without reference to author in the main text refer to Holst and Molander (2017).

[2] This also seems to be Kant’s point when he writes that human predispositions for the use of reason “develop completely only in the species, but not in the individual” (2007 [1784], 109).

Vice Ontology, Quassim Cassam


Author Information: Quassim Cassam, University of Warwick, UK, q.cassam@warwick.ac.uk

Cassam, Quassim. “Vice Ontology.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 20-27.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3QE

Please refer to: Kidd, Ian James. “Capital Epistemic Vices.” Social Epistemology Review and Reply Collective 6 (2017): 11-17.

Image by Francois Meehan via Flickr / Creative Commons


One of the frustrations of trying to make headway with the rapidly expanding literature on epistemic vices is the absence of an agreed list of such vices. Vice epistemologists are more than happy to say what makes a character trait, attitude, or way of thinking epistemically vicious, and most provide examples of epistemic vices or lists of the kinds of thing they have in mind. But these lists tend to be a hotchpotch. Different philosophers provide different lists and while there is some overlap there are also significant variations. Closed-mindedness is a popular favourite but some vices that appear on some lists fail to appear on others. Here, for example, is Linda Zagzebski’s list:

intellectual pride, negligence, idleness, cowardice, conformity, carelessness, rigidity, prejudice, wishful thinking, closed-mindedness, insensitivity to detail, obtuseness, and lack of thoroughness (1996: 152).

Confronted by a list like this, several questions suggest themselves: why do these items make it onto the list and not others? Why not dogmatism or gullibility? Is idleness really an epistemic vice or a vice in a more general sense? Are all the items on the list equally important or are some more important than others? What is the relationship between the listed vices? It isn’t necessarily a criticism of vice epistemologists that they rarely tackle such questions. They are mainly concerned to develop a theoretical account of the notion of an epistemic vice, and individual vices are more often than not mentioned only for illustrative purposes.

An Order for Vice

But as vice epistemologists get down to listing epistemic vices they need to make it clear on what basis included items have been included and excluded items have been excluded. If some epistemic vices are deemed to be subservient to others it needs to be explained why. As Ian James Kidd notes in his valuable contribution, an important but neglected issue for vice epistemology is taxonomy, and this means having a story to tell about the basis on which epistemic vices can reasonably be grouped and ordered.[1]

Kidd rises to this challenge by drawing on the historically influential notion of a capital vice.[2] Capital vices are ‘source vices’ that give rise to other vices. Kidd asks whether there are capital epistemic vices and gives closed-mindedness as a possible example. According to Heather Battaly, whose view Kidd discusses, closed-mindedness is an unwillingness or inability to engage seriously with relevant intellectual options.[3] One way to be closed-minded is to be dogmatic but Battaly suggests that closed-mindedness is the broader notion: one is dogmatic if one is closed-minded with respect to beliefs one already holds but one can be closed-minded without being dogmatic.

For Battaly, closed-mindedness does not require one already to have made up one’s mind since one can also be closed-minded in how one arrives at one’s beliefs. The upshot is that closed-mindedness is the ‘source of dogmatism’ (Kidd 2017: 14). This doesn’t settle the question whether closed-mindedness is a capital epistemic vice if genuine capital vices have more than one sub-vice.

Still, Kidd reads Battaly’s view of the link between closed-mindedness and dogmatism as providing at least some support for viewing the former as a capital epistemic vice. Furthermore, it looks as though the capitality relation is in this case a conceptual relation. It might be a psychological fact that being closed-minded tends to make a person dogmatic but the postulated connection between closed-mindedness and dogmatism looks conceptual: it is built into the concepts of closed-mindedness and dogmatism that being dogmatic is a way of being closed-minded.

To what are analyses of concepts of specific epistemic vices answerable? One might think: to the nature of those vices themselves but then it needs to be explained how talk of the ‘nature’ of epistemic vices is to be understood. In what sense do such vices have a ‘nature’ that analyses of them capture or fail to capture?

Going Back to Locke

This way of formulating the methodological question should resonate with readers of Locke, not least because it represents the question as turning on the ontology of vice. In Locke’s ontology there is a fundamental distinction between substances and modes. Substances, for Locke, are the ultimate subjects of predication and exist independently of us. Gold and horses are Lockean substances, and our complex ideas of substances aren’t just combinations of simple ideas or observable properties.

They are ideas of ‘distinct particular things subsisting by themselves’ with their own underlying nature that explains why they have the observable properties they have (II.xii.6).[4] Since our ideas of substances are ‘intended to be Representations of Substances, as they really are’ they are answerable to the nature of substances as they really are and aren’t guaranteed to be adequate, that is, to do justice to the actual nature of what they are intended to represent (II.xxx.5).

In contrast, our ideas of modes are ideas of qualities or attributes that can only exist as the qualities or attributes of a substance. Modes are dependent existences. Simple modes are combinations of the same simple idea whereas mixed modes combine ideas of several different kinds.[5] So, for example, theft is a mixed mode since the idea of theft is the idea of the concealed change of possession of something without the consent of the proprietor. Locke’s key claim about ideas of modes is that they are ‘voluntary Collections of simple Ideas, which the Mind puts together, without any reference to any real Archetypes’ (II.xxxi.3). It follows that these ideas can’t fail to be adequate since, as Michael Ayers puts it on Locke’s behalf, we form these ideas ‘without the need to refer to reality’ (1991: 57).[6] Take the idea of courage, which Locke regards as a mixed mode:

He that at first put together the Idea of Danger perceived, absence of disorder from Fear, sedate consideration of what was justly to be done, and executing it without that disturbance, or being deterred by the danger of it, had certainly in his Mind that complex Idea made up of that Combination: and intending it to be nothing else, but what it is; nor to have any other simple Ideas, but what it hath, it could not also be but an adequate idea: and laying this up in his Memory, with the name Courage annexed to it, to signifie it to others, and denominate from thence any Action he should observe to agree with it, had thereby a Standard to measure and denominate Actions by, as they agreed to it (II.xxxi.3).

When it comes to our ideas of substances it is reality that sets the standard for our ideas. With mixed modes, it is our ideas that set the standard for reality, so that an action is courageous just if it has the features that our idea of courage brings together. Locke doesn’t deny that ideas of mixed modes can be formed by experience and observation. For example, seeing two men wrestle can give one the idea of wrestling. For the most part, however, ideas of modes are the products of invention, of the ‘voluntary putting together of several simple Ideas in our own minds’ (II.xxii.9), without prior observation.

An interesting consequence of what might be described as Locke’s conceptualism about modes is that there is in a sense no external standard by reference to which disputes about what is and is not part of the idea of mixed modes can be settled.[7] Again Locke uses the example of courage to make his point.

Suppose that one person X’s idea of a courageous act includes the idea of ‘sedate consideration’ of ‘what is fittest to be done’ (II.xxx.4). This is the idea of ‘an Action which may exist’ (ibid.) but another person Y has a different idea according to which a courageous action is one that is performed ‘without using one’s Reason or Industry’ (ibid.). Such actions are also possible, and Y’s idea is as ‘real’ as X’s. An action that displays courage by X’s lights might fail to do so by Y’s lights and vice versa but it seems that the only respect in which Y’s idea might count as ‘wrong, imperfect, or inadequate’ (II.xxxi.5) is if Y intends his idea of courage to be the same as X’s. Apart from that, both ideas are equally legitimate and can both be used in the classification of actions.

In fact, this isn’t quite Locke’s view since it omits one important qualification. At one point he argues that:

Mixed Modes and Relations, having no other reality, but what they have in the Minds of Men, there is nothing more required to those kinds of Ideas to make them real, but that they be so framed, that there be the possibility of existing conformable to them. These Ideas being themselves Archetypes, cannot differ from their Archetypes, and so cannot be chimerical, unless any one of them will jumble together in them inconsistent Ideas (II.xxx.4).

On reflection, however, consistency isn’t enough for our complex ideas of mixed modes to be ‘real’. For these ideas not to be ‘fantastical’ they must also ‘have a Conformity to the ordinary signification of the Name’ (II.xxx.4). So it would count against Y’s (or X’s) conception of courage that it doesn’t accord with the ordinary meaning or common usage of words like ‘courage’ or ‘courageous’.

Return to the Present

What is the relevance of Locke’s discussion for the issues that Kidd is concerned with? A natural thought is that epistemic vices like closed-mindedness and dogmatism are, like the idea of courage, mixed modes. As noted previously, there is room for debate about how these epistemic vices are to be understood and how they are related. Starting with dogmatism, here is one account by Roberts and Wood:

A doctrine is a belief about the general character of the world, or some generally important aspect of the world, which bears the weight of many other beliefs. Thus a mother who refuses, in the face of what should be compelling evidence, to give up her belief that her son is innocent of a certain crime, is perhaps stubborn, obstinate, or blinded by her attachment, but she is not on that account dogmatic. By contrast, someone who holds irrationally to some fundamental doctrine, such as the tenets of Marxism or capitalism or Christianity, or some broad historical thesis such as that the Holocaust did not occur, is dogmatic (2007: 194-5).

Battaly sees things slightly differently. On her view, it is possible for a person to be dogmatic even in relation to relatively trivial beliefs or beliefs that aren’t representative of ideologies or doctrines. One can be dogmatic about whether one’s pet is well-behaved or whether one’s son is innocent of a crime. Roberts and Wood’s conception of dogmatism is narrow whereas Battaly’s conception is broad. Who is right?

If being ‘right’ is a matter of conceiving of dogmatism in a way that does justice to its real or true nature then the Lockean conceptualist says that there is no such thing. As a mixed mode, dogmatism is a voluntary collection of simple ideas. Roberts and Wood are free to stipulate that dogmatism has to do with doctrine and Battaly is free to reject this stipulation. Relative to Roberts and Wood’s complex idea of dogmatism the belief that one’s pet is well-behaved is too trivial to be dogmatic. Relative to Battaly’s idea of dogmatism the belief that one’s son is innocent of a certain crime might be dogmatic.

However, the disagreement between the broad and narrow accounts of dogmatism is, on a Lockean reading, a not very deep disagreement between two policies about the use of the term ‘dogmatic’. The most one can say is that the narrow account is closer to ordinary usage, and this might be a case for preferring that account. Beyond that, it’s not clear what is really at issue.

Turning to the relationship between dogmatism and closed-mindedness, Kidd bases his proposal that closed-mindedness is a capital vice of which dogmatism is an offspring on the idea that dogmatism is a sub-class of closed-mindedness: one is dogmatic if one is closed-minded with respect to beliefs one already holds but closed-mindedness doesn’t require one already to have made up one’s mind. Suppose, to borrow Battaly’s example, that P is the proposition that there was no Native American genocide. Even if a person starts out with no prior belief about the truth or falsity of P, their inquiry into its truth or falsity can still be closed-minded. They might, for example, systematically ignore evidence against P and look only for evidence that P.

But if this is how the inquirer behaves then a natural question would be: why is their inquiry into the truth or falsity of P closed-minded in just this way? And the answer that suggests itself is that they are closed-minded in just this way because they already really believe that P. So we do not have here a compelling case of closed-mindedness without the subject already having made up their mind about the topic at hand. The belief that P is implicit in their epistemic conduct and this means that their dogmatism can’t be distinguished from closed-mindedness in quite the way that Kidd recommends. Ordinarily, dogmatism and closed-mindedness aren’t clearly distinguished and there is bound to be an element of stipulation in any proposed way of carving up the territory.

Be Natural – Is There Anything Else?

This is not necessarily an objection to the notion of a capital vice. It is permissible for a vice epistemologist to try to bring some order to the chaos of ordinary thinking and represent one vice as an offshoot of another. It is important to recognize, however, that such proposed regimentations are just that: an attempt to introduce a degree of systematicity into a domain that lacks it. It’s helpful to compare the classification of epistemic vices with the classification of so-called ‘natural modes’. A criticism of Locke’s theory of mixed modes is that it ignores natural modes.[8] Examples of non-natural modes are the ideas of a lie, democracy and property. Lies are lies regardless of their underlying causes.[9]

In contrast, although diseases are modes, ‘the name of a disease will normally be introduced, and then be generally applied, on the basis of repeated experience of a set of symptoms, and on the assumption that on each occurrence they have the same common cause, whether a microbe or an underlying physiological condition’ (Ayers 1991: 91). However, there is a still a sense in which the individuality and boundary conditions of diseases are imposed by us. So, for example, diseases can be classified by bodily region, by organ, by effect, by the nature of the disease process, by aetiology, or on several other bases.[10] There is nothing that compels us to adopt one of these systems of classification rather than another and there is no absolute sense in which one particular system of classification is the ‘right’ one. With diseases and other such modes there is still the relativity to human interests and concerns that marks them out as modes rather than substances.

To make things even more complicated there are some modes that fall somewhere in between the natural and the non-natural. For example, one might take the view that perception and memory are such ‘intermediate’ modes. Perception is mechanism-dependent in the sense that it isn’t really perception unless some underlying physiological mechanism is involved. Plainly, however, no specific mechanism need be involved in all cases of perception. Human perception and dolphin perception both involve and require the operation of physiological mechanisms but the precise mechanisms will no doubt be very different in the two cases. The necessity of some mechanism is a respect in which intermediate modes are ‘natural’. The fact that no particular mechanism is required is a respect in which intermediate modes are akin to non-natural modes.[11]

In these terms, are epistemic vices natural, non-natural or intermediate modes? The discussion so far, with its emphasis on choice and stipulation in the classification of epistemic vices, might be thought to imply that such vices are non-natural but there is room for debate about this. Just as all manifestations of a particular disease are assumed to have a common cause at the level of physiology so it might be argued that the identification and attribution of epistemic vices is based on the assumption of a common psychological cause or mechanism. Epistemic vices are in this respect, and perhaps others too, like diseases.

Closed-mindedness is a case in point. There is the view that being closed-minded isn’t just a matter of being unwilling or unable to engage seriously with relevant intellectual options. A closed-minded person also has to have what Kruglanski calls a high need for ‘closure’, that is, a low tolerance for confusion and ambiguity.[12] It might be argued that this is the distinctive psychological component of closed-mindedness that causally explains the various cognitive dispositions with which the trait is closely associated. In this case the psychological component is a motive. Would this justify the classification of closed-mindedness as a natural mode, an epistemic vice whose attribution in different cases is based on the assumption of a common motivational core that functions as a common psychological cause?

If so, then dogmatism is different from closed-mindedness in precisely this respect. What motivates a dogmatic commitment to a political doctrine might be a psychological need for closure but other motives are also possible. For example, a person’s dogmatism about a particular political doctrine might be a reflection of the ways in which a commitment to it is part of their identity, their sense of who they are.

Whether or not this is the right account of dogmatism, it is doubtful that the motivational account applies to epistemic vices generally. There are epistemic vices like stupidity, understood as foolishness rather than lack of intelligence, which lack an obvious motivational component. People aren’t motivated to be stupid in the way that they are supposedly motivated to be closed-minded. And even in the latter case one might wonder whether the desire for closure is strictly necessary or, even if it is, whether it is an independently identifiable component of closed-mindedness. One might count as having a high need for closure because one is closed-minded. Here, the attribution of the motive follows rather than underpins the attribution of the trait.

What Is a Vice of Knowledge?

So one should be careful about representing epistemic vices as natural modes. There is still the option of representing them as intermediate modes but it’s not clear whether epistemic vices are mechanism-dependent in anything like the way that perception is mechanism-dependent. This issue merits further discussion. In the meantime, the one thing that seems reasonably clear is that epistemic vices are epistemically harmful and blameworthy or otherwise reprehensible.[13] The sense in which they are epistemically harmful is that they systematically obstruct the gaining, keeping or sharing of knowledge. However, there is considerable room for maneuver when it comes to defining the individual character traits, attitudes or ways of thinking that are epistemically harmful.

Where does this leave the notion of a capital vice and the project of identifying some epistemic vices as capital vices and others as offspring vices? To the extent that ordinary ways of talking about vices like closed-mindedness and dogmatism are imprecise, there is a lot to be said for the project of establishing clear lines of demarcation and relations of priority between different epistemic vices.

However, any such project needs to be informed by a proper conception of what epistemic vices are, ontologically speaking, and a well-founded view as to whether the project consists in the discovery of real distinctions that are there anyway or rather in the imposition of boundaries that only exist in virtue of our recognition of them. To think of epistemic vices as modes is to be committed to an ‘impositionist’ reading of the capital vices project. The point at which this project starts to look suspect is the point at which it is conceived of as fundamentally a project of discovery.[14] The discovery in this domain is that there is, in a certain sense, nothing to discover.[15]

Contact details: q.cassam@warwick.ac.uk

References

Ayers, M. R. Locke, Volume 2: Ontology. London: Routledge, 1991.

Battaly, H. “Closed-Mindedness and Intellectual Vice,” Keynote Address delivered at the Harms and Wrongs in Epistemic Practice conference, University of Sheffield, 4 July 2017.

Cassam, Q. “Parfit on Persons.” Proceedings of the Aristotelian Society 93 (1993): 17-37.

Cassam, Q. “Vice Epistemology.” The Monist 99 (2016): 159-80.

Kidd, I. “Capital Epistemic Vices.” Social Epistemology Review and Reply Collective 6 (2017): 11-17.

Kruglanski, A. W. The Psychology of Closed-Mindedness. New York: Psychology Press, 2004.

Locke, J. An Essay Concerning Human Understanding. Edited by P. H. Nidditch. Oxford: Oxford University Press, 1975.

Perry, D. L. “Locke on Mixed Modes, Relations, and Knowledge.” Journal of the History of Philosophy 5 (1967): 219-35.

Robbins, S. L, Robbins, J. H. & Scarpelli, D. G. “Classification of Diseases.” Retrieved from https://www.britannica.com/science/human-disease/Classifications-of-diseases, 2017.

Roberts, R. C. & Wood, W. J. Intellectual Virtues: An Essay in Regulative Epistemology. Oxford: Oxford University Press, 2007.

Zagzebski, L. Virtues of the Mind: An Inquiry into the Nature of Virtue and the Ethical Foundations of Knowledge. Cambridge: Cambridge University Press, 1996.

[1] ‘Vice epistemology’, as I understand it, is the philosophical study of the nature, identity and significance of epistemic vices. See Cassam 2016. ‘Vice epistemologists’ are philosophers who work on, or in, vice epistemology. Notable vice epistemologists include Heather Battaly, Ian Kidd and Alessandra Tanesini.

[2] Kidd 2017.

[3] Battaly 2017.

[4] All references in this form are to a book, chapter and section of Locke 1975, which was originally published in 1689.

[5] Locke’s examples of mixed modes include beauty, theft, obligation, drunkenness, a lie, hypocrisy, sacrilege, murder, appeal, triumph, wrestling, fencing, boldness, habit, testiness, running, speaking, revenge, gratitude, polygamy, justice, liberality, and courage. This list is from Perry 1967.

[6] Locke illustrates the arbitrariness of mixed modes by noting that we have the complex idea of patricide but no special idea for the killing of a son or a sheep.

[7] There is more on ‘conceptualism’ in Cassam 1993.

[8] For a helpful discussion of this issue see Ayers 1991, chapter 8. My understanding of Locke is heavily indebted to Ayers’ commentary.

[9] See Ayers 1991: 97.

[10] For more on the classification of diseases see Robbins, Robbins and Scarpelli 2017.

[11] This paragraph is a summary of the discussion of intermediate modes in Ayers 1991: 96-7.

[12] Kruglanski 2004: 6-7.

[13] This is the essence of what I call ‘obstructivism’ about epistemic vice, the view that epistemic vices are blameworthy or otherwise reprehensible character traits, attitudes or ways of thinking that systematically obstruct the gaining, keeping or sharing of knowledge. For obstructivism, epistemic vices aren’t delineated by their motives.

[14] I’m not suggesting that this is how Kidd conceives of the project. His approach is more in keeping with impositionism.

[15] Thanks to Heather Battaly and Ian James Kidd for helpful comments.

Author Information: Tommaso Bertolotti, University of Pavia, Italy, bertolotti@unipv.it

Bertolotti, Tommaso. “Science-Like Gossip, or Gossip-Like Science?” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 15-19.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3D0

Please refer to: Klausen, Søren Harnow. “No Cause for Epistemic Alarm: Radically Collaborative Science, Knowledge and Authorship.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 38-61.

Image credit: Megan, via flickr

Richard Feynman is credited with having once quipped that “philosophy of science is as useful to scientists as ornithology is useful to birds.” Feynman, a Nobel prize laureate, was once called the smartest man on Earth. Hearing this, his mother said, “Then I don’t want to think about the others.”

Ornithologists normally won’t teach birds how to fly. But they can tell the rest of us how to protect birds and how to make our activities less harmful to them. They can teach us how to tell a bat from an owl, a butterfly from a hummingbird, and also how to determine whether what is soaring above us is a hawk or a drone. Ornithologists, teaming up with engineers and medical scientists, can directly affect birds by recreating broken beaks, patching up wings, or even teaching them to fly. None of this could be achieved without the knowledge carefully gathered by ornithologists.

Today, the analogy between epistemology and ornithology is more apt than ever. Science is under attack. Research funding is being cut almost everywhere in the North-Western Hemisphere. Where it is not heavily cut, it is arguably managed by leveraging the Matthew Effect[1] in ways that hardly encourage, for instance, the diversification of Science. Science needs philosophy of science. Philosophy, especially in its epistemological avatar, needs to protect the formidable adversary it was once challenged by. A stern claim that radically collaborative science shows no cause for epistemic alarm seems a quick way of washing one’s hands, thus endorsing Feynman’s mockery.

Søren Klausen (2017) concedes that the claim that there is no cause for epistemic alarm does not imply that there is no cause for alarm at all. There can be pragmatic, social, and political causes for alarm (although the author seems to take them quite lightly). But what if the pragmatic, social, and political causes for alarm are real, and this in turn has alarming epistemic effects?

Alarmed By Radically Collaborative Science?

Let’s consider gun control: we can say that it’s pointless to engage in moral discourse about weapons because weapons have no volition and, ultimately, it’s people as moral subjects who kill through them. Would it be reasonable to hold that guns give no cause for moral alarm on such grounds? In spite of their being morally neutral, if certain things cause morally alarming outcomes, then they give reason to be morally alarmed.

Klausen, against the so-called Georgetown Alarmists, maintains that there is no reason to be alarmed about radically collaborative science. The Georgetown Alarmists, conversely, present an epistemically alarmed view and issue an epistemologically negative judgement on radically collaborative science. In my view, being alarmed does not entail issuing a negative judgement. If I hear a huge ruckus coming from the ground floor at night, I have the right to be alarmed. It would be reasonable for me to go down and check. I am not thereby justified in going down and blindly firing a full round in the dark. I am epistemically alarmed about radically collaborative science, if only because of the pragmatic consequences that seem likely to affect the epistemic procedures of Science. Considering that there is reason to be epistemically alarmed amounts to saying “Hey, there is something strange going on there. Let’s pull it over and have a closer look.”

By this, I’m not saying that we must condemn radically collaborative science on epistemic grounds, but we might at the same time be a little more careful before green-lighting it.

Let me start with a minor yet compelling reason why the non-epistemic effects of radically collaborative science might not be neutral from an epistemological point of view and hence cause some kind of epistemic alarm: citation indexes. Citation indexes are the mixed blessing of contemporary academia. Academics care about their h-index, their i10-index, their Scopus and Google Scholar metrics. Like it or not, right or wrong, these indexes rule the academic job market throughout the world. Even the humanities have adopted them, producing indexes that are quite ridiculous compared to their scientific counterparts. How will radically collaborative science impact the indexing? How is this going to affect the job market? This is clearly not an epistemic trait, but since what is at stake are the next generations of scientists and academics, the next prime producers of episteme, this issue is somehow allowed to trigger some epistemic alert.
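Since so much hangs on these metrics, it is worth recalling how mechanical they are. As a minimal sketch (my own illustration, not part of Bertolotti’s text): an author’s h-index is simply the largest h such that h of her papers have at least h citations each.

```python
# Minimal sketch of the standard h-index computation (illustrative only).
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Nothing in this computation distinguishes a sole-authored paper from one with five hundred collaborators, which is precisely why radically collaborative authorship puts pressure on such metrics.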

Let’s now consider some properly epistemic issues: one of the Georgetown Alarmists’ chief reasons for epistemic alarm is that scientific claims are not fully accountable in the framework of radical collaboration. In their view, an epistemically relevant feature of scientific knowledge is that it is accountable. Such accountability has traditionally been enforced in scientific practice, for instance in publications: if Black & White make a claim in a paper and that claim is proved right or wrong, they are held accountable for it. And if they are wrong because of some results they have taken from Green and Brown’s paper, Green and Brown will be held epistemically accountable. Such epistemic accountability has tacitly regulated the publishing heuristic according to which it is better to publish fewer accurate papers than many shakier ones.[2]

Science has a kind of double standard concerning failures and drawbacks: in the finest Popperian spirit, falsifications are sought after, every truth is provisional, and errors illuminate the correct path. Science thrives through its mistakes, but scientists don’t. If you are the leader of an important research program and you got it wrong, you will be neither charged nor prosecuted, but chances are you won’t get hired again. As said in the Gospel of Matthew (18:7), “[…] it is inevitable that stumbling blocks come; but woe to that man through whom the stumbling block comes”.

A radically collaborative paper may have dozens of authors, and some extreme cases have hundreds, including honorary authorship attributions. The authors might even include the programmers who developed a certain data mining or machine learning software, and such software might be credited with a certain authorship, too. What kind of accountability is left? Claims in the paper are shared between dozens or hundreds of subjects. It no longer makes sense to look for the accountable origin of this or that claim. So much for authorship. What about the accountability relating to references to similarly collaborative outputs?

Epistemic Accountability and Gossip

Klausen interestingly argues that sticking to epistemic accountability as we’ve always known it is a somewhat old-fashioned feat: however desirable it might be, and however much it may make things easier, it is not an epistemic must for the constitution of scientific knowledge. I say, “fine”. I don’t have any argument in favor of strong accountability. Still, if I accept with Klausen that accountability is a preferable feature that I can actually do without, and not a must, something makes me wonder: how can I tell science from gossip? Let me explain why.

The 1990s marked a turning point in gossip studies, especially as far as a new ethical evaluation of gossip was concerned. The edited book Good Gossip is one of the best examples of this new view. A feminist Peircean scholar, Maryann Ayim, contributed to this book with the essay “Knowledge through the Grapevine: Gossip as Inquiry”, in which she pointed out the epistemological resemblance between scientific inquiry and gossipy investigations in a social group.

Gossip’s model captures several aspects of Peirce’s notion of a community of investigators. Describing what he sees as the causes of “the triumph of modern science,” Peirce speaks specifically of the scientists’

unreserved discussion with one another … each being fully informed about the work of his neighbour, and availing himself of that neighbour’s results; and thus in storming the stronghold of truth one mounts upon the shoulders of another who has to ordinary apprehension failed, but has in truth succeeded by virtue of the lessons of his failure. This is the veritable essence of science (CP7.51).

If Peirce is right that the unreserved discussions with one another are a cornerstone in the triumph of modern science, then gossip, by its very nature, would appear to be an ideal vehicle for the acquisition of knowledge. Gossips certainly avail themselves of their neighbours’ results, discussing unreservedly and sharing results constitute the very essence of gossip.[3]

Elaborating on the epistemology of gossip together with Lorenzo Magnani,[4] we unfolded Ayim’s seminal insight, showing how the inquiries of gossip can be modeled as abductions: abduction is the prime inferential structure for describing hypothetical reasoning, and hence science (Peirce’s canonical schema is recalled after the following quotation). One cannot work on gossip without working on rumor, and we took from social epistemologist David Coady this intriguing distinction:

This is one difference between rumor and another form of communication with which it is often confused, gossip. Gossip may well be first-hand. By contrast, no first-hand account of an event can be a rumor, though it may later become one.[5]
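As flagged above, for readers unfamiliar with the term, Peirce’s canonical schema for abduction (CP 5.189), the inference pattern on which this modeling of gossip relies, runs: “The surprising fact, C, is observed; But if A were true, C would be a matter of course, Hence, there is reason to suspect that A is true.”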

It can be said that what Coady is referring to is the accountability of gossip. Gossip is, in principle, accountable because gossip traces back to some eye-witnessed event and its inferential elaboration by a specific group of people with their specific background, and so on (Bertolotti & Magnani, 2014). Why then do we have a problem with gossip, usually relating to the fact that gossip cannot be trusted because no one is accountable for it? Because there are two dynamics at play in gossip: the actual dynamic between peers, and the “projected”, subject-less dynamic at the group level. The projected subject is instantiated by sentences such as “Oh come on, everybody knows that Joe’s been cheating on his wife for ages!”. When we say things such as “everybody knows”, referring to our social group, we mean that those small idle exchanges that are the bricks of gossip leveled up and became true rumors for the whole group. At group level, a true rumor is a fact. Gossip that makes it all the way up to common knowledge entertains an ambiguous relation with accountability, akin to the one described by Klausen concerning radically collaborative science. The collaborators’ contribution is sublimated into the radically collaborative accountability, which is indeed different from the accountability of traditional, less collaborative science.

By these considerations, I am not smuggling in a bad company fallacy, suggesting that if radically collaborative science is like gossip then radically collaborative science is bad. I don’t have a strong opinion on radically collaborative science; I am careful. I am neither as hostile as the Georgetown Alarmists, nor as permissive as Klausen. I think that scientists might not have any choice but to go with the flow, but philosophers of science might accept the epistemic alarm, unravel it, and then decide whether such alarm was justified or not. There is no need to endorse it too quickly.

If we think of gossip in the evolution of human cognition, gossip is one of the most ancient forms of social cognition and social communication, to the point that some argue that language as an adaptation was selected for its capacity to afford gossip.[6]

Gossip is an extremely sophisticated tool for collective inferences, and as such it might have permitted the emergence of hypotheses that take multiple causality into account.[7] At the same time, human beings needed to go beyond gossip and find more rigorous methods to produce and secure knowledge, keeping gossip’s inferential power while improving its accuracy and predictive power. From a speculative standpoint, it can be argued that both justice and science developed by and for making people accountable for their claims. Sure, accountability has had its shortcomings (people have been forced to pay with their freedom or their life for some erroneous claim), but the other face of this coin is recognition. Recognition is not an epistemic value, but it is an epistemic drive.

We struggled our way out of gossip by making hypotheses and predictions accountable, and we got to science. Science shaped its own way of producing knowledge, which basically amounts to the world as we know it. Now, if we feel compelled to sacrifice traditional accountability at the altar of our challenges and our current means, it may be the epistemically right thing to do, but there is a need to be epistemically alert, to think it through, and so help us God.

References

Ayim, Maryann. “Knowledge Through the Grapevine: Gossip as Inquiry.” In Good Gossip edited by Robert F. Goodman & Aaron Ben-Ze’ev, 85–99. Lawrence, KS: University Press of Kansas, 1994.

Bertolotti, Tommaso and Magnani, Lorenzo. “An Epistemological Analysis of Gossip and Gossip-Based Knowledge.” Synthese 191, no. 17 (2014): 4037–4067.

Bertolotti, Tommaso and Lorenzo Magnani. “Gossip as a Model of Inference to Composite Hypotheses.” Pragmatics & Cognition, 22 (2016): 309–324.

Coady, David. What to Believe Now: Applying Epistemology to Contemporary Issues. New York: Blackwell, 2012.

Dunbar, Robin. “Gossip in an Evolutionary Perspective.” Review of General Psychology, 8 (2004): 100–110.

Klausen, Søren Harnow. “No Cause for Epistemic Alarm: Radically Collaborative Science, Knowledge and Authorship.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 38-61.

Merton, Robert K. “The Matthew Effect in Science.” Science 159 (1968): 56-63.

Peirce, Charles Sanders. Collected Papers of Charles Sanders Peirce, Vol 7. Edited by Arthur W. Burks, CP7.51. Cambridge, MA: Harvard University Press, 1958.

[1] Robert K. Merton “The Matthew Effect in Science” (1968). The expression was coined by sociologist Robert Merton and it refers to a snowball-like, cumulative advantage of the good at stake. It is inspired by the ominous sentence in the Parable of the Talents in Matthew’s Gospel 25:29: “For to everyone who has will more be given, and he will have an abundance. But from the one who has not, even what he has will be taken away.” It is used to indicate how rich people get richer, famous people get more famous, and highly cited papers get even more citations.
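To make the snowball image concrete, here is a minimal sketch (my own illustration, with made-up parameters) of the Matthew Effect as preferential attachment: each new citation goes to a paper with probability proportional to the citations it already has, so small early leads compound.

```python
# Toy preferential-attachment model of the Matthew Effect (illustrative only).
import random

def simulate_matthew_effect(n_papers=10, n_citations=1000, seed=42):
    random.seed(seed)
    citations = [0] * n_papers
    for _ in range(n_citations):
        # Each paper's chance of being cited grows with its citation count;
        # the "+1" merely lets uncited papers be discovered at all.
        weights = [c + 1 for c in citations]
        winner = random.choices(range(n_papers), weights=weights)[0]
        citations[winner] += 1
    return sorted(citations, reverse=True)

print(simulate_matthew_effect())
```

The resulting distribution is typically very unequal even though every paper starts out identical: early random luck, not quality, decides who gets rich.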

[2] In certain disciplinary fields, characterized by lesser degrees of collaboration such as the humanities, the accountability principle is sometimes led to paroxysm, for instance when “the contribution of each author to the paper is to be clearly described.”

[3] Ayim, “Knowledge Through the Grapevine: Gossip as Inquiry,” 87.

[4] Bertolotti & Magnani, “An Epistemological Analysis of Gossip and Gossip-Based Knowledge.”

[5] Coady, “What to Believe Now,” 87.

[6] Dunbar, “Gossip in an Evolutionary Perspective.”

[7] Bertolotti and Magnani, “Gossip as a Model of Inference to Composite Hypotheses.”