
Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “On Political Culpability: The Unconscious?” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 26-29.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-45p

Image by Morning Calm Weekly Newspaper, U.S. Army via Flickr / Creative Commons

 

In the post-truth age where Trump’s presidency looms large because of its irresponsible conduct, domestically and abroad, it’s refreshing to have another helping in the epistemic buffet of well-meaning philosophical texts. What can academics do? How can they help, if at all?

Anna Elisabetta Galeotti, in her Political Self-Deception (2018), is convinced that her (analytic) philosophical approach to political self-deception (SD) is crucial for three reasons: first, for the conceptual clarity it brings to the topic; second, for the way it allows responsibility to be attributed to those engaged in SD; and third, for its ability to identify circumstances that are conducive to SD. (6-7)

For her, “SD is the distortion of reality against the available evidence and according to one’s wishes.” (1) The distortion, according to Galeotti, is motivated by wishful thinking, the kind that licenses someone to ignore facts or distort them in a fashion suitable to one’s (political) needs and interests. The question of “one’s wishes,” whether they are conscious or not, remains open.

What Is Deception?

Galeotti surveys the different views of deception that “range from the realist position, holding that deception, secrecy, and manipulation are intrinsic to politics, to the ‘dirty hands’ position, justifying certain political lies under well-defined circumstances, to the deontological stance denouncing political deception as a serious pathology of democratic systems.” (2)

But she follows none of these views; instead, her contribution to the philosophical and psychological debates over deception, lies, self-deception, and mistakes is to argue that “political deception might partly be induced unintentionally by SD” and that it is also sometimes “the by-product of government officials’ (honest) mistakes.” (2) The consequences of SD, though, can be monumental, since “the deception of the public goes hand in hand with faulty decision” (3), and such faulty decisions eventually affect the whole country.

Her three examples are President Kennedy and Cuba (Ch. 4), President Johnson and Vietnam (Ch. 5), and President Bush and Iraq (Ch. 6). In all cases, the devastating consequences of “political deception” (which, for Galeotti, is based on SD) were obviously due to “faulty” decision-making processes. Why else would presidents end up in untenable political binds? Who would deliberately make mistakes whose political and human price is high?

Why Self-Deception?

So, why SD? What is it about self-deception, especially the unintended kind presented here, that differentiates it from garden-variety deceptions and mistakes? Galeotti’s preference for SD is explained in this way: SD “enables the analyst to account for (a) why the decision was bad, given that it was grounded on self-deceptive, hence false beliefs; (b) why the beliefs were not just false but self-serving, as in the result of the motivated processing of data; and (c) why the people were deceived, as the by-product of the leaders’ SD.” (4)

But how would one know that a “bad” decision is “grounded on self-decepti[on]” rather than on false information given by intelligence agents, for example, who were misled by local informants who in turn were misinformed by others, deliberately or innocently? With this question in mind, a “false belief” can be based on false information, a false interpretation of true information, wishful thinking, an unconscious self-destructive streak, or SD.

In short, one’s SD can be either externally or internally induced, and in each case, there are multiple explanations that could be deployed. Why stick with SD? What is the attraction it holds for analytical purposes?

Different answers are given to these questions at different times. In one case, Galeotti suggests the following:

“Only self-deceptive beliefs are, however, false by definition, being counterevidential [sic], prompted by an emotional reaction to data that contradicts one’s desires. If this is the specific nature of SD . . . then self-deceptive beliefs are distinctly dangerous, for no false belief can ground a wise decision.” (5)

In this answer, Galeotti claims that an “emotional reaction” to “one’s desires” is what characterizes SD and makes it “dangerous.” It is unclear why this is a more dangerous ground for false beliefs than a deliberate deceptive scheme that is self-serving; likewise, how does one truly know one’s true desires? Perhaps the logician is at a loss to counter emotive reactions with cold deduction, or perhaps there is a presumption here that logical and empirical arguments are by definition open to critique while emotions are immune to such strategies, and that analytic philosophy is therefore superior to other methods of analysis.

Defending Your Own Beliefs

If the first argument for seeing SD as an emotional “reaction” that conflicts with “one’s desires” is a form of self-defense, the second argument is more focused on the threat of the evidence one wishes to ignore or subvert. In Galeotti’s words, SD is:

“the unintended outcome of intentional steps of the agent. . . according to my invisible hand model, SD is the emotionally loaded response of a subject confronting threatening evidence relative to some crucial wish that P. . . Unable to counteract the threat, the subject . . . become prey to cognitive biases. . . unintentionally com[ing] to believe that P which is false.” (79; 234ff)

To be clear, the “invisible hand” model invoked here is related to the infamous one associated with Adam Smith and his unregulated markets where order is maintained, fairness upheld, and freedom of choice guaranteed. Just like Smith, Galeotti appeals to individual agents, in her case the political leaders, as if SD happens to them, as if their conduct leads to “unintended outcome.”

But the whole point of SD is to ward off the threat of unwelcome evidence, so that some intention is always afoot. Since agents undertake “intentional steps,” is it unreasonable for them to anticipate the consequences of their conduct? Are they still unconscious of their “cognitive biases” and their management of their reactions?

Galeotti confronts this question head-on when she says: “This work is confined to analyzing the working of SD in crucial instances of governmental decision making and to drawing the normative implications related both to responsibility ascription and to devising prophylactic measures.” (14) So, the moral dimension, the question of responsibility, does come into play here, unlike the neoliberal argument that pretends to follow Smith’s model of the invisible hand but ends with no one being responsible for exogenous liabilities, such as those to the environment.

Moreover, Galeotti’s most intriguing claim is that her approach is intertwined with a strategic hope for “prophylactic measures” to ensure dangerous consequences are not repeated. She believes this could be achieved by paying close attention to “(a) the typical circumstances in which SD may take place; (b) the ability of external observers to identify other people’s SD, a strategy of precommitment [sic] can be devised. Precommitment is a precautionary strategy, aimed at creating constraints to prevent people from falling prey to SD.” (5)

But this strategy, as promising as it sounds, has a weakness: if people could be prevented from “falling prey to SD,” then SD is preventable, or at least seems to be less of an emotional threat than earlier suggested. In other words, either humans cannot help falling prey to SD or they can; if they cannot, then highlighting SD’s danger is important; if they can, then the ubiquity of SD is no threat at all, since simply pointing out someone’s SD would make them realize how to overcome it.

A Limited Hypothesis

Perhaps one clue to Galeotti’s own self-doubt (or perhaps it is a form of self-deception as well) is in the following statement: “my interpretation is a purely speculative hypothesis, as I will never be in the position to prove that SD was the case.” (82) If this is the case, why bother with SD at all? For Galeotti, the advantage of using SD as the “analytic tool” with which to view political conduct and policy decisions is twofold: it allows “proper attribution of responsibility to self-deceivers” and opens up “the possibility of preventive measures against SD.” (234)

In her concluding chapter, she offers a caveat, even a self-critique that undermines the very use of SD as an analytic tool (no self-doubt or self-deception here, after all): “Usually, the circumstances of political decision making, when momentous foreign policy choices are at issue, are blurred and confused both epistemically and motivationally. Sorting out simple miscalculations from genuine uncertainty, and dishonesty and duplicity from SD is often a difficult task, for, as I have shown when analyzing the cases, all these elements are present and entangled.” (240) So, SD is one of many relevant variables, but being both emotional and lodged in one’s subconscious, it remains opaque at best and unidentifiable at worst.

In case you are confused about SD and one’s ability to isolate it as an explanatory model with which to approach post-hoc bad political choices with grave consequences, this statement might help clarify the usefulness of SD: “if SD is to play its role as a fundamental explanation, as I contend, it cannot be conceived of as deceiving oneself, but it must be understood as an unintended outcome of mental steps elsewhere directed.” (240)

So, logically speaking, SD (self-deception) is not “deceiving oneself.” What is it, then? What are “mental steps elsewhere directed”? Of course, it is quite true, as Galeotti says, that “if lessons are to be learned from past failures, the question of SD must in any case be raised. . . Political SD is a collective product,” which is even more difficult to analyze (given its “opacity”); how, then, would responsibility be attributed? (244-5)

Perhaps what is missing from this careful analysis is a cold calculation of who is responsible for what and under what circumstances, regardless of SD or any other kind of subconscious desires. Would a psychoanalyst help usher in such an analysis?

Contact details: rsassowe@uccs.edu

References

Galeotti, Anna Elisabetta. Political Self-Deception. Cambridge: Cambridge University Press, 2018.

Author Information: Valerie Joly Chock & Jonathan Matheson, University of North Florida, n01051115@ospreys.unf.edu & j.matheson@unf.edu.

Matheson, Jonathan, and Valerie Joly Chock. “Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 1-9.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44H

Image by sekihan via Flickr / Creative Commons

 

Epistemic injustice occurs when someone is wronged in their capacity as a knower.[1] More and more attention is being paid to the epistemic injustices that exist in our scientific practices. In a recent paper, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. In what follows we briefly explain his argument before raising several challenges to it.

Overview

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. First, let’s get clear on the target. According to Medvecky, science communication is in the business of distributing knowledge – scientific knowledge.

As Medvecky uses the term, ‘science communication’ is an “umbrella term for the research into and the practice of increasing public understanding of and public engagement with science.” (1394) Science communication is thus both a field and a practice, and consists of:

institutionalized science communication; institutionalized in government policies on the public understanding of and public engagement with the sciences; in the growing numbers of academic journals and departments committed to further the enterprise through research and teaching; in requirements set by funding bodies; and in the growing numbers of associations clustering under the umbrella of science communication across the globe. (1395)

Science communication involves the distribution of scientific knowledge from experts to non-experts, so science communication is in the distribution game. As such, Medvecky claims that issues of fair and just distribution arise. According to Medvecky, these issues concern both what knowledge is dispersed, as well as who it is dispersed to.

In examining the fairness of science communication, Medvecky connects his discussion to the literature on epistemic injustice (Anderson, Fricker, Medina). While exploring epistemic injustices in science is not novel, Medvecky’s focus on science communication is. To argue that science communication is epistemically unjust, Medvecky relies on Medina’s (2011) claim that credibility excesses can result in epistemic injustice. Here is José Medina:

[b]y assigning a level of credibility that is not proportionate to the epistemic credentials shown by the speaker, the excessive attribution does a disservice to everybody involved: to the speaker by letting him get away with things; and to everybody else by leaving out of the interaction a crucial aspect of the process of knowledge acquisition: namely, opposing critical resistance and not giving credibility or epistemic authority that has not been earned. (18-19)

Since credibility is comparative, credibility excesses given to members of some group can create epistemic injustice, testimonial injustice in particular, toward members of other groups. Medvecky makes the connection to science communication as follows:

While there are many well-argued reasons for communicating, popularizing, and engaging with science, these are not necessarily reasons for communicating, popularizing, and engaging only with science. Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialized treatment.

This uniqueness creates a credibility excess for science as a field. And since science communication creates credibility excess by implying that concerted efforts to communicate non-science disciplines as fields of reliable knowledge is not needed, then science communication, as a practice and as a discipline, is epistemically unjust. (1400)

While the principal target here is the field of science communication, any credibility excess enjoyed by the field will trickle down to the practitioners within it. If science is being given a credibility excess, then those engaged in scientific practice and communication are also receiving such a comparative advantage over non-scientists.

So, according to Medvecky, science communication is epistemically unjust to knowers – knowers in non-scientific fields. Since these non-scientific knowers are given a comparative credibility deficit (in contrast to scientific knowers), they are wronged in their capacity as knowers.

The Argument

Medvecky’s argument can be formally put as follows:

  1. Science is not a unique and privileged field.
  2. If (1), then science communication creates a credibility excess for science.
  3. Science communication creates a credibility excess for science.
  4. If (3), then science communication is epistemically unjust.
  5. Science communication is epistemically unjust.

Premise (1) is motivated by the claim that there are fields other than science that are equally important to communicate, to popularize, and to have non-specialists engage with. Medvecky claims that not only does non-scientific knowledge exist, such knowledge can be just as reliable as scientific knowledge, just as important to our lives, and just as in need of translation into layman’s terms. So, while scientific knowledge is surely important, it is not alone in this claim.

Premise (2) is motivated by the claim that science communication falsely represents science as a unique and privileged field, since the concerns of science communication lie solely within the domain of science. By only communicating scientific knowledge, and failing to note that there are other worthy domains of knowledge, science communication falsely presents science as a privileged field.

As Medvecky puts it, “Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialised treatment.” (1400) So, science communication falsely represents science as special. Falsely representing a field as special in contrast to other fields creates a comparative credibility excess for that field and the members of it.

So, science communication implies that other fields are not as worthy of such engagement by falsely treating science as a unique and privileged field. This gives science and scientists a comparative credibility excess to these other disciplines and their practitioners.

(3) follows validly from (1) and (2). If (1) and (2) are true, science communication creates a credibility excess for science.

Premise (4) is motivated by Medina’s (2011) work on epistemic injustice. Epistemic injustice occurs when someone is harmed in their capacity as a knower. While Fricker limited epistemic injustice (and testimonial injustice in particular) to cases where someone was given a credibility deficit, Medina has forcefully argued that credibility excesses are equally problematic since credibility assessments are often comparative.

Given the comparative nature of credibility assessments, parties can be epistemically harmed even if they are not given a credibility deficit. If other parties are given credibility excesses, a similar epistemic harm can be brought about due to comparative assessments of credibility. So, if science communication gives science a credibility excess, science communication will be epistemically unjust.

(5) follows validly from (3) and (4). If (3) and (4) are true, science communication is epistemically unjust.
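Since the argument’s force rests entirely on these two validity claims, its bare inferential skeleton (two applications of modus ponens) can be made fully explicit. The following is a minimal sketch in the Lean proof assistant; the propositional labels are our own placeholders, not anything from Medvecky’s text:

```lean
-- Propositional placeholders (our labels, not Medvecky's):
--   U : science is not a unique and privileged field
--   C : science communication creates a credibility excess for science
--   E : science communication is epistemically unjust
example (U C E : Prop)
    (p1 : U)        -- premise (1)
    (p2 : U → C)    -- premise (2): if (1), then (3)
    (p4 : C → E)    -- premise (4): if (3), then (5)
    : E :=
  have p3 : C := p2 p1  -- (3) follows from (1) and (2) by modus ponens
  p4 p3                 -- (5) follows from (3) and (4) by modus ponens
```

The formalization simply confirms that the argument stands or falls with the truth of premises (1), (2), and (4), which is where the discussion below directs its criticism.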

The Problems

While Medvecky’s argument is provocative, we believe that it is also problematic. In what follows we motivate a series of objections to his argument. Our focus here will be on the premises that most directly relate to epistemic injustice. So, for our purposes, we are willing to grant premise (1). Even granting (1), there are significant problems with both (2) and (4). Highlighting these issues will be our focus.

We begin with our principal concerns regarding (2). These concerns are best seen by first granting that (1) is true – granting that science is not a unique and privileged field. Even granting that (1) is true, science communication would not create a credibility excess. First, it is important to try to locate the source of the alleged credibility excess. Science communicators do deserve a higher degree of credibility in distributing scientific knowledge than non-scientists. When it comes to scientific matters, we should trust the scientists more. So, the claim cannot be that non-scientists should be afforded the same amount of credibility on scientific matters as scientists.

The problem might be thought to be that scientists enjoy a credibility excess in virtue of their scientific credibility somehow carrying over to non-scientific fields where they are less credible. While Medvecky does briefly consider such an issue, this too is not his primary concern in this paper.[2] Medvecky’s fundamental concern is that science communication represents scientific questions and knowledge as more valuable than questions and knowledge in other domains. According to Medvecky, science communication does this by only distributing scientific knowledge when this is not unique and privileged (premise (1)).

But do you represent a domain as more important or valuable just because you don’t talk about other domains? Perhaps an individual who only discussed science in every context would imply that scientific information is the only information worth communicating, but such a situation is quite different from the one we are considering.

For one thing, science communication occurs within a given context, not across all contexts. Further, since that context is expressly about communicating science, it is hard to see how one could reasonably infer that knowledge in other domains is less valuable. Let’s consider an analogy.

Philosophy professors tend to only talk about philosophy during class (or at least let’s suppose). Should students in a philosophy class conclude that other domains of knowledge are less valuable since the philosophy professor hasn’t talked about developments in economics, history, biology, and so forth during class? Given that the professor is only talking about philosophy in one given context, and this context is expressly about communicating philosophy, such inferences would be unreasonable.

A Problem of Overreach

We can further see that there is an issue with (2) because it both overgeneralizes and is overly demanding. Let’s consider these in turn. If (2) is true, then the problem of creating credibility excesses is not unique to science communication. When it comes to knowledge distribution, science communication is far from the only practice/field to have a narrow and limited focus regarding which knowledge it distributes.

So, if there are multiple fields worthy of such engagement (granting (1)), any practice/field that is not concerned with distributing all such knowledge will be guilty of generating a similar credibility excess (or at least trying to). For instance, the American Philosophical Association (APA) is concerned with distributing philosophical knowledge and knowledge related to the discipline of philosophy. They exclusively fund endeavors related to philosophy and public initiatives with a philosophical focus. If doing so is sufficient for creating a credibility excess, given that other fields are equally worthy of such attention, then the APA is creating a credibility excess for the discipline of philosophy. This doesn’t seem right.

Alternatively, consider a local newspaper. This paper is focused on distributing knowledge about local issues. Suppose that it also is involved in the community, both sponsoring local events and initiatives that make the local news more engaging. Supposing that there is nothing unique or privileged about this town, Medvecky’s argument for (2) would have us believe that the paper is creating a credibility excess for the issues of this town. This too is the wrong result.

This overgeneralization problem can also be seen by considering a practical analogy. Suppose that a bakery only sells and distributes baked goods. If there is nothing unique and privileged about baked goods – if there are other equally important goods out there (the parallel of premise (1)) – then Medvecky’s reasoning would have it that the bakery is guilty of a kind of injustice by virtue of not being in the business of distributing those other (equally valuable) goods.

The problem is that omissions in distribution don’t have the implications that Medvecky supposes. The fact that an individual or group is not in the business of distributing some kind of good does not imply that those goods are less valuable.

There are numerous legitimate reasons why one may employ limitations regarding which goods one chooses to distribute, and these limitations do not imply that the other goods are somehow less valuable. Returning to the good of knowledge, focusing on distributing some knowledge (while not distributing other knowledge) does not imply that the other knowledge is less valuable.

This overgeneralization problem leads to an overdemanding problem with (2). The overdemanding problem concerns what all would be required of distributors (whether of knowledge or more tangible goods) in order to avoid committing injustice. If omissions in distribution had the implications that Medvecky supposes, then distributors, in order to avoid injustice, would have to refrain from limiting the goods they distribute.

If (2) is true, then science communication must fairly and equally distribute all knowledge in order to avoid injustice. And, as the problem of creating credibility excesses is not unique to science communication, this would apply to all other fields that involve knowledge distribution as well. The problem here is that avoiding injustice requires far too much of distributors.

An Analogy to Understand Avoiding Injustice

Let’s consider the practical analogy again to see how avoiding injustice is overdemanding. To avoid injustice, the bakery must sell and distribute much more than just baked goods. It must sell and distribute all the other goods that are as equally important as the baked ones it offers. The bakery would, then, have to become a supermarket or perhaps even a superstore in order to avoid injustice.

Requiring the bakery to offer a lot more than baked goods is not only overly demanding but also unfair. The bakery does not stock the other goods it would be required to offer in order to avoid injustice. It may not even have the means needed to get these goods, which may itself be part of its reason for limiting the goods it offers.

As it is overdemanding and unfair to require the bakery to sell and distribute all goods in order to avoid injustice, it is overdemanding and unfair to require knowledge distributors to distribute all knowledge. Just as the bakery does not have non-baked goods to offer, those involved in science communication likely do not have the relevant knowledge in the other fields.

Thus, if they are required to distribute that knowledge also, they are required to do a lot of homework. They would have to learn about everything in order to justly distribute all knowledge. This is an unreasonable expectation. Even if they were able to do so, they would not be able to distribute all knowledge in a timely manner. Requiring this much of distributors would slow down the distribution of knowledge.

Furthermore, just as the bakery may not have the means needed to distribute all the other goods, distributors may not have the time or other means to distribute all the knowledge that they are required to distribute in order to avoid injustice. It is reasonable to utilize an epistemic division of labor (including in knowledge distribution), much like there are divisions of labor more generally.

Credibility Excess

A final issue with Medvecky’s argument concerns premise (4). Premise (4) claims that the credibility excess in question results in epistemic injustice. While it is true that a credibility excess can result in epistemic injustice, it need not. So, we need reasons to believe that this particular kind of credibility excess results in epistemic injustice. One reason to think that it does not has to do with the meaning of the term ‘epistemic injustice’ itself.

As it was introduced to the literature by Fricker, and as it has been used since, ‘epistemic injustice’ does not simply refer to any harms to a knower but rather to a particular kind of harm that involves identity prejudice—i.e. prejudice related to one’s social identity. Fricker claims that, “the speaker sustains a testimonial injustice if and only if she receives a credibility deficit owing to identity prejudice in the hearer.” (28)

At the core of both Fricker’s and Medina’s accounts of epistemic injustice is the relation between unfair credibility assessments and prejudices that distort the hearer’s perception of the speaker’s credibility. Prejudices about particular groups are what unfairly affect (positively or negatively) the epistemic authority and credibility hearers grant to the members of such groups.

Mere epistemic errors in credibility assessments, however, do not create epistemic injustice. While a credibility excess may result in an epistemic harm, whether this is a case of epistemic injustice depends upon the reason why that credibility excess is given. Fricker and Medina both argue that in order for an epistemic harm to be an instance of epistemic injustice, it must be systematic. That is, the epistemic harm must be connected to an identity prejudice that renders the subject at the receiving end of the harm susceptible to other types of injustices besides testimonial.

Fricker argues that epistemic injustice is the product of prejudices that “track” the subject through different dimensions of social activity (e.g. economic, professional, political, religious, etc.). She calls these “tracker prejudices” (27). When tracker prejudices lead to epistemic injustice, this injustice is systematic because it is systematically connected to other kinds of injustice.

Thus, a prejudice is systematic when it persistently affects the subject’s credibility in various social directions. Medina accepts this and argues that credibility excess results in epistemic injustice when it is caused by a pattern of wrongful differential treatment that stems in part from mismatches between reality and the social imaginary, which he defines as the collectively shared pool of information that provides the social perceptions against which people assess each other’s credibility (Medina 2011).

He claims that a prejudiced social imaginary is what establishes and sustains epistemic injustices. As such, prejudices are crucial in determining whether credibility excesses result in epistemic injustice. If the credibility excess stems from a systematically prejudiced social imaginary, then it results in epistemic injustice. If systematic prejudices are absent, then, even if there is credibility excess, there is no epistemic injustice.

Systematic Prejudice

For there to be epistemic injustice, then, the credibility excess must carry over across contexts and must be produced and sustained by systematic identity prejudices. This does not happen in Medvecky’s account given that the kind of credibility excess that he is concerned with is limited to the context in which science communication occurs.

Thus, even if there were credibility excess, and this credibility excess led to epistemic harms, such harms would not amount to epistemic injustice given that the credibility excess does not extend across contexts. Further, the kind of credibility excess that Medvecky is concerned with is not linked to systematic identity prejudices.

In his argument, Medvecky does not consider prejudices. Rather than credibility excesses being granted due to a prejudiced social imaginary, Medvecky argues that the credibility excess attributed to science communicators stems from omission. According to him, science communication as a practice and as a discipline is epistemically unjust because it creates credibility excess by implying (through omission) that science is the only reliable field worthy of engagement.

On Medvecky’s account, the reason for the attribution of credibility excess is not prejudice but rather the limited focus of science communication. Thus, he argues that merely by not distributing knowledge from fields other than science, science communication creates a credibility excess for science that is worthy of the label of ‘epistemic injustice’. Medvecky acknowledges that Fricker would not agree that this credibility assessment results in injustice given that it is based on credibility excess rather than credibility deficits, which is itself why he bases his argument on Medina’s account of epistemic injustice.

However, given that Medvecky ignores the kind of systematic prejudice that is necessary for epistemic injustice under Medina’s account, it seems like Medina would not agree, either, that these cases are of the kind that result in epistemic injustice.[3] Even if omissions in the distribution of knowledge had the implications that Medvecky supposes, and it were the case that science communication indeed created a credibility excess for science in this way, this kind of credibility excesses would still not be sufficient for epistemic injustice as it is understood in the literature.

Thus, it is not the case that science communication is, as Medvecky argues, fundamentally epistemically unjust because the reasons why the credibility excess is attributed have nothing to do with prejudice and do not occur across contexts. While it is true that there may be epistemic harms that have nothing to do with prejudice, such harms would not amount to epistemic injustice, at least as it is traditionally understood.

Conclusion

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that epistemic injustice lies at the very foundation of science communication. While we agree that there are numerous ways that scientific practices are epistemically unjust, the fact that science communication involves only communicating science does not have the consequences that Medvecky maintains.

We have seen several reasons to deny that failing to distribute other kinds of knowledge implies that they are less valuable than the knowledge one does distribute, as well as reasons to believe that the term ‘epistemic injustice’ wouldn’t apply to such harms even if they did occur. So, while thought-provoking and bold, Medvecky’s argument should be resisted.

Contact details: j.matheson@unf.edu, n01051115@ospreys.unf.edu

References

Dotson, K. (2011). Tracking Epistemic Violence, Tracking Practices of Silencing. Hypatia 26(2): 236–257.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Medina, J. (2011). The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology, 25(1), 15–35.

Medvecky, F. (2018). Fairness in Knowing: Science Communication and Epistemic Justice. Science and Engineering Ethics 24: 1393-1408.

[1] This is Fricker’s description; see Fricker (2007, p. 1).

[2] Medvecky considers Richard Dawkins being given more credibility than he deserves on matters of religion due to his credibility as a scientist.

[3] A potential response to this point could be to consider scientism as a kind of prejudice akin to sexism or racism. Perhaps an argument can be made where an individual has the identity of ‘science communicator’ and receives credibility excess in virtue of an identity prejudice that favors science communicators. Even still, to be an epistemic injustice this excess must track the individual across contexts, as the identities related to sexism and racism do. For it to do so, a successful argument must be given for there being a ‘pro science communicator’ prejudice that is similar in effect to ‘pro male’ and ‘pro white’ prejudices. If this is what Medvecky has in mind, then we need to hear much more about why we should buy the analogy here.

Author Information: Luca Tateo, Aalborg University & Federal University of Bahia, luca@hum.aau.dk.

Tateo, Luca. “Ethics, Cogenetic Logic, and the Foundation of Meaning.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 1-8.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44i

Mural entitled “Paseo de Humanidad” on the Mexican side of the US border wall in the city of Heroica Nogales, in Sonora. Art by Alberto Morackis, Alfred Quiróz and Guadalupe Serrano.
Image by Jonathan McIntosh, via Flickr / Creative Commons

 

This essay is in reply to: Miika Vähämaa (2018) Challenges to Groups as Epistemic Communities: Liminality of Common Sense and Increasing Variability of Word Meanings, Social Epistemology, 32:3, 164-174, DOI: 10.1080/02691728.2018.1458352

In his interesting essay, Vähämaa (2018) discusses two issues that I find particularly relevant. The first one concerns the foundation of meaning in language, which becomes problematic in the era of connectivism (Siemens, 2005) and post-truth (Keyes, 2004). The second issue is the appreciation of epistemic virtues in a collective context: how can the group enhance the epistemic skills of the individual?

I will try to explain why these problems are relevant and why it is worth developing Vähämaa’s (2018) reflection in the specific direction of group and person as complementary epistemic and ethical agents (Fricker, 2007). First, I will discuss the foundations of meaning in different theories of language. Then, I will discuss the problems related to the stability and liminality of meaning in the society of “popularity”. Finally, I will propose the idea that the range of contemporary epistemic virtues should be supplemented by an ethical grounding of meaning and a cogenetic foundation of meaning.

The Foundation of Meaning in Language

The theories about the origins of human language can be grouped into four main categories, based on the elements characterizing ontogenesis and glottogenesis.

Sociogenesis Hypothesis (SH): the idea that language is a conventional product that historically originates from coordinated social activities and is ontogenetically internalized through individual participation in social interactions. The characteristic authors in SH are Wundt, Wittgenstein, and Vygotsky (2012).

Praxogenesis Hypothesis (PH): the idea that language historically originates from praxis and coordinated actions. Ontogenetically, language emerges from sensorimotor coordination (e.g. gaze coordination). This is, for instance, the position of Mead, the idea of linguistic primes in Smedslund (Vähämaa, 2018), and the language-as-action theory of Austin (1975).

Phylogenesis Hypothesis (PhH): the idea that humans have been provided by evolution with an innate “language device”, emerging from the evolutionary preference for forming social groups of hunters and for long-duration collective offspring care (Bouchard, 2013). Ontogenetically, the predisposition for language is wired in the brain and develops through maturation in social groups. This position is represented by evolutionary psychology and by innatism such as Chomsky’s linguistics.

Structure Hypothesis (StH): the idea that human language is a more or less logical system, in which the elements are determined by reciprocal systemic relationships, partly conventional and partly ontic (Thao, 2012). This hypothesis is not really concerned with ontogenesis, but rather with the formal features of symbolic systems of distinctions. It is, for instance, the classical idea of Saussure and of structuralists like Derrida.

According to Vähämaa (2018), every theory of meaning today has to deal with the problem of a dramatic change in the way common sense knowledge is produced, circulated, and modified in collective activities. Meaning needs some stability in order to be of collective utility. Moreover, meaning needs some validation to become stable.

The PhH solves this problem with a simple idea: if humans have survived and evolved, their evolutionary strategy about meaning is successful. In a natural “hostile” environment, our ancestors must have found a way to communicate such that a danger would be understood in the same way by all group members and under different conditions, including when the danger is not actually present, as in bonfire tales or myths.

The PhH becomes problematic when we consider the post-truth era. What would be the evolutionary advantage of deconstructing the environmental foundations of meaning, even in a virtual environment? For instance, what would be the evolutionary advantage of the common sense belief that global warming is not a reality, considering that this false belief could bring mankind to extinction?

StH leads to the view of meaning as a configuration of formal conditions. Thus, stability is guaranteed by structural relations of the linguistic system, rather than by the contribution of groups or individuals as epistemic agents. StH cannot account for the rapidity and liminality of meaning that Vähämaa (2018) attributes to common sense nowadays. SH and PH share the idea that meaning emerges from what people do together, and that stability is both the condition and the product of the fact that we establish contexts of meaningful actions, ways of doing things in a habitual way.

The problem today is that our accelerated Western capitalistic societies have multiplied the ways of doing and the number of groups in society, decoupling the habitual from the common sense meaning. New habits, new words, personal actions and meanings are built, disseminated, and destroyed in a short time. So, if “Our lives, with regard to language and knowledge, are fundamentally bound to social groups” (Vähämaa, 2018, p. 169), what happens to language and to knowledge when social groups multiply, segregate, and disappear in a short time?

From Common Sense to the Bubble

The grounding of meaning in the group as epistemic agent has received a serious blow in the era of connectivism and post-truth. The idea of connectivism is that knowledge is distributed among the different agents of a collective network (Siemens, 2005). Knowledge does not reside in the “mind” or in a “memory”, but is rather produced in bits and pieces that the epistemic agent is required to search for and assemble through the collective effort of the group’s members.

Thus, depending on the configuration of the network, different information will be connected, and different pictures of the world will emerge. The meaning of words will be different if, for instance, the network of information is aggregated by different groups in combination with specific algorithms. The configuration of groups, mediated by social media, as in the case of contemporary politics (Lewandowsky, Ecker & Cook, 2017), leads to the reproduction of “bubbles” of people who share the very same views and are exposed to the very same opinions, selected by an algorithm that shows only the content compliant with their previous content preferences.

The result is that the group loses a great deal of the epistemic capability that Vähämaa (2018) suggests as a foundation of meaning. The meaning of words preferred in this kind of epistemic bubble is the result of two operations of selection that are based on popularity. First, meaning will be aggregated by consensual agents, rather than dialectic ones. Meaning will always be convergent rather than controversial.

Second, between alternative meanings, the most “popular” will be chosen, rather than the most reliable. The epistemic bubble of connectivism originates from a misunderstanding. The idea is that a collectivity has more epistemic force than the individual alone, to the extent that any belief is scrutinized democratically and that, if every agent can contribute its own bit, knowledge will be more reliable, because it is the result of constant and massive peer review. Unfortunately, events show us a different picture.
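These two selection operations can be made concrete with a small illustration. The following is a minimal sketch of our own (the names, stances, and data are hypothetical, not drawn from Tateo or Siemens) of a feed filter that first keeps only stance-confirming posts and then ranks the survivors by popularity:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    stance: str   # e.g. "pro" or "contra" on some topic
    likes: int    # popularity signal

def bubble_feed(posts: list[Post], user_stance: str, top_k: int = 3) -> list[Post]:
    """Illustrative 'epistemic bubble' filter.

    Selection 1 (consensus): keep only posts that agree with the user's
    prior stance, so dissenting views never appear.
    Selection 2 (popularity): rank the survivors by likes, so the most
    popular claim wins regardless of its reliability.
    """
    agreeing = [p for p in posts if p.stance == user_stance]
    return sorted(agreeing, key=lambda p: p.likes, reverse=True)[:top_k]

if __name__ == "__main__":
    posts = [
        Post("Warming is a hoax", "contra", likes=900),
        Post("Warming is real, says the IPCC", "pro", likes=300),
        Post("Warming is exaggerated", "contra", likes=150),
    ]
    # A "contra" user never sees the "pro" post at all.
    for p in bubble_feed(posts, user_stance="contra"):
        print(p.likes, p.text)
```

In such a pipeline, dissenting items are never shown, however reliable they might be; within the bubble, popularity is the only remaining criterion of selection.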

Post-truth is actually a massive act of epistemic injustice (Fricker, 2007), to the extent that the reliability of the other as epistemic agent is based on criteria of similarity, rather than on dialectic. One is reliable as long as one is located within my own bubble. Everything outside is “fake news”. The algorithmic selection of information contributes to reinforcing the polarization. Thus, no hybridization becomes possible, and the common sense (Vähämaa, 2018) is reduced to the common bubble. How can the epistemic community still be a source of meaning in the connectivist era?

Meaning and Common Sense

SH and PH about language point to a very important historical source: the philosopher Giambattista Vico (Danesi, 1993; Tateo, 2015). Vico can be considered the scholar of common sense and imagination (Tateo, 2015). Knowledge is built as a product of human experience and crystallized into the language of a given civilization. Civilization is the set of interpretations and solutions that different groups have found in response to common existential events, such as birth, death, mating, natural phenomena, etc.

According to Vico, all the human beings share a fate of mortal existence and rely on each other to get along. This is the notion of common sense: the profound sense of humanity that we all share and that constitutes the ground for human ethical choices, wisdom and collective living. Humans rely on imagination, before reason, to project themselves into others and into the world, in order to understand them both. Imagination is the first step towards the understanding of the Otherness.

When humans lose contact with this sensus communis, the shared sense of humanity, and start building their meaning on egoism or on pure rationality, civilizations slip into barbarism. Imagination thus gives access to intersubjectivity, the capability of feeling the other, while common sense constitutes the wisdom of developing ethical beliefs that will not harm the other. Vico’s ideas are echoed and made present by critical theory:

“We have no doubt (…) that freedom in society is inseparable from enlightenment thinking. We believe we have perceived with equal clarity, however, that the very concept of that thinking (…) already contains the germ of the regression which is taking place everywhere today. If enlightenment does not [engage in] reflection on this regressive moment, it seals its own fate (…) In the mysterious willingness of the technologically educated masses to fall under the spell of any despotism, in its self-destructive affinity to nationalist paranoia (…) the weakness of contemporary theoretical understanding is evident.” (Horkheimer & Adorno, 2002, xvi)

Common sense is the basis for the wisdom that allows one to question the foundational nature of the bubble. It is the basis for understanding that every meaning is not only defined in a positive way, but is also defined by its complementary opposite (Tateo, 2016).

When one uses the semantic prime “we” (Vähämaa, 2018), one immediately produces a system of meaning that implies the existence of a “non-we”: one is producing otherness. In return, the meaning of “we” can only be clearly defined through the clarification of who is “non-we”. Meaning is always cogenetic (Tateo, 2015). Without the capability to understand that by saying “we” people construct a cogenetic complex of meaning, the group is reduced to a self-confirming, self-reinforcing collective, in which the sense of being a valid epistemic agent is actually faked, because it is nothing but an act of epistemic arrogance.

How can we solve the problem of the epistemic bubble and give the relationship between group and person a real epistemic value? How can we overcome the dangerous overlap between the sense of being functional in the group and false beliefs based on popularity?

Complementarity Between Meaning and Sense

My idea is that we must look in that complex space between “meaning”, understood as a collectively shared complex of socially constructed significations, and “sense”, understood as the very personal elaboration of meaning which is based on the person’s uniqueness (Vygotsky, 2012; Wertsch, 2000). Meaning and sense feed into each other, like common sense and imagination. Imagination is the psychic function that enables the person to feel into the other, and thus to establish the ethical and affective ground for common sense wisdom. It is the empathic movement for which Kant would later seek a logical foundation.

“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.” (Kant 1993, p. 36. 4:429)

I would further claim that they feed into each other: the logical foundation is made possible by the synthetic power of empathic imagination. On the one hand, the collective is the origin of internalized psychic activities (SH), and thus the basis for the sense elaborated about one’s own unique life experience. On the other hand, the personal sense constitutes the basis for the externalization of meaning into the arena of collective activities, constantly innovating the meaning of words.

So, personal sense can be a strong antidote to the prevailing force of the meaning produced, for instance, in the epistemic bubble. My sense of what is “ought”, “empathic”, “human” and “ethic”, in other words my wisdom, can help me to develop a critical stance towards meanings that are built in a self-feeding, uncritical way.

Can the dialectic, complementary and cogenetic relationship between sense and meaning become the ground for a better epistemic performance, and for an appreciation of the liminal meaning produced in contemporary societies? In the last section, I will try to provide arguments in favor of this idea.

Ethical Grounding of Meaning

If connectivist and post-truth societies produce meanings that are based on popularity checks, rather than on epistemic appreciation, we risk having a situation in which any belief is the contingent result of a collective epistemic agent which replicates its patterns into bubbles. One will just listen to messages that confirm one’s own preferences and beliefs and reject the different ones as unreliable. Inside the bubble there is no way to check the meaning, because the meaning is not cogenetic; it is consensual.

For instance, if I read and share a post on social media claiming that migrants are the main criminal population, then, whatever my initial position toward the news, there is the possibility that within my group I will start to see only posts confirming the initial claim. The claim can be proven wrong, for instance by the press, but the belief will be hard to change, as the meaning of “migrant” in my bubble is likely to continue being that of “criminal”. The collectivity will share an epistemically unjust position, to the extent that it will attribute a lessened epistemic capability to those who are not part of the group itself. How can one avoid the group scaffolding “bad” epistemic skills, rather than empowering the individual (Vähämaa, 2018)?

The solution I propose is to develop an epistemic virtue based on two main principles: the ethical grounding of meaning and cogenetic logic. The ethical grounding of meaning is directly related to the articulation between common sense and wisdom in the sense of Vico (Tateo, 2015). In a post-truth world in which we cannot appreciate the epistemic foundation of meaning, we must rely on a different epistemic virtue in order to become critical toward messages. Ethical grounding, based on the personal sense of humanity, is of course not an epistemic test of reliability, but it is an alarm bell that makes one legitimately suspicious toward meanings. The second element of the new epistemic virtue is cogenetic logic (Tateo, 2016).

Meaning is grounded in the building of every belief as a complementary system between “A” and “non-A”. This implies that any meaning is constructed through the relationship with its complementary opposite. The truth emerges in a double dialectic movement (Silva Filho, 2014): through Socratic dialogue and through cogenetic logic. In conclusion, let me try to provide a practical example of this epistemic virtue.

The way to start discriminating potentially fake news or tendentious interpretations of facts would be essentially based on an ethical foundation. As in Vico’s wisdom of common sense, I would base my epistemic scrutiny on the imaginative work that allows me to access the other, and on the cogenetic logic that assumes every meaning is defined by its relationship with its opposite.

Let’s imagine that we are exposed to a post on social media, in which someone states that a caravan of migrants, which is travelling from Honduras across Central America toward the USA border, is actually made of criminals sent by hostile foreign governments to destabilize the country right before elections. The same post claims that it is a conspiracy and that all the press coverage is fake news.

Finally, the post presents some “debunking” pictures showing athletic young Latino men with their faces covered by scarves, to demonstrate that the caravan is not made up of families with children, but of “soldiers” in good shape who don’t look as poor and desperate as the “mainstream” media claim. I do not know whether such a post has ever been made; I have simply assembled elements of very common discourses circulating in social media.

The task is now to assess the nature of this message, its meaning, and its reliability. I could rely on the group as a ground for assessing statements, to scrutinize their truth and justification. However, due to the “bubble” effect, I may fall into a simple tautological confirmation, owing to the configuration of the network of my relations. I would probably find only posts confirming the statements and delegitimizing the opposite positions. In this case, the fact that the group will empower my epistemic confidence is a very dangerous element.

I could, of course, search for alternative positions in order to establish a dialogue. However, I might not be able, alone, to find information that can help me assess the statement with respect to its degree of bias. How can I exert my skepticism in a context of post-truth? I propose some initial epistemic moves, based on a common sense approach to meaning-making.

1) I must be skeptical of every message that uses violent, aggressive, or discriminatory language, and treat such a message as “fake” by default.

2) I must be skeptical of every message that criminalizes or attacks whole social groups, even on the basis of real but isolated events, because this interpretation is biased by default.

3) I must be skeptical of every message that attacks or targets persons for their characteristics rather than discussing ideas or behaviors.

Assessing the hypothetical post about the caravan by the three rules mentioned above, one will immediately see that it violates all of them. Thus, no matter what information is collected by my epistemic bubble, I have justified reasons to be skeptical towards it. The foundation of the meaning of the message will lie neither in the group nor in the person. It will be based on the ethical position of common sense’s wisdom.

Contact details: luca@hum.aau.dk

References

Austin, J. L. (1975). How to do things with words. Oxford: Oxford University Press.

Bouchard, D. (2013). The nature and origin of language. Oxford: Oxford University Press.

Danesi, M. (1993). Vico, metaphor, and the origin of language. Bloomington: Indiana University Press.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Horkheimer, M., & Adorno, T. W. (2002). Dialectic of Enlightenment. Trans. Edmund Jephcott. Stanford: Stanford University Press.

Kant, I. (1993) [1785]. Grounding for the Metaphysics of Morals. Translated by Ellington, James W. (3rd ed.). Indianapolis and Cambridge: Hackett.

Keyes, R. (2004). The Post-Truth Era: Dishonesty and Deception in Contemporary Life. New York: St. Martin’s.

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353-369.

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1) http://www.itdl.org/Journal/Jan_05/article01.htm

Silva Filho, W. J. (2014). Davidson: Dialog, dialectic, interpretation. Utopía y praxis latinoamericana, 7(19).

Tateo, L. (2015). Giambattista Vico and the psychological imagination. Culture & Psychology, 21(2), 145-161.

Tateo, L. (2016). Toward a cogenetic cultural psychology. Culture & Psychology, 22(3), 433-447.

Thao, T. D. (2012). Investigations into the origin of language and consciousness. New York: Springer.

Vähämaa, M. (2018). Challenges to Groups as Epistemic Communities: Liminality of Common Sense and Increasing Variability of Word Meanings, Social Epistemology, 32:3, 164-174, DOI: 10.1080/02691728.2018.1458352

Vygotsky, L. S. (2012). Thought and language. Cambridge, MA: MIT press.

Wertsch, J. V. (2000). Vygotsky’s Two Minds on the Nature of Meaning. In C. D. Lee & P. Smagorinsky (Eds.), Vygotskian perspectives on literacy research: Constructing meaning through collaborative inquiry (pp. 19-30). Cambridge: Cambridge University Press.

Author Information: Jeff Kochan, University of Konstanz, jwkochan@gmail.com.

Kochan, Jeff. “Decolonising Science in Canada: A Work in Progress.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 42-47.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-43i

A Mi’kmaw man and woman in ceremonial clothing.
Image by Shawn Harquail via Flickr / Creative Commons

 

This essay is in reply to:

Wills, Bernard (2018). ‘Weak Scientism: The Prosecution Rests.’ Social Epistemology Review & Reply Collective 7(10): 31-36.

In a recent debate about scientism in the SERRC pages, Bernard Wills challenges the alleged ‘ideological innocence’ of scientism by introducing a poignant example from his own teaching experience on the Grenfell Campus of Memorial University, in Corner Brook, Newfoundland (Wills 2018: 33).

Note that Newfoundland, among its many attractions, claims a UNESCO World Heritage site called L’Anse aux Meadows. Dating back about 1000 years, L’Anse aux Meadows is widely agreed to hold archaeological evidence for the earliest encounters between Europeans and North American Indigenous peoples.

Southwest Newfoundland is a part of Mi’kma’ki, the traditional territory of the Mi’kmaq. This territory also includes Nova Scotia, Prince Edward Island, and parts of New Brunswick, Québec, and Maine. Among North America’s Indigenous peoples, the Mi’kmaq can readily claim to have experienced some of the earliest contact with European culture.

Creeping Colonialism in Science

Let us now turn to Wills’s example. A significant number of students on the Grenfell Campus are Mi’kmaq. These students have sensitised Wills to the fact that science has been used by the Canadian state as an instrument for colonial oppression. By cloaking colonialism in the claim that science is a neutral, universal standard by which to judge the validity of all knowledge claims, state scientism systematically undermines the epistemic authority of ancient Mi’kmaq rights and practices.

Wills argues, ‘[t]he fact that Indigenous knowledge traditions are grounded in local knowledge, in traditional lore and in story means that on questions of importance to them Indigenous peoples cannot speak. It means they have to listen to others who “know better” because the propositions they utter have the form of science.’ Hence, Wills concludes that, in the Canadian context, the privileging of science over Indigenous knowledge ‘is viciously exploitative and intended to keep indigenous peoples in a place of dependency and inferiority’ (Wills 2018: 33-4).

There is ample historical and ethnographic evidence available to support Wills’s claims. John Sandlos, for example, has shown how the Canadian state, from the late 19th century to around 1970, used wildlife science as a ‘coercive’ and ‘totalizing influence’ in order to assert administrative control over Indigenous lives and lands in Northern Canada (Sandlos 2007: 241, 242).

Paul Nadasdy, in turn, has argued that more recent attempts by the Canadian state to establish wildlife co-management relationships with Indigenous groups are but ‘subtle extensions of empire, replacing local Aboriginal ways of talking, thinking and acting with those specifically sanctioned by the state’ (Nadasdy 2005: 228). The suspicions of Wills’s Mi’kmaw students are thus well justified by decades of Canadian state colonial practice.

Yet Indigenous peoples in Canada have also pointed out that, while this may be most of the story, it is not the whole story. For example, Wills cites Deborah Simmons in support of his argument that the Canadian state uses science to silence Indigenous voices (Wills 2018: 33n4). Simmons certainly does condemn the colonial use of science in the article Wills cites, but she also writes: ‘I’ve seen moments when there is truly a hunger for new knowledge shared by indigenous people and scientists, and cross-cultural barriers are overcome to discuss research questions and interpret results from the two distinct processes of knowledge production’ (Simmons 2010).

Precious Signs of Hope Amid Conflict

In the haystack of Canada’s ongoing colonial legacy, it can often be very difficult to detect such slivers of co-operation between scientists and Indigenous peoples. For example, after three decades of periodic field work among the James Bay Cree, Harvey Feit still found it difficult to accept Cree claims that they had once enjoyed a long-term, mutually beneficial relationship with the Canadian state in respect of wildlife management in their traditional hunting territories. But when Feit finally went into the archives, he discovered that it was true (Feit 2005: 269; see also the discussion in Kochan 2015: 9-10).

In a workshop titled Research the Indigenous Way, part of the 2009 Northern Governance and Policy Research Conference, held in Yellowknife, Northwest Territories, participants affirmed that ‘Indigenous people have always been engaged in research processes as part of their ethical “responsibility to keep the land alive”’ (McGregor et al. 2010: 102). At the same time, participants also recognised Indigenous peoples’ ‘deep suspicion’ of research as a vehicle for colonial exploitation (McGregor et al. 2010: 118).

Yet, within this conflicted existential space, workshop participants still insisted that there had been, in the last 40 years, many instances of successful collaborative research between Indigenous and non-Indigenous practitioners in the Canadian North. According to one participant, Alestine Andre, these collaborations, although now often overlooked, ‘empowered and instilled a sense of well-being, mental, physical, emotional, spiritual good health in their Elders, youth and community people’ (McGregor et al. 2010: 108).

At the close of the workshop, participants recommended that research not be rejected, but instead indigenised, that is, put into the hands of Indigenous practitioners ‘who bear unique skills for working in the negotiated space that bridges into and from scientific and bureaucratic ways of knowing’ (McGregor et al. 2010: 119). Indigenised research should both assert and strengthen Indigenous rights and self-government.

Furthermore, within this indigenised research context, ‘there is a role for supportive and knowledgeable non-Indigenous researchers, but […] these would be considered “resource people” whose imported research interests and methods are supplementary to the core questions and approach’ (McGregor et al. 2010: 119).

Becoming a non-Indigenous ‘resource person’ in the context of decolonising science can be challenging work, and may offer little professional reward. As American archaeologist, George Nicholas, observes, it ‘requires more stamina and thicker skin than most of us, including myself, are generally comfortable with – and it can even be harmful, whether one is applying for permission to work on tribal lands or seeking academic tenure’ (Nicholas 2004: 32).

Indigenous scholar Michael Marker, at the University of British Columbia, has likewise suggested that such research collaborations require patience: in short, ‘don’t rush!’ (cited by Wylie 2018). Carly Dokis and Benjamin Kelly, both of whom study Indigenous water-management practices in Northern Ontario, also emphasise the importance of listening, of ‘letting go of your own timetable and relinquishing control of your project’ (Dokis & Kelly 2014: 2). Together with community-based researchers, Dokis and Kelly are exploring new research methodologies, above all the use of ‘storycircles’ (https://faculty.nipissingu.ca/carlyd/research/).

Such research methods are also being developed elsewhere in Canada. The 2009 Research the Indigenous Way workshop, mentioned above, was structured as a ‘sharing circle,’ a format that, according to the workshop facilitators, ‘reflect[ed] the research paradigm being talked about’ (McGregor et al. 2010: 101). Similarly, the 13th North American Caribou Workshop a year later, in Winnipeg, Manitoba, included an ‘Aboriginal talking circle,’ in which experiences and ideas about caribou research were shared over the course of one and a half days. The ‘relaxed pace’ of the talking circle ‘allowed for a gradual process of relationship-building among the broad spectrum of Aboriginal nations, while providing a scoping of key issues in caribou research and stewardship’ (Simmons et al. 2012: 18).

Overcoming a Rational Suspicion

One observation shared by many participants in the caribou talking circle was the absence of Indigenous youth in scientific discussions. According to the facilitators, an important lesson learned from the workshop was that youth need to be part of present and future caribou research in order for Indigenous knowledge to survive (Simmons et al. 2012: 19).

This problem spans the country and all scientific fields. As Indigenous science specialist Leroy Little Bear notes, the Canadian Royal Commission on Aboriginal Peoples (1991-1996) ‘found consistent criticism among Aboriginal people in the lack of curricula in schools that were complimentary to Aboriginal peoples’ (Little Bear 2009: 17).

This returns us to Wills’s Mi’kmaw students at the Grenfell Campus in Corner Brook. A crucial element in decolonising scientific research in Canada is the encouragement of Indigenous youth interest in scientific ways of knowing nature. Wills’s observation that Mi’kmaw students harbour a keen suspicion of science as an instrument of colonial oppression points up a major obstacle to this community process. Under present circumstances, Indigenous students are more likely to drop out of, rather than to tune into, the science curricula being taught at their schools and universities.

Mi’kmaw educators and scholars are acutely aware of this problem, and they have worked assiduously to overcome it. In the 1990s, a grass-roots initiative between members of the Mi’kmaw Eskasoni First Nation and a handful of scientists at nearby Cape Breton University (CBU), in Nova Scotia, began to develop and promote a new ‘Integrative Science’ programme for CBU’s syllabus. Their goal was to reverse the almost complete absence of Indigenous students in CBU’s science-based courses by including Mi’kmaw and other Indigenous knowledges alongside mainstream science within the CBU curriculum (Bartlett et al. 2012: 333; see also Hatcher et al. 2009).

In Fall Term 2001, Integrative Science (in Mi’kmaw, Toqwa’tu’kl Kjijitaqnn, or ‘bringing our knowledges together’) became an accredited university degree programme within CBU’s already established 4-year Bachelor of Science Community Studies (BScCS) degree (see: http://www.integrativescience.ca). In 2008, however, the suite of courses around which the programme had been built was disarticulated from both the BScCS and the Integrative Science concentration, and was instead offered within ‘access programming’ for Indigenous students expressing interest in a Bachelor of Arts degree. The content of the courses was also shifted to mainstream science (Bartlett et al. 2012: 333).

Throughout its 7-year existence, the Integrative Science academic programme faced controversy within CBU; it was never assigned a formal home department or budget (Bartlett et al. 2012: 333). Nevertheless, the programme succeeded in meeting its original goal. Over those 7 years, 27 Mi’kmaw students with some programme affiliation graduated with a science or science-related degree, 13 of them with a BScCS concentration in Integrative Science.

In 2012, most of these 13 graduates held key service positions within their home communities (e.g., school principal, research scientist or assistant, job coach, natural resource manager, nurse, teacher). These numbers compare favourably with the fewer than 5 Indigenous students who graduated with a science or science-related degree, unaffiliated with Integrative Science, both before and during the life of the programme (Bartlett et al. 2012: 334). All told, up to 2007, about 100 Mi’kmaw students had participated in first-year Integrative Science courses at CBU (Bartlett et al. 2012: 334).

From its inception, Integrative Science operated under an axe, facing, among other things, chronic ‘inconsistencies and insufficiencies at the administrative, faculty, budgetary and recruitment levels’ (Bartlett 2012: 38). One could lament its demise as yet one more example of the colonialism that Wills has brought to our attention in respect of the Grenfell Campus in Corner Brook. Yet it is important to note that the culprit here was not science, as such, but a technocratic – perhaps scientistic – university bureaucracy. In any case, it seems inadequate to chalk up the travails of Integrative Science to an indiscriminate search for administrative ‘efficiencies’ when the overall nation-state context was and is, in my opinion, a discriminatory one.

When Seeds Are Planted, Change Can Come

But this is not the note on which I would like to conclude. To repeat, up to 2007, about 100 Mi’kmaw students had participated in first-year Integrative Science courses. That is about 100 Mi’kmaw students who are, presumably, less likely to hold the firmly negative attitude towards science that Wills has witnessed among his own Mi’kmaw students in Newfoundland.

As I wrote above, in the haystack of Canada’s ongoing colonial legacy, it can be very difficult to detect those rare slivers of co-operation between scientists and Indigenous peoples on which I have here tried to shine a light. If this light were allowed to go out, a sense of hopelessness could follow, and then an allegedly hard border between scientific and Indigenous knowledges may suddenly spring up and appear inevitable, if also, for some, lamentable.

Let me end with the words of Albert Marshall, who, at least up to 2012, was the designated voice on environmental matters for Mi’kmaw Elders in Unama’ki (Cape Breton), as well as a member of the Moose Clan. Marshall was a key founder and constant shepherd of CBU’s Integrative Science degree programme. One last time: some 100 Mi’kmaw students participated in that programme during its brief life. Paraphrased by his CBU collaborator, Marilyn Iwama, Elder Marshall had this to say:

Every year, the ash tree drops its seeds on the ground. Sometimes those seeds do not germinate for two, three or even four cycles of seasons. If the conditions are not right, the seeds will not germinate. […] [Y]ou have to be content to plant seeds and wait for them to germinate. You have to wait out the period of dormancy. Which we shouldn’t confuse with death. We should trust this process. (Bartlett et al. 2015: 289)

Contact details: jwkochan@gmail.com

References

Bartlett, Cheryl (2012). ‘The Gift of Multiple Perspectives in Scholarship.’ University Affairs / Affaires universitaires 53(2): 38.

Bartlett, Cheryl, Murdena Marshall, Albert Marshall and Marilyn Iwama (2015). ‘Integrative Science and Two-Eyed Seeing: Enriching the Discussion Framework for Healthy Communities.’ In Lars K. Hallstrom, Nicholas Guehlstorf and Margot Parkes (eds), Ecosystems, Society and Health: Pathways through Diversity, Convergence and Integration (Montréal: McGill-Queens University Press), pp. 280-326.

Bartlett, Cheryl, Murdena Marshall and Albert Marshall (2012). ‘Two-Eyed Seeing and Other Lessons Learned within a Co-Learning Journey of Bringing Together Indigenous and Mainstream Knowledges and Ways of Knowing.’ Journal of Environmental Studies and Sciences 2: 331-340.

Dokis, Carly and Benjamin Kelly (2014). ‘Learning to Listen: Reflections on Fieldwork in First Nation Communities in Canada.’ Canadian Association of Research Ethics Boards Pre and Post (Sept): 2-3.

Feit, Harvey A. (2005). ‘Re-Cognizing Co-Management as Co-Governance: Visions and Histories of Conservation at James Bay.’ Anthropologica 47: 267-288.

Hatcher, Annamarie, Cheryl Bartlett, Albert Marshall and Murdena Marshall (2009). ‘Two-Eyed Seeing in the Classroom Environment: Concepts, Approaches, and Challenges.’ Canadian Journal of Science, Mathematics and Technology Education 9(3): 141-153.

Kochan, Jeff (2015). ‘Objective Styles in Northern Field Science.’ Studies in the History and Philosophy of Science 52: 1-12. https://doi.org/10.1016/j.shpsa.2015.04.001

Little Bear, Leroy (2009). Naturalizing Indigenous Knowledge, Synthesis Paper. University of Saskatchewan, Aboriginal Education Research Centre, Saskatoon, Sask. and First Nations and Adult Higher Education Consortium, Calgary, Alta. https://www.afn.ca/uploads/files/education/21._2009_july_ccl-alkc_leroy_littlebear_naturalizing_indigenous_knowledge-report.pdf  [Accessed 05 November 2018]

McGregor, Deborah, Walter Bayha & Deborah Simmons (2010). ‘“Our Responsibility to Keep the Land Alive”: Voices of Northern Indigenous Researchers.’ Pimatisiwin: A Journal of Aboriginal and Indigenous Community Health 8(1): 101-123.

Nadasdy, Paul (2005). ‘The Anti-Politics of TEK: The Institutionalization of Co-Management Discourse and Practice.’ Anthropologica 47: 215-232.

Nicholas, George (2004). ‘What Do I Really Want from a Relationship with Native Americans?’ The SAA Archaeological Record (May): 29-33.

Sandlos, John (2007). Hunters at the Margin: Native People and Wildlife Conservation in the Northwest Territories (Vancouver: UBC Press).

Simmons, Deborah (2010). ‘Residual Stalinism.’ Upping the Anti #11. http://uppingtheanti.org/journal/article/11-residual-stalinism [Accessed 01 November 2018]

Simmons, Deborah, Walter Bayha, Danny Beaulieu, Daniel Gladu & Micheline Manseau (2012). ‘Aboriginal Talking Circle: Aboriginal Perspectives on Caribou Conservation (13th North American Caribou Workshop).’ Rangifer, Special Issue #20: 17-19.

Wills, Bernard (2018). ‘Weak Scientism: The Prosecution Rests.’ Social Epistemology Review & Reply Collective 7(10): 31-36.

Wylie, Alison (2018). ‘Witnessing and Translating: The Indigenous/Science Project.’ Keynote address at the workshop Philosophy, Archaeology and Community Perspectives: Finding New Ground, University of Konstanz, 22 October 2018.

 

Author Information: Arianna Falbo, Brown University, Arianna_Falbo@brown.edu.

Falbo, Arianna. “Spitting Out the Kool-Aid: A Review of Kate Manne’s Down Girl: The Logic of Misogyny.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 12-17.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40A

The years of far-right rhetoric about Hillary Clinton have formed a real-time theatre of misogyny, climaxing at the 2016 Presidential election.
Image by DonkeyHotey via Flickr / Creative Commons

 

Kate Manne’s Down Girl breathes new life into an underexplored yet urgently important topic. Using a diverse mixture of current events, empirical findings, and literary illustrations, Manne guides her reader through the underbelly of misogyny: its nature, how it relates to and differs from sexism, and why, in supposedly post-patriarchal societies, it’s “still a thing.”[1]

Chapter 1 challenges the standard dictionary-definition or “naïve conception” of misogyny, as Manne calls it. This view understands misogyny primarily as a psychological phenomenon, operative in the minds of men. Accordingly, misogynists are disposed to hate all or most women because they are women.

The naïve conception fails because it renders misogyny virtually non-existent and, as a result, politically inert. Misogynists need not feel hatred towards all or even most women. A misogynist may love his mother or other women with whom he shares close personal relationships. Manne insists that this should not detract from his being an outright misogynist. For example, the naïve view fails to make sense of how Donald Trump could love his daughter while simultaneously being misogyny’s poster boy. A different analysis is needed.

Following Haslanger (2012), Manne outlines her “ameliorative” project in chapter 2. She aims to offer an analysis of misogyny that is politically and theoretically useful; an analysis that will help to reveal the stealthy ways misogyny operates upon its perpetrators, targets, and victims. Manne argues that misogyny should be understood in terms of its social function: what it does to women and girls.

On her view misogyny functions to uphold patriarchal order: it punishes women who transgress and rewards those who abide.[2] Misogyny is thus selective: it does not target all women wholesale, but singles out those who protest against patriarchal prescriptions. In Manne’s words: “misogyny primarily targets women because they are women in a man’s world…rather than because they are women in a man’s mind.”[3]

Chapter 3 outlines, what I take to be, one of the most original and illuminating insights of the book, a conceptual contrast between sexism and misogyny. Manne dubs sexism the “justificatory” branch of patriarchal order: it has the job of legitimizing patriarchal norms and gender roles. Misogyny, on the other hand, is the “law enforcement” branch: it patrols and upholds patriarchal order. Both misogyny and sexism are unified by a common goal “to maintain or restore a patriarchal social order.”[4]

In Chapter 4, Manne discusses the gender coded give/take economy that she takes to be at the heart of misogyny’s operation.[5] Patriarchal order dictates that women have an obligation to be givers of certain feminine-coded goods and services such as affection, sex, and reproductive labour.

Correspondingly, men are the entitled recipients of these goods and services in addition to being the takers of certain masculine-coded privileges, including public influence, honour, power, money, and leadership. When men fail to receive these feminine-coded goods, which patriarchal order deems they are entitled to, backlash may ensue. What’s more, women who seek masculine-coded privileges, for example, leadership positions or other forms of power and prestige, are in effect violating a patriarchal prohibition. Such goods are not theirs for the taking—women are not entitled takers, but obligated givers.

In chapter 5, Manne considers a popular “humanist” kind of view according to which misogyny involves thinking of women as sub-human, non-persons, lifeless objects, or mere things. She turns this view on its head. She argues that “her personhood is held to be owed to others in the form of service labour, love, and loyalty.”[6] As per the previous chapter, women are socially positioned as human givers. Manne contends that misogyny is not about dehumanization, but about men feeling entitled to the human service of women. She pushes this even further by noting that in some cases, when feminine-coded human goods and services are denied, it is men who may face feelings of dehumanization.[7]

Chapter 6, in my opinion, is where a lot of the action happens. In this chapter Manne presents the much-needed concept of himpathy: the undue sympathy that is misdirected away from victims and towards perpetrators of misogynistic violence.[8] She explains how certain exonerating narratives, such as “the golden boy”, function to benefit highly privileged (normally: white, non-disabled, cis, heterosexual, etc.) men who commit violent acts against women.[9]

In this chapter Manne also draws upon and adds to the growing literature on testimonial injustice. Testimonial injustice occurs when a speaker receives either a deficit or a surplus of credibility owing to a prejudice on the part of the hearer.[10] Manne discusses how, in cases of he said/she said testimony involving accusations of sexual assault, privileged men may be afforded excess credibility, thereby undermining the credibility of victims; there is only so much credibility to go around.[11]

This, she notes, may lead to the complete erasure, or “herasure” as Manne calls it, of the victim’s story altogether.[12] Credibility surpluses and deficits, she says, “often serve the function of buttressing dominant group members’ current social position, and protecting them from downfall in the existing social hierarchy.”[13] Exonerating narratives puff up privileged men and, as a result, deflate the credibility of women who speak out against them. These unjust distributions of credibility safeguard dominant men against downward social mobility. In a slogan: “testimonial injustice as hierarchy preservation.”[14]

In Chapter 7, Manne discusses why victims of misogynistic violence who seek moral support and attention are regularly met with suspicion, threats, and outright disbelief. Patriarchy dictates that women are human givers of moral support and attention, not recipients (as per the arguments of chapter 4). Drawing moral attention towards women who are victimized by misogyny attempts to disrupt patriarchy’s divisions of moral labour. Manne says that this is “tantamount to the server asking for service, the giver expecting to receive…it is withholding a resource and simultaneously demanding it.”[15]

In chapter 8, Manne explores how misogyny contributed to Hillary Clinton’s loss of the 2016 US presidential election. She claims that misogyny routinely targets women who infringe upon man’s historical turf; women who try to take what patriarchal order decrees as the jobs and privileges reserved for men. Overstepping or trespassing upon his territory often results in misogynistic retaliation. Such women are seen as “greedy, grasping, and domineering; shrill and abrasive; corrupt and untrustworthy”[16] or, in the words of the current President of the United States, “nasty.”[17]

Down Girl ends by discussing the prospects of overcoming misogyny. At one point Manne says, as if to shrug her shoulders and throw up her arms in despair: “I give up.”[18] Later, in a subsequent interview, Manne claims she did not intend for this to be a discouraging statement, but a “liberating declaration.”[19] It is an expression of her entitlement to bow out of this discussion (for now), after having said her piece and making conversational space for others to continue.

In my opinion, Down Girl is essential reading for any serious feminist, moral, or political scholar. The proposed analysis of misogyny is lucid and accessible while at the same time remaining acutely critical and rigorous. The text does not get bogged down in philosophical jargon or tedious digressions. As such, this book would be fairly congenial to even the philosophically uninitiated reader. I highly recommend it to both academics and non-academic alike. Moreover, Manne’s addition of “himpathy” and “herasure” to the philosophical lexicon helps to push the dialectic forward in innovative and insightful ways.

Despite being on such a sombre and depressing topic, I found this book to be engrossing and, for the most part, enjoyable to read. Manne has an inviting writing style and the book is scattered with a number of brilliant quips, clever examples, and gripping case studies.  Though, be warned, there are certainly sections that might reasonably be difficult, uncomfortable, and potentially triggering. Down Girl examines some of the most fraught and downright chilling aspects of our current social and political atmosphere; including real life depictions of horrific violence against women, as well as the attendant sympathy (himpathy) that is often given to those who perpetrate it. This is to be expected in a book on the logic of misogyny, but it is nonetheless important for readers to be extra cognisant.

After finishing the book, I have one main concern regarding the explanatory reach of the analysis. Recall that on Manne’s account: “misogyny’s primary function and constitutive manifestation is the punishment of ‘bad’ women, and policing of women’s behavior.”[20] Misogyny’s operation consists in a number of “down girl moves” designed to keep women in line when they fail to “know their place” in a man’s world.[21] She emphasizes the retaliatory nature of misogyny, how it functions analogously to a shock collar: fail to do as patriarchy demands and risk being shocked.[22]

I worry, though, that this emphasis on punishing patriarchy’s rebels fails to draw adequate attention to how misogyny can target women for what appears to be nothing more than the simple reason that he is dominant over her. It is not only rebels who are misogyny’s targets and victims, but also patriarchy’s cheerleaders and “good” girls. (Though, those who protest are presumably more vulnerable and have greater targets on their backs.)

Perhaps the analogy is better thought of not in terms of him shocking her when she fails to obey patriarchal order, but him administering shocks whenever he sees fit, be it for a perceived failure of obedience or simply because he is the one with the controller. Or, to use another analogy that picks up on Manne’s “policing” and “law enforcement” language, maybe misogyny is characterized best as a crooked cop, one who will pull you over for a traffic violation, but also one who will stop you simply because he feels he can, for he is the one with the badge and gun.

A woman might play her role in a man’s world to a tee; she may be happily compliant, she may give him all of her feminine-coded goods, in the right manner, in the right amount, at the right time, and so on. She may never threaten to overstep historical gender roles, nor does she attempt to cultivate masculine-coded privileges. She may even add fuel to patriarchy’s fire by policing other women who disobey. Even still, despite being on her very best behaviour, she too can be victimized by misogynistic violence. Why? It remains unclear to me how Manne’s analysis could offer a satisfying answer. While I deeply admire the proposal, I am curious how it captures non-corrective cases of misogyny that do not aim to punish for (apparent) violations of patriarchal order.

Manne notes that a major motivation for her writing is “to challenge some of the false moral conclusions we swallow with the Kool-Aid of patriarchal ideology.”[23] I came away from this book having learned a great deal about the insidious ways misogyny operates to put women and girls down; many a Kool-Aid has been spit out. Down Girl also plants fertile seeds for future research on misogyny, a topic desperately in need of more careful attention and intelligent investigation.

In the preface Manne says that: “ultimately, it will take a village of theorists to gain a full understanding of the phenomena.”[24] This book makes headway in offering theorists a myriad of conceptual tools and resources needed to facilitate and push the discussion forward. I anticipate that Down Girl will be a notable benchmark for many fruitful discussions to come.

Contact details: Arianna_Falbo@brown.edu

References

Berenson, Tessa. “Presidential Debate: Trump Calls Clinton ‘Nasty Woman’.” Time, 20 Oct. 2016, time.com/4537960/donald-trump-hillary-clinton-nasty-woman-debate/.

Bullock, Penn. “Transcript: Donald Trump’s Taped Comments About Women.” The New York Times, 8 Oct. 2016, nytimes.com/2016/10/08/us/donald-trump-tape-transcript.html.

Cleary, Skye C. “It Takes Many Kinds to Dismantle a Patriarchal Village.” Los Angeles Review of Books, 2 Mar. 2018, lareviewofbooks.org/article/takes-many-kinds-dismantle-patriarchal-village/.

Davis, Emmalon. “Typecasts, Tokens, and Spokespersons: A Case for Credibility Excess as Testimonial Injustice.” Hypatia, 2016.

Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. New York: Oxford University Press, 2007.

Haslanger, Sally. Resisting Reality. New York: Oxford University Press, 2012.

Manne, Kate. Down Girl: The Logic of Misogyny. New York: Oxford University Press, 2017.

Medina, José. The Epistemology of Resistance. New York: Oxford University Press, 2012.

Medina, José. “The Relevance of Credibility Excess in a Proportional View of Epistemic Injustice: Differential Epistemic Authority and the Social Imaginary.” Social Epistemology, 2011.

Penaluna, Regan. “Kate Manne: The Shock Collar That Is Misogyny.” Guernica, 7 Feb. 2018, https://www.guernicamag.com/kate-manne-why-misogyny-isnt-really-about-hating-women/.

Yap, Audrey. “Credibility Excess and the Social Imaginary in Cases of Sexual Assault.” Feminist Philosophy Quarterly, 2017.

[1] Manne (2017): xxi.

[2] Manne (2017): 72.

[3] Ibid: 69.

[4] Ibid: 80.

[1] At least as it manifests in the cultures of the United States, the United Kingdom, and Australia, which are the focus of Manne’s analysis. Cf. ibid: fn. 3.

[6] Ibid: 173.

[7] Ibid: 173.

[8] Ibid: 197.

[9] Ibid: 197.

[10] Cf. Fricker (2007), though Fricker focuses primarily upon credibility deficits. See Davis (2016), Medina (2011, 2012), and Yap (2017), among others, for discussions of how credibility surpluses can also constitute testimonial injustice.

[11] See Manne’s discussion of Medina (2011) who stresses this point, 190.

[12] Ibid: 209-14.

[13] Manne (2017): 194.

[14] Ibid: 185.

[15] Ibid: 304.

[16] Ibid: 303.

[17] Berenson (2016).

[18] Manne (2017): 300.

[19] Cleary (2018).

[20] Manne (2017): 192.

[21] Ibid: 68.

[22] Cf. Penaluna (2018).

[23] This is from an interview with Los Angeles Review of Books; see Cleary (2018).

[24] Manne (2017): xiii.

Author Information: Stephen John, Cambridge University, sdj22@cam.ac.uk

John, Stephen. “Transparency, Well-Ordered Science, and Paternalism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 30-33.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Zf


Image by Sergio Santos and http://nursingschoolsnearme.com, via Flickr / Creative Commons

 

Should a physician tell you that you have cancer, even if she thinks this would cause you needless distress? Of course she should! How, though, should she convey that news? Imagine three stylised options. Dr Knowsbest is certain you should have your cancer operated on, so tells you the news in a way which vividly highlights the horrors of cancer, but downplays the risk of an operation.

Dr Neutral, by contrast, simply lists all of the facts about your cancer, your prognosis, your possible treatment options, their likely benefits and risks and so on. Finally, Dr Sensitive reports only those aspects of your condition and those risks of surgery which she judges that you, given your values and interests, would want to know about.

Many Methods to Reveal

We can, I hope, all agree that Dr Knowsbest’s communicative strategies and choices are ethically problematic, because she acts in a paternalistic manner. By contrast, Dr Neutral does not act paternalistically. In this regard, at least, Dr Neutral’s strategies are ethically preferable to Dr Knowsbest’s strategies. What about the choice between Knowsbest and Sensitive? In one sense, Dr Sensitive acts paternalistically, because she controls and structures the flow of information with the aim of improving your well-being.

However, there is an important difference between Dr Sensitive and Dr Knowsbest; the former aims solely to improve your epistemic well-being, such that you can better make a choice which aligns with your own values, whereas the latter aims to influence or override your judgment. Knowsbest’s “moral paternalism” is wrong for reasons which are absent in the case of Sensitive’s “epistemic paternalism” (Ahlstrom-Vij, 2013).

Therefore, plausibly, both the Neutral and Sensitive strategies are ethically preferable to Knowsbest. What, though, of the choice between these two communicative strategies? First, I am not certain that it is even possible to report all the facts in a neutral way (for more, see below). Second, even if it is possible, Dr Sensitive’s strategy seems preferable; her strategy, if successful, positively promotes – as opposed to merely failing to interfere with – your ability to make autonomous choices.

At least at an abstract, ideal level, then, we have good reason to want informants who do more than merely list facts, but who are sensitive to their audiences’ epistemic situation and abilities and their evaluative commitments; we want experts who “well-lead” us. In my recent paper in Social Epistemology, I argued that certain widely-endorsed norms for science communication are, at best, irrelevant, and, at worst, dangerous (John 2018). We should be against transparency, openness, sincerity and honesty.

It’s a Bit Provocative

One way of understanding that paper is as following from the abstract ideal of sensitive communication, combined with various broadly sociological facts (for example, about how audiences identify experts). I understand why my article put Moore in mind of a paradigm case of paternalism. However, reflection on the hypothetical example suggests we should also be against “anti-paternalism” as a norm for science communication; not because Knowsbest’s strategy is fine, but, rather, because the term “paternalism” tends to bundle together a wide range of practices, not all of which are ethically problematic, and some of which promote – rather than hinder – audiences’ autonomy.

Beyond the accusation of paternalism, Moore’s rich and provocative response focuses on my scepticism about transparency. While I argued that a “folk philosophy of science” can lead audiences to distrust experts who are, in fact, trustworthy, he uses the example of HIV-AIDS activism to point to the epistemic benefits of holding scientists to account, suggesting that “it is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science”. I agree entirely that such a dynamic is possible; indeed, his example shows it does happen!

However, conceding this possibility does not show that we must endorse a norm of transparency, because, ultimately, the costs may still be greater than the benefits. Much here depends on the mechanisms by which transparency and engagement are enacted. Moore suggests one model for such engagement, via the work of “trust proxies”, such as ACT-UP. As he acknowledges, however, although proxies may be better-placed than lay-people to identify when science is flawed, we now create a new problem for the non-expert: to adapt a distinction from Goldman’s work, we must decide which “putative proxies” are “true proxies” (Goldman, 2001).

Plausibly, this problem is even harder than Goldman’s problem of distinguishing the “true experts” among the “putative experts”, because in the latter case we have some sense of the credentials and so on which signal experthood. Again, I am tempted to say, then, that it is unclear that transparency, openness or engagement will necessarily lead to better, rather than worse, socio-epistemic outcomes.

Knowledge From Observation and Practice

Does that mean my arguments against transparency are in the clear? No. First, many of the issues here turn on the empirical details; maybe careful institutional design can allow us to identify trustworthy trust-proxies, whose work promotes good science. Second, and more importantly, the abstract model of sensitive communication is an ideal. In practice, it is easy to fail to meet this ideal, in ways which undermine, rather than respect or promote, hearers’ autonomy.

For example, rather than tailor her communication to what her audiences do care about, Dr Sensitive might tailor what she says to what she thinks they ought to care about; as a result, she might leave out information which is relevant to their choices given their values, while including information which is irrelevant. An influential strain in recent philosophy of science suggests that non-epistemic value judgments do and must run deep in practices of justification; as such, even a bald report of what a study showed may, implicitly, encode or endorse value judgments which are not shared by the audience (Douglas, 2000).

Reporting claims when, and only when, they meet a certain confidence level may, for example, implicitly rely on assumptions about the relative disvalue of false positives and false negatives; in turn, it may be difficult to justify such assumptions without appeal to non-epistemic values (John, 2015). As such, even Dr Neutral may be unable to communicate in ways which are truly neutral with respect to her audience’s values. In short, it may be hard to hand over our epistemic autonomy to experts without also handing over our moral autonomy.
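The point about thresholds can be put in a minimal decision-theoretic form (my sketch, not John’s own formalism). Let c_FP and c_FN be the disvalues assigned to false positives and false negatives, and let α be the confidence threshold for reporting a claim:

```latex
\[
  L(\alpha) \;=\; c_{FP}\,\Pr(\text{report} \mid \text{claim false})
            \;+\; c_{FN}\,\Pr(\text{withhold} \mid \text{claim true})
\]
```

The threshold that minimizes the expected loss L shifts with the ratio c_FP : c_FN, and fixing that ratio is precisely the kind of non-epistemic value judgment at issue.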

This problem means that, for research to be trustworthy, it is not enough that the researchers’ claims are true; the claims must also be, at the least, neutral with respect to audiences’ values and, at best, aligned with them. Plausibly, greater engagement and transparency may help ensure such value alignment. One might understand the example of ACT-UP along these lines: activist engagement ensured that scientists did “good science” not only in a narrow, epistemic sense of “good” – more, or more accurate, data and hypotheses were generated – but in a broader sense of being “well-ordered”, producing knowledge that better reflected the concerns and interests of the broader community (Kitcher, 2003).

Whether engagement improves epistemic outcomes narrowly construed is a contingent matter, heavily dependent on the details of the case. By contrast, engagement may be necessary for science to be “well-ordered”. In turn, transparency may be necessary for such engagement. At least, that is the possibility I would push were I to criticise my own conclusions in line with Moore’s concerns.

A Final Sting

Unfortunately, there is a sting in the tail. Developing effective frameworks for engagement and contestation may require us to accept that scientific research is not, and cannot be, fully “value free”. To the extent that such an assumption is a commitment of our “folk philosophy of science”, then developing the kind of rigorous engagement which Moore wants may do as much to undermine, as promote, our trust in true experts. Moore is surely right that the dynamics of trust and distrust are even more complex than my paper suggested; unfortunately, they might be even more complex again than he suggests.

Contact details: sdj22@cam.ac.uk

References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. Springer.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of science, 67(4), 559-579.

Goldman, A (2001) “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63(1), 85–110.

John, S. (2015). Inductive risk and the contexts of communication. Synthese, 192(1), 79-96.

John, S. (2018). Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75-87.

Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.

Author Information: Claus-Christian Carbon, University of Bamberg, ccc@experimental-psychology.com

Carbon, Claus-Christian. “A Conspiracy Theory is Not a Theory About a Conspiracy.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 22-25.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yb

See also:

  • Dentith, Matthew R. X. “Expertise and Conspiracy Theories.” Social Epistemology 32, no. 3 (2018), 196-208.

The power, creation, imagery, and proliferation of conspiracy theories are fascinating avenues to explore in the construction of public knowledge and the manipulation of the public for nefarious purposes. Their role in constituting our pop cultural imaginary and as central images in political propaganda are fertile ground for research.
Image by Neil Moralee via Flickr / Creative Commons

 

The simplest and most natural definition of a conspiracy theory is a theory about a conspiracy. Although this definition seems appealing due to its simplicity and straightforwardness, the problem is that most narratives about conspiracies do not fulfill the necessary requirements of being a theory. In everyday speech, mere descriptions, explanations, or even beliefs are often termed “theories”; such loose usage of this technical term is not useful in the context of scientific activities.

Here, a theory does not aim to explain one specific event in time, e.g. the moon landing of 1969 or the assassination of President Kennedy in 1963, but aims at explaining a phenomenon on a very general level; e.g. that things with mass as such gravitate toward one another, independently of the specific natures of such entities. Such an epistemological status is rarely achieved by conspiracy theories, especially ones about specific events in time. Even the more general claim that so-called chemtrails (i.e. long-lasting condensation trails) are initiated by omnipotent organizations across the planet, across time zones and altitudes, is at most a hypothesis, and a rather narrow one, which specifically addresses one phenomenon but lacks the capability to make predictions about other phenomena.

Narratives that Shape Our Minds

So-called conspiracy theories have had a great impact on human history, on the social interaction between groups, on attitudes towards minorities, and on trust in state institutions. There is very good reason to include “conspiracy theories” in the canon of influential narratives, and so it is only logical to direct substantial scientific effort into explaining and understanding how they operate, how people come to believe in them, and how humans pile up knowledge on the basis of these narratives.

A quick look at publications registered by Clarivate Analytics’ Web of Science turns up 605 records with “conspiracy theories” as the topic (effective date 7 May 2018). These contributions come mostly from psychology (n=91) and political science (n=70), with a steep increase from about 2013 on, probably due to a special issue (“Research Topic”) in the journal Frontiers in Psychology organized in 2012 and 2013 by Viren Swami and Christopher Charles French.

As we have repeatedly argued (e.g., Raab, Carbon, & Muth, 2017), conspiracy theories are a very common phenomenon. Most people believe in at least some of them (Goertzel, 1994), which already indicates that believers in them do not belong to a minority group, but that it is more or less the conditio humana to include such narratives in the everyday belief system.

So first of all, we can state that most of such beliefs are neither pathological nor rare (see Raab, Ortlieb, Guthmann, Auer, & Carbon, 2013), but are largely caused by “good”[1] narratives triggered by context factors (Sapountzis & Condor, 2013) such as a distrusted society. The wide acceptance of many conspiracy theories can further be explained by adaptation effects that bias the standard beliefs (Raab, Auer, Ortlieb, & Carbon, 2013). This view is not undisputed, as many authors identify specific pathological personality traits such as paranoia (Grzesiak-Feldman & Ejsmont, 2008; Pipes, 1997) which cause, enable or at least proliferate the belief in conspiracy theories.

In fact, in science we mostly encounter the pathological and pejorative view of conspiracy theories and their believers. This negative connotation, and hence the prejudice toward conspiracy theories, makes it hard to solidly test the facts, ideas or relationships proposed by such explanatory structures (Rankin, 2017). Especially in the case of conspiracy theories of so-called “type I”, in which authorities (“the system”) are blamed for conspiracies (Wagner-Egger & Bangerter, 2007), such a prejudice can potentially jeopardize the democratic system (Bale, 2007).

Some of the conspiracies described in conspiracy theories, those said to take place at top state levels, could indeed threaten people’s freedom, democracy, and even people’s lives, especially if they turned out to be “true” (e.g. the case of the whistleblower and previously alleged conspiracist Edward Snowden; see Van Puyvelde, Coulthart, & Hossain, 2017).

Understanding What a Theory Genuinely Is

In the present paper, I will focus on another, yet highly important, point which is hardly addressed at all: is the term “conspiracy theory” an adequate term at all? In fact, the suggestion of a conspiracy theory being a “theory about a conspiracy” (Dentith, 2014, p. 30) is indeed the simplest and seemingly most straightforward definition of “conspiracy theory”. Although appealing and allegedly logical, the term as such is ill-defined. Actually, a “conspiracy theory” refers to a narrative which attributes an event to a group of conspirators. As such, it is clearly justified to associate such a narrative with the term “conspiracy”, but does a conspiracy theory have the epistemological status of a theory?

The simplest definition of a “theory” is that it represents a bundle of hypotheses which can explain a wide range of phenomena. Theories have to integrate the contained hypotheses in a concise, coherent, and systematic way. They have to go beyond the mere piling up of several statements or unlinked hypotheses. The application of theories allows events or entities which are not explicitly described in the sum of the hypotheses to be generalized and hence predicted.

For instance, one of the most influential physical theories, the theory of special relativity (German original title “Zur Elektrodynamik bewegter Körper”), contains two hypotheses (Einstein, 1905) on whose basis, in addition to already existing theories, we can predict important phenomena which are not explicitly stated in the theory. Most are well aware that mass and energy are equivalent. Whether we are analyzing the energy of a tossed ball or a static car, we can use the very same theory. Whether the ball is red, or whether it is a blue ball thrown by Napoleon Bonaparte, does not matter; we just need to refer to the mass of the ball. In fact, we are only interested in the mass as such; the ball itself no longer plays a role. Other theories show similar predictive power: for instance, they can predict (more or less precisely) events in the future, the behaviour of various types of material in a magnetic field, or the trajectories of objects of different speeds under gravitational force.
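For reference, the equivalence alluded to here is standardly written as follows (textbook notation, not part of Carbon’s text):

```latex
\[ E = mc^2 \]
```

where m is the body’s mass and c the speed of light in vacuum. Nothing about the ball’s colour or its thrower enters the formula; that indifference to everything but mass is exactly the generality that conspiracy narratives about single events lack.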

Most conspiracy theories, however, refer to one single historical event. Looking through the “most enduring conspiracy theories” compiled in 2009 by TIME magazine on the 40th anniversary of the moon landing, it is instantly clear that they have explanatory power for just the specific events on which they are based, e.g. the “JFK assassination” in 1963, the “9/11 cover-up” in 2001, the “moon landings were faked” idea from 1969 or the “Paul is dead” storyline about Paul McCartney’s alleged secret death in 1966. In fact, such theories are just singular explanations, mostly ignoring counter-facts, alternative explanations and already given replies (Votsis, 2004).

But what, then, is the epistemological status of such narratives? Clearly, they aim to explain, and sometimes the explanations are indeed compelling, even coherent. What they mostly cannot demonstrate, though, is the ability to predict other events in other contexts. If these narratives belong to this class of explanatory stories, we should be less liberal in calling them “theories”. Unfortunately, it was Karl Popper himself who coined the term “conspiracy theory” in the 1940s (Popper, 1949), the same Popper who advocated very strict criteria for scientific theories and in so doing became one of the most influential philosophers of science (Suppe, 1977). This imprecise terminology diluted the genuine meaning of (scientific) theories.

Stay Rigorous

From a language-pragmatics perspective, it seems odd to abandon the term conspiracy theory, as it is a well-established and frequently used term in everyday language around the globe. Substitutions like conspiracy narratives, conspiracy stories or conspiracy explanations would fit much better, but acceptance of such terms might be quite low. Nevertheless, we should at least bear in mind that most narratives of this kind cannot qualify as theories and so cannot ground a wider research program, although their contents and implications are often far-reaching, potentially important for society and hence, in some cases, also worth checking.

Contact details: ccc@experimental-psychology.com

References

Bale, J. M. (2007). Political paranoia v. political realism: on distinguishing between bogus conspiracy theories and genuine conspiratorial politics. Patterns of Prejudice, 41(1), 45-60. doi:10.1080/00313220601118751

Dentith, M. R. X. (2014). The philosophy of conspiracy theories. New York: Palgrave.

Einstein, A. (1905). Zur Elektrodynamik bewegter Körper [On the electrodynamics of moving bodies]. Annalen der Physik und Chemie, 17, 891-921.

Goertzel, T. (1994). Belief in conspiracy theories. Political Psychology, 15(4), 731-742.

Grzesiak-Feldman, M., & Ejsmont, A. (2008). Paranoia and conspiracy thinking of Jews, Arabs, Germans and Russians in a Polish sample. Psychological Reports, 102(3), 884.

Pipes, D. (1997). Conspiracy: How the paranoid style flourishes and where it comes from. New York: Simon & Schuster.

Popper, K. R. (1949). Prediction and prophecy and their significance for social theory. Paper presented at the Proceedings of the Tenth International Congress of Philosophy, Amsterdam.

Raab, M. H., Auer, N., Ortlieb, S. A., & Carbon, C. C. (2013). The Sarrazin effect: The presence of absurd statements in conspiracy theories makes canonical information less plausible. Frontiers in Personality Science and Individual Differences, 4(453), 1-8.

Raab, M. H., Carbon, C. C., & Muth, C. (2017). Am Anfang war die Verschwörungstheorie [In the beginning, there was the conspiracy theory]. Berlin: Springer.

Raab, M. H., Ortlieb, S. A., Guthmann, K., Auer, N., & Carbon, C. C. (2013). Thirty shades of truth: conspiracy theories as stories of individuation, not of pathological delusion. Frontiers in Personality Science and Individual Differences, 4(406).

Rankin, J. E. (2017). The conspiracy theory meme as a tool of cultural hegemony: A critical discourse analysis. (PhD), Fielding Graduate University, Santa Barbara, CA.

Sapountzis, A., & Condor, S. (2013). Conspiracy accounts as intergroup theories: Challenging dominant understandings of social power and political legitimacy. Political Psychology. doi:10.1111/pops.12015

Suppe, F. (Ed.) (1977). The structure of scientific theories (2nd ed.). Urbana: University of Illinois Press.

Van Puyvelde, D., Coulthart, S., & Hossain, M. S. (2017). Beyond the buzzword: Big data and national security decision-making. International Affairs, 93(6), 1397-1416. doi:10.1093/ia/iix184

Votsis, I. (2004). The epistemological status of scientific theories: An investigation of the structural realist account. (PhD), London School of Economics and Political Science, London.

Wagner-Egger, P., & Bangerter, A. (2007). The truth lies elsewhere: Correlates of belief in conspiracy theories. Revue Internationale De Psychologie Sociale-International Review of Social Psychology, 20(4), 31-61.

[1] It is important to stress that a “good narrative” in this context means “an appealing story” in which people are interested; by no means does the author want to allow confusion by suggesting the meaning as being “positive”, “proper”, “adequate” or “true”.

Author Information: Paul Faulkner, University of Sheffield, paul.faulkner@sheffield.ac.uk

Faulkner, Paul. “Fake Barns, Fake News.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 16-21.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Y4

Image by Kathryn via Flickr / Creative Commons

 

The Twitter feed of Donald Trump regularly employs the hashtag #FakeNews, and refers to mainstream news outlets — The New York Times, CNN etc. — as #FakeNews media. Here is an example from May 28, 2017.

Whenever you see the words ‘sources say’ in the fake news media, and they don’t mention names …

… it is very possible that those sources don’t exist but are made up by the fake news writers. #FakeNews is the enemy!

It is my opinion that many of the leaks coming out of the White House are fabricated lies made up by the #FakeNews media.[1]

Lies and Falsehoods

Now it is undoubted that both fake news items and fake news media exist. A famous example of the former is the BBC Panorama broadcast about spaghetti growers on April Fool’s Day, 1957.[2] A more recent and notorious example of the latter is the website ChristianTimesNewspaper.com, set up by Cameron Harris to capitalise on Donald Trump’s support during the election campaign (See Shane 2017).

This website published exclusively fake news items; items such as “Hillary Clinton Blames Racism for Cincinnati Gorilla’s Death”, “NYPD Looking to Press Charges Against Bill Clinton for Underage Sex Ring”, and “Protestors Beat Homeless Veteran to Death in Philadelphia”. And it found commercial success with the headline: “BREAKING: ‘Tens of thousands’ of fraudulent Clinton votes found in Ohio warehouse”. This story was eventually shared with six million people and gained widespread traction, which persisted even after it was shown to be fake.

Fake news items and fake news media exist. However, this paper is not interested in this fact so much as the fact that President Trump regularly calls real news items fake, and calls the established news media the fake news media. These aspersions are intended to discredit news items and media. And they have had some remarkable success in doing so: Trump's support has shown considerable resistance to the negative press Trump has received in the mainstream media (Johnson and Weigel 2017).

Moreover, there is some epistemological logic to this: these aspersions insinuate a skeptical argument, and, irrespective of its philosophical merits, this skeptical argument is easy to latch onto and hard to dispel. An unexpected consequence of agreeing with Trump’s aspersions is that these aspersions can themselves be epistemologically rationalized. This paper seeks to develop these claims.

An Illustration from the Heartlands

To start, consider what is required for knowledge. While there is substantial disagreement about the nature of knowledge — finding sufficient conditions is difficult — there is substantial agreement on what is required for knowledge. In order to know: (1) you have to have got it right; (2) it cannot be that you are likely to have got it wrong; and (3) you cannot think that you are likely to have got it wrong. Consider these three necessary conditions on knowledge.

You have to have got it right. This is the most straightforward requirement: knowledge is factive; ‘S knows that p’ entails ‘p’. You cannot know falsehoods, only mistakenly think that you know them. So if you see what looks to you to be a barn on the hill and believe that there is a barn on the hill, you fail to know that there is a barn on the hill if what you are looking at is in fact a barn façade — a fake barn.

It cannot be that you are likely to have got it wrong. This idea is variously expressed in the claims that there is a reliability (Goldman 1979), sensitivity (Nozick 1981), safety (Sosa 2007), or anti-luck (Zagzebski 1994) condition on knowing. That there is such a condition has been acknowledged by epistemologists of an internalist persuasion (Alston 1985; Peacocke 1986). And it is illustrated by the subject's failure to know in the fake barn case (Goldman 1976). This case runs as follows.

Image by Sonja via Flickr / Creative Commons

Henry is driving through the countryside, sees a barn on the hill, and forms the belief that there is a barn on the hill. Ordinarily, seeing that there is a barn on the hill would enable Henry to know that there is a barn on the hill. But the countryside Henry is driving through is peculiar in that there is a proliferation of barn façades — fake barns — and Henry, from the perspective of the highway, cannot tell a genuine barn from a fake barn.

It follows that he would equally form the belief that there is a barn on the hill if he were looking at a fake barn. So his belief that there is a barn on the hill is as likely to be wrong as right. And since it is likely that he has got it wrong, he doesn’t know that there is a barn on the hill. (And he doesn’t know this even though he is looking at a barn on the hill!)
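Henry's predicament admits a simple probabilistic gloss, offered here only as an illustration of the "as likely to be wrong as right" step. Suppose a fraction $f$ of the hill-structures that look like barns from the highway are façades, and that genuine barns and façades are visually indistinguishable from the road. Then the probability that a seen "barn" is genuine is:

\[
\Pr(\text{genuine} \mid \text{looks like a barn}) \;=\; \frac{1-f}{(1-f)+f} \;=\; 1-f.
\]

Where façades proliferate so that $f$ approaches or exceeds $1/2$, Henry's belief fails the second requirement on knowledge even on those occasions when it happens to be true.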

You cannot think that you are likely to have got it wrong. This condition can equally be illustrated by the fake barns case. Suppose Henry learns, say from a guidebook to this part of the countryside, that fake barns are common in this area. In this case, he would no longer believe, on seeing a barn on the hill, that there was a barn on the hill. Rather, he would retreat to the more cautious belief that there was something that looked like a barn on the hill, which might be a barn or might be a barn façade. Or at least this is the epistemically correct response to this revelation.

And were Henry to persist in his belief that there is a barn on the hill, there would be something epistemically wrong with this belief; it would be unreasonable, or unjustified. Such a belief, it is then commonly held, could not amount to knowledge (Sosa 2007). Notice: the truth of Henry's worry about the existence of fake barns doesn't matter here. Even if the guidebook is a tissue of falsehoods and there are no fake barns, once Henry believes that fake barns abound, it ceases to be reasonable to believe that a seen barn on the hill is in fact a barn on the hill.
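Summing up, and at the risk of over-regimenting the informal statement, the three requirements can be put schematically, writing $K_S\,p$ for "S knows that p", $B_S$ for "S believes that", and $E_S$ for S's evidence:

\[
\begin{aligned}
&(1)\quad K_S\,p \rightarrow p && \text{(factivity: you have got it right)}\\
&(2)\quad K_S\,p \rightarrow \Pr(\neg p \mid E_S) \text{ is low} && \text{(you are not likely to have got it wrong)}\\
&(3)\quad K_S\,p \rightarrow \neg B_S(\text{likely } \neg p) && \text{(you do not think you are likely to have got it wrong)}
\end{aligned}
\]

Nothing below hangs on the details of this regimentation; what does the work is the necessity of each condition.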

Truth’s Resilience: A Mansion on a Hill

The fake barns case centres on a case of acquiring knowledge by perception: getting to know that there is a barn on the hill by seeing that there is a barn on the hill. Or, more generally: getting to know that p by seeing that p. The issue of fake news centres on our capacity to acquire knowledge from testimony: getting to know that p by being told that p. Ordinarily, testimony, like perception, is a way of acquiring knowledge about the world: just as seeing that p is ordinarily a way of knowing that p, so too is being told that p. And like perception, this capacity for acquiring knowledge can be disrupted by fakery.

This is because the requirements on knowledge stated above are general requirements — they are not specific to the perceptual case. Applying these requirements to the issue of fake news then reveals the following.

You have to have got it right. From this it follows that there is no knowledge to be got from a fake news item. One cannot get to know that the Swiss spaghetti harvesters had a poor year in 1957, or that Randall Prince stumbled across the ballot boxes. If it is fake news that p, one cannot get to know that p, any more than one can get to know that there is a barn on the hill when the only thing on the hill is a fake. One can get to know other things: that Panorama said that such and such, or that the Christian Times Newspaper said that such and such. But one cannot get to know the content of what is said.

It cannot be that you are likely to have got it wrong. To see what follows from this, suppose that President Trump is correct and the mainstream news media is really the fake news media. On this supposition, most of the news items published by this news media are fake news items. The epistemic position of a consumer of news media is then parallel to Henry’s epistemic position in driving through fake barn country. Even if Henry is looking at a (genuine) barn on the hill, he is not in a position to know that there is a barn on the hill given that he is in fake barn country and, as such, is as likely wrong as right with respect to his belief that there is a barn on the hill.

Similarly, even if the news item that p is genuine and not fake, a news consumer is not in a position to get to know that p insofar as fakes abound and their belief that p is as likely to be wrong as right. This parallel assumes that the epistemic subject cannot tell real from fake. This supposition is built into the fake barn case: from the road Henry cannot discriminate real from fake barns. And it follows in the fake news case from the supposition that President Trump is correct in his aspersions.

That is, if it is really true that The New York Times and CNN are fake news media, as supposed, then this shows that the ordinary news consumer is wrong to discriminate between these news media and the Christian Times Newspaper, say. And it thereby shows that the ordinary news consumer possesses the same insensitivity to fake news items that Henry possesses to fake barns. So if President Trump is correct, there is no knowledge to be had from the mainstream news media. Of course, he is not correct: these are aspersions, not statements of fact. However, even aspersions can be epistemically undermining, as can be seen next.

You cannot think that you are likely to have got it wrong. Thus, in the fake barns case, if Henry believes that fake barns proliferate, he cannot know there is a barn on the hill on the basis of seeing one. The truth of Henry’s belief is immaterial to this conclusion. Now let ‘Trump’s supporters’ refer to those who accept Trump’s aspersions of the mainstream news media. Trump’s supporters thereby believe that mainstream news items concerning Trump are fake news items, and believe more generally that these news media are fake news media (at least when it comes to Trump-related news items).

It follows that a Trump supporter cannot acquire knowledge from the mainstream news media when the news is about Trump. And it also follows that Trump supporters are being quite epistemically reasonable in their rejection of mainstream news stories about Trump. (One might counter, ‘at least insofar as their starting point is epistemically reasonable’; but it will turn out below that an epistemological rationalization can be given of this starting point.)

Image by Sonja via Flickr / Creative Commons

Always Already Inescapably Trapped

Moreover, it is arguably not just the reasonableness of accepting mainstream news stories about Trump that is undermined, because Trump's aspersions insinuate the following skeptical argument. Suppose again that Trump's aspersions of the mainstream news media are correct, and call this the fake news hypothesis. Given the fake news hypothesis, it follows that we lack the capacity to discriminate fake news items from real news items. Given the fake news hypothesis combined with this discriminative incapacity, the mainstream news media is not a source of knowledge about Trump; that is, it is not a source of knowledge about Trump even when its news items are genuine and presented as such.

At this point, skeptical logic kicks in. To illustrate this, consider the skeptical hypothesis that one is a brain-in-a-vat. Were one a brain-in-a-vat, perception would not be a source of knowledge. So insofar as one thinks that perception is a source of knowledge, one needs a reason to reject the skeptical hypothesis. But any reason one ordinarily has, one lacks under the supposition that the skeptical hypothesis is true. Thus, merely entertaining the skeptical hypothesis as true threatens to dislodge one's claim to perceptual knowledge.
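The shape of this reasoning can be set out schematically, with $H$ for the skeptical hypothesis and $Kp$ for the threatened knowledge claim; the regimentation is rough, but it makes the dialectic explicit:

\[
\begin{aligned}
&(\mathrm{P1})\quad H \rightarrow \neg Kp\\
&(\mathrm{P2})\quad \text{a claim to } Kp \text{ requires a reason to reject } H\\
&(\mathrm{P3})\quad \text{on the supposition that } H\text{, any such reason is undermined}\\
&(\mathrm{C})\quad \text{merely entertaining } H \text{ as true dislodges the claim to } Kp
\end{aligned}
\]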

Similarly, the fake news hypothesis entails that the mainstream news media is not a source of knowledge about Trump. Since this conclusion is epistemically unpalatable, one needs a reason to reject the fake news hypothesis. Specifically, one needs a reason for thinking that one can discriminate real Trump-related news items from fake ones. But the reasons one ordinarily has for this judgement are undermined by the supposition that the fake news hypothesis is true.

Thus, merely entertaining this hypothesis as true threatens to dislodge one's claim to mainstream news-based knowledge about Trump. Three things follow. First, by the fake barns logic, Trump supporters' endorsement of the fake news hypothesis makes it reasonable for them to reject mainstream media claims about Trump; by this skeptical reasoning, that endorsement further supports a quite general epistemic distrust of the mainstream news media. (It is not just that the mainstream news media conveys #FakeNews; it is the #FakeNews Media.)

Second, through presenting the fake news hypothesis, Trump's aspersions of the mainstream media encourage us to entertain a hypothesis that insinuates a skeptical argument with this radical conclusion. And if any conclusion can be drawn from philosophical debate on skepticism, it is that skeptical reasoning is hard to refute once one is in the grip of it. Third, what is thereby threatened is both our capacity to acquire Trump-related knowledge that would ground political criticism, and our epistemic reliance on the institution that provides a platform for political criticism. Given these epistemic rewards, Trump's aspersions of the mainstream news media have a clear political motivation.

Aspersions on the Knowledge of the People

However, I’d like to end by considering their epistemic motivation. Aren’t groundless accusations of fakery straightforwardly epistemically unreasonable? Doesn’t the fake news hypothesis have as much to recommend it as the skeptical hypothesis that one is a brain-in-a-vat? That is, to say doesn’t it have very little to recommend it? Putting aside defences of the epistemic rationality of skepticism, the answer is still equivocal. From one perspective: yes, these declarations of fakery have little epistemic support.

This is the perspective of the enquirer. Supposing a given news item addresses the question of whether p, then where the news item declares p, Trump declares not-p. The epistemic credentials of these declarations then come down to which of them better tracks the evidence, and while each case would need to be considered individually, it is reasonable to speculate that the canons of mainstream journalism are epistemically superior.

However, from another perspective: no, these declarations of fakery are epistemically motivated. This is the perspective of the believer. For suppose that one is a Trump supporter, as Trump clearly is, and so believes the fake news hypothesis. Given this hypothesis, the truth of a mainstream news item about Trump is immaterial to the epistemic standing of a news consumer. Even if the news item is true, the news consumer can no more learn from it than Henry can get to know that there is a barn on the hill by looking at one.

But if the truth of a Trump-related news item is immaterial to the epistemic standing of a news consumer, then it seems that epistemically, when it comes to Trump-related news, the truth simply doesn’t matter. But to the extent that the truth doesn’t matter, there really is no distinction to be drawn between the mainstream media and the fake news media when it comes to Trump-related news items. Thus, there is a sense in which the fake news hypothesis is epistemically self-supporting.

Contact details: paul.faulkner@sheffield.ac.uk

References

Alston, William. 1985. "Concepts of Epistemic Justification". The Monist 68 (1): 57-89.

Goldman, Alvin. 1976. "Discrimination and Perceptual Knowledge". Journal of Philosophy 73: 771-791.

Goldman, Alvin. 1979. "What Is Justified Belief?". In Justification and Knowledge, edited by G. S. Pappas. Dordrecht: D. Reidel.

Johnson, Jenna, and David Weigel. 2017. "Trump supporters see a successful president — and are frustrated with critics who don't". The Washington Post. Available from http://wapo.st/2lkwi96.

Nozick, Robert. 1981. Philosophical Explanations. Cambridge, MA: Harvard University Press.

Peacocke, Christopher. 1986. Thoughts: An Essay on Content. Oxford: Basil Blackwell.

Shane, Scott. 2017. "From Headline to Photograph, a Fake News Masterpiece". The New York Times. Available from https://nyti.ms/2jyOcpR.

Sosa, Ernest. 2007. A Virtue Epistemology: Apt Belief and Reflective Knowledge, Volume 1. Oxford: Clarendon Press.

Zagzebski, Linda. 1994. "The Inescapability of Gettier Problems". The Philosophical Quarterly 44 (174): 65-73.

[1] See <https://twitter.com/realDonaldTrump>.

[2] See <http://news.bbc.co.uk/onthisday/hi/dates/stories/april/1/newsid_2819000/2819261.stm>.

Author information: Kjartan Koch Mikalsen, Norwegian University of Science and Technology, kjartan.mikalsen@ntnu.no.

Mikalsen, Kjartan Koch. “An Ideal Case for Accountability Mechanisms, the Unity of Epistemic and Democratic Concerns, and Skepticism About Moral Expertise.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 1-5.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3S2

Please refer to: Holst, Cathrine, and Anders Molander. "Public deliberation and the fact of expertise: making experts accountable." Social Epistemology 31, no. 3 (2017): 235-250.

Image from Birdman Photos via Flickr / Creative Commons

How do we square democracy with pervasive dependency on experts and expert arrangements? This is the basic question of Cathrine Holst and Anders Molander’s article “Public deliberation and the fact of expertise: making experts accountable.” Holst and Molander approach the question as a challenge internal to a democratic political order. Their concern is not whether expert rule might be an alternative to democratic government.

Rather than ask if the existence of expertise raises an “epistocratic challenge” to democracy, they “ask how science could be integrated into politics in a way that is consistent with democratic requirements as well as epistemic standards” (236).[1] Given commitment to a normative conception of deliberative democracy, what qualifies as a legitimate expert arrangement?

Against the backdrop of epistemic asymmetry between experts and laypersons, Holst and Molander present this question as a problem of accountability. When experts play a political role, we need to ensure that they really are experts and that they practice their expert role properly. I believe this is a compelling challenge, not least in view of expert disagreement and contestation. In a context where we lack sufficient knowledge and training to assess directly the reasoning behind contested advice, we face a non-trivial problem of deciding which expert to trust. I also agree that the problem calls for institutional measures.

However, I do not think such measures simply answer to a non-ideal problem related to untrustworthy experts. The need for institutionalized accountability mechanisms runs deeper. Nor am I convinced by the idea that introducing such measures involves balancing “the potential rewards from expertise against potential deliberative costs” (236). Finally, I find it problematic to place moral expertise side-by-side with scientific expertise in the way Holst and Molander do.

Accountability Mechanisms: More than Non-ideal Remedies

To meet the challenge of epistemic asymmetry combined with expert disagreement, Holst and Molander propose three sets of institutional mechanisms for scrutinizing the work of expert bodies (242-43). First, in order to secure compliance with basic epistemic norms, they propose laws and guidelines that specify investigation procedures in some detail, procedures for reviewing expert performance and for excluding experts with a bad track record, as well as sanctions against sloppy work.

Second, in order to review expert judgements, they propose checks in the form of fora comprising peers, experts in other fields, bureaucrats and stakeholders, legislators, or the public sphere. Third, in order to assure that expert groups work under good conditions for inquiry and judgment, they propose organizing the work of such groups in a way that fosters cognitive diversity.

According to Holst and Molander, these measures have a remedial function. Their purpose is to counter the misbehavior of non-ideal experts, that is, experts whose behavior and judgements are biased or influenced by private interests. The measures concern unreasonable disagreement rooted in experts’ over-confidence or partiality, as opposed to reasonable disagreement rooted in “burdens of judgement” (Rawls 1993, 54). By targeting objectionable conduct and reasoning, they reduce the risk of fallacies and the “intrusion of non-epistemic interests and preferences” (242). In this way, they increase the trustworthiness of experts.

As I see it, this attributes too limited a role to the proposed accountability mechanisms. While they might certainly work in the way Holst and Molander suggest, it is doubtful whether they would be superfluous if all experts were ideal experts without biases or conflicting interests.

Even ideal experts are fallible and have partial perspectives on reality. The ideal expert is not omniscient, but a finite being who perceives the world from a certain perspective, depending on a range of contingent factors, such as training in a particular scientific field, basic theoretical assumptions, methodological ideals, subjective expectations, and so on. The ideal expert is aware that she is fallible and that her own point of view is just one among many others. We might therefore expect that she does not easily become a victim of overconfidence or confirmation bias. Yet, given the unavoidable limits of an individual’s knowledge and intellectual capacity, no expert can know what the world looks like from all other perspectives and no expert can be safe from misjudgments.

Accordingly, subjecting expert judgements to review and organizing diverse expert groups is important no matter how ideal the expert. There seems to be no other way to test the soundness of expert opinions than to check them against the judgements of other experts, other forms of expertise, or the public at large. Similarly, organizing diverse expert groups seems like a sensible way of bringing out all relevant facts about an issue even in the case of ideal experts. We do not have to suspect anyone of bias or pursuance of self-serving interests in order to justify these kinds of institutional measures.

Image by Birdman Photos via Flickr / Creative Commons

No Trade-off Between Democratic and Epistemic Concerns

An important aspect of Holst and Molander's discussion of how to make experts accountable is the idea that we need to balance the epistemic value of expert arrangements against democratic concerns about inclusive deliberation. While they point out that the mechanisms for holding experts to account can democratize expertise in ways that lead to epistemic enrichment, they also warn that inclusion of lay testimony or knowledge "can result in undue and disproportional consideration of arguments that are irrelevant, obviously invalid or fleshed out more precisely in expert contributions" (244).

There is of course always the danger that things go wrong, and that the wrong voices win through. Yet, the question is whether this risk forces us to make trade-offs between epistemic soundness and democratic participation. Holst and Molander quote Stephen Turner (2003, 5) on the supposed dilemma that “something has to give: either the idea of government by generally intelligible discussion, or the idea that there is genuine knowledge that is known to few, but not generally intelligible” (236). To my mind, this formulation rests on an ideal picture of public deliberation that is not only excessively demanding, but also normatively problematic.

It is a mistake to assume that political deliberation cannot include “esoteric” expert knowledge if it is to be inclusive and open to everyone. If democracy is rule by public discussion, then every citizen should have an equal chance to contribute to political deliberation and will-formation, but this is not to say that all aspects of every contribution should be comprehensible to everyone. Integration of expert opinions based on knowledge fully accessible only to a few does not clash with democratic ideals of equal respect and inclusion of all voices.

Because of specialization and differentiation, all experts are laypersons with respect to many areas where others are experts. Disregarding individual variation of minor importance, we are all equals in ignorance, lacking sufficient knowledge and training to assess the relevant evidence in most fields.[2] Besides, and more fundamentally, deferring to expert advice in a political context does not imply some form of political status hierarchy between persons.

To acknowledge expert judgments as authoritative in an epistemic sense is simply to acknowledge that there is evidence supporting certain views, and that this evidence is accessible to everyone who has time and skill to investigate the matter. For this reason, it is unclear how the observation that political expert arrangements do not always harmonize with democratic ideals warrants talk of a need for trade-offs or a balancing of diverging concerns. In principle, there seems to be no reason why there has to be divergence between epistemic and democratic concerns.

To put the point even more sharply, I would like to suggest that allowing alleged democratic concerns to trump sound expert advice is democratic in name only. With Jacob Weinrib (2016, 57-65), I consider democratic lawmaking essential to a just legal system, because all non-democratic forms of legislation are defective arrangements that arbitrarily exclude someone from contributing to the enactment of the laws that regulate their interaction with others. Yet an inclusive legislative procedure that disregards the best available reasons is hardly a case of democratic self-legislation.

It is more like raving blind drunk. Legislators that ignore state-of-the-art knowledge are not only deeply irrational, but also disrespectful of those bound by the laws that they enact. Need I mention the climate crisis? Understanding democracy as a process of discursive rationalization (Habermas 1996), the question is not what trade-offs we have to make, but how inclusive legislative procedures can be made sufficiently truth sensitive (Christiano 2012). We can only approximate a defensible democratic order by making democratic and epistemic concerns pull in the same direction.

Moral vs Scientific and Technical Expertise

Before introducing the accountability problem, Holst and Molander consider two ideal objections against giving experts an important political role: ‘(1) that one cannot know decisively who the knowers or experts are’ and ‘(2) that all political decisions have moral dimensions and that there is no moral expertise’ (237). They reject both objections. With respect to (1), they convincingly argue that there are indirect ways of identifying experts without oneself being an expert. With respect to (2), they pursue two strategies.

First, they argue that even if facts and values are intertwined in policy-making, descriptive and normative aspects of an issue are still distinguishable. Second, they argue that unless strong moral non-cognitivism is correct, it is possible to speak of moral expertise in the form of ‘competence to state and clarify moral questions and to provide justified answers’ (241). To my mind, the first of these two strategies is promising, whereas the second seems to play down important differences between distinct forms of expertise.

There are of course various types of democratic expert arrangements. Sometimes experts are embedded in public bodies making collectively binding decisions. On other occasions, experts serve an advisory function. Holst and Molander tend to use "expertise" and "expert" as unspecified, generic terms, and they refer to both categories side-by-side (235, 237). However, by framing their argument as one concerning epistemic asymmetry and the novice/expert-problem, they indicate that they have in mind moral experts in advisory capacities, persons in possession of insights known to a few, yet of importance for political decision-making.

I agree that some people are better informed about moral theory and more skilled in moral argumentation than others are, but such expertise still seems different in kind from technical expertise or expertise within empirical sciences. Although moral experts, like other experts, provide action-guiding advice, their public role is not analogous to the public role of technical or scientific experts.

For the public, the value of scientific and technical expertise lies in information about empirical restraints and the (lack of) effectiveness of alternative solutions to problems. If someone is an expert in good standing within a certain field, then it is reasonable to regard her claims related to this field as authoritative, and to consider them when making political decisions. As argued in the previous section, it would be disrespectful and contrary to basic democratic norms to ignore or bracket such claims, even if one does not fully grasp the evidence and reasoning supporting them.

Things look quite different when it comes to moral expertise. While there can be good reasons for paying attention to what specialists in moral theory and practical reasoning have to say, we rarely, if ever, accept their claims about justified norms, values and ends as authoritative or valid without considering the reasoning supporting the claims, and rightly so. Unlike Holst and Molander, I do not think we should accept the arguments of moral experts as defined here simply based on indirect evidence that they are trustworthy (cf. 241).

For one thing, the value of moral expertise seems to lie in the practical reasoning itself just as much as in the moral ideals underpinned by reasons. An important part of what the moral expert has to offer is thoroughly worked out arguments worth considering before making a decision on an issue. However, an argument is not something we can take at face value, because an argument is of value to us only insofar as we think it through ourselves. Moreover, the appeal to moral cognitivism is of limited value for elevating someone to the status of moral expert. Even if we might reach agreement on basic principles to govern society, there will still be reasonable disagreement as to how we should translate the principles into general rules and how we should apply the rules to particular cases.

Accordingly, we should not expect acceptance of the conclusions of moral experts in the same way we should expect acceptance of the conclusions of scientific and technical expertise. To the contrary, we should scrutinize such conclusions critically and try to make up our own mind. This is, after all, more in line with the enlightenment motto at the core of modern democracy, understood as government by discussion: “Have courage to make use of your own understanding!” (Kant 1996 [1784], 17).

Contact details: kjartan.mikalsen@ntnu.no

References

Christiano, Thomas. “Rational Deliberation among Experts and Citizens.” In Deliberative Systems: Deliberative Democracy at the Large Scale, ed. John Parkinson and Jane Mansbridge. Cambridge: Cambridge University Press, 2012.

Habermas, Jürgen. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, trans. William Rehg. Cambridge, MA: MIT Press, 1996.

Holst, Cathrine, and Anders Molander. “Public deliberation and the fact of expertise: making experts accountable.” Social Epistemology 31, no. 3 (2017): 235-250.

Kant, Immanuel. Practical Philosophy, ed. Mary Gregor. Cambridge: Cambridge University Press, 1996.

Kant, Immanuel. Anthropology, History, and Education, ed. Günther Zöller and Robert B. Louden. Cambridge: Cambridge University Press, 2007.

Rawls, John. Political Liberalism. New York: Columbia University Press, 1993.

Turner, Stephen. Liberal Democracy 3.0: Civil Society in an Age of Experts. London: Sage Publications Ltd, 2003.

Weinrib, Jacob. Dimensions of Dignity. Cambridge: Cambridge University Press, 2016.

[1] All bracketed numbers without reference to author in the main text refer to Holst and Molander (2017).

[2] This also seems to be Kant’s point when he writes that human predispositions for the use of reason “develop completely only in the species, but not in the individual” (2007 [1784], 109).