
Author Information: Fabien Medvecky, University of Otago, fabien.medvecky@otago.ac.nz.

Medvecky, Fabien. “Institutionalised Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 2 (2019): 15-20.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-46m


This article responds to Matheson, Jonathan, and Valerie Joly Chock. “Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 1-9.

In a recent paper, I argued that science communication, the “umbrella term for the research into and the practice of increasing public understanding of and public engagement with science”, is epistemically unjust (Medvecky, 2017). Matheson and Chock disagree. Or at least, they disagree with enough of the argument to conclude that “while thought provoking and bold, Medvecky’s argument should be resisted” (Matheson & Chock, 2019). This has provided me with an opportunity to revisit some of my claims, and more importantly, to make explicit those claims that I had failed to make clear and present in the original paper. That’s what this note will do.

Matheson and Chock’s concern with the original argument is two-fold. Firstly, they argue that the original argument sinned by overreaching, and secondly, that while there might be credibility excess, such excess should not be viewed as constituting injustice. I’ll begin by outlining my original argument before tackling each of their complaints.

The Original Argument For the Epistemic Injustice of Science Communication

Taking Matheson and Chock’s formal presentation of the original argument, it runs as follows:

1. Science is not a unique and privileged field (this isn’t quite right. See below for clarification)

2. If (1), then science communication creates a credibility excess for science.

3. Science communication creates a credibility excess for science.

4. If (3), then science communication is epistemically unjust.

5. Science communication is epistemically unjust.

The original argument claimed that science is privileged in that its communication is institutionalised through policy and practices in a way not granted to other fields, and that fundamentally,

While there are many well-argued reasons for communicating, popularizing, and engaging with science, these are not necessarily reasons for communicating, popularizing, and engaging only with science. Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialized treatment. This uniqueness creates a credibility excess for science as a field. (italics added)

Two clarificatory points are important here. Firstly, while Matheson and Chock run with premise 1, they do express some reservation. And so would I if this were the way I’d spelled it out. But I never suggested that there is nothing unique about science. There undoubtedly is, usually expressed in terms of producing especially reliable knowledge (Nowotny, 2003; Rudolph, 2014).

My original argument was that this isn’t necessarily enough to warrant special treatment when it comes to communication. As I stated then, “What we need is a reason for why reliable knowledge ought to be communicated. Why would some highly reliable information about the reproductive habits of a squid be more important to communicate to the public than (possibly less reliable) information about the structure of interest rates or the cultural habits of Sufis?” (italics added)

In the original paper, I explicitly claimed, “We might be able to show that science is unique, but that uniqueness does not relate to communicative needs. Conversely, we can provide reasons for communicating science, but these are not unique to science.” (Medvecky, 2017)

Secondly, as noted by Matheson and Chock, the concern in the original argument revolves around “institutionalized science communication; institutionalized in government policies on the public understanding of and public engagement with the sciences; in the growing numbers of academic journals and departments committed to further the enterprise through research and teaching; in requirements set by funding bodies; and in the growing numbers of associations clustering under the umbrella of science communication across the globe.”

What maybe wasn’t made explicit was the role and importance of this institutionalisation, which is directed by government strategies and associated funding policies. Such policies are designed specifically and uniquely to increase public communication of and public engagement with science (MBIE, 2014).

Such policies may mention that science should be read broadly. The UK’s A vision for Science and Society (DIUS, 2008), for instance, states: “By science we mean all-encompassing knowledge based on scholarship and research undertaken in the physical, biological, engineering, medical, natural and social disciplines, including the arts and humanities”. Yet the same policy also claims that “These activities will deliver a coherent approach to increasing STEM skills, with a focus on improved understanding of the link between labour market needs and business demands for STEM skills and the ability of the education system to deliver flexibly into the 21st century.”

STEM (science, technology, engineering and mathematics) is explicitly not a broad view of science; it’s specifically restricted to the bio-physical sciences and associated fields. If science were truly meant broadly, there’d be no need to specify STEM. These policies, including their funding and support, are uniquely aimed at science as found in STEM, and it is this form of institutionalized and institutionally sponsored science communication that is the target of my argument.

With these two points in mind, let me turn to Matheson and Chock’s objections.

The Problem of Overreaching and the Marketplace of Ideas

Matheson and Chock rightly spell out my view when stating that the “fundamental concern is that science communication represents scientific questions and knowledge as more valuable than questions and knowledge in other domains.” What they mistake is what I take issue with. Matheson and Chock claim, “When it comes to scientific matters, we should trust the scientists more. So, the claim cannot be that non-scientists should be afforded the same amount of credibility on scientific matters as scientists”. Of course, who wouldn’t agree with that!

For Matheson and Chock, given their assumption that science communication is equivalent to scientists communicating their science, it follows that it is only reasonable to give special attention to the subject or field one is involved in. As they say,

Suppose that a bakery only sells and distributes baked goods. If there is nothing unique and privileged about baked goods – if there are other equally important goods out there (the parallel of premise (1)) – then Medvecky’s reasoning would have it that the bakery is guilty of a kind of injustice by virtue of not being in the business of distributing those other (equally valuable) goods.

But they’re mistakenly equating science communication with communication by scientists about their science. This suggests both a misunderstanding of my argument and a skewed view of what science communication is.

To tackle the latter first, while some science communication efforts come from scientists, science communication is much broader. Science communication is equally carried out by (non-scientist) journalists, (non-scientist) PR and communication officers, (non-scientist) policy makers, etc. Indeed, some of the most popular science communicators aren’t scientists at all, such as Bill Bryson. So the concern is not with the bakery privileging baked goods, it’s with baked goods being privileged simpliciter.

As discussed in both my original argument and in Matheson and Chock’s reply, my concern revolves around science communication institutionalized through policies and the like. And that’s where the issue is: there is institutionalised science communication, backed by policies with significant funding dedicated to such communication, and such policies exist only for the sciences. Indeed, there are no governmental “humanities communication” policies or funding strategies, for example. Science communication, unlike Matheson and Chock’s idealised bakery, doesn’t operate in anything like a free market.

Let’s take the bakery analogy and its position in a marketplace a little further (indeed, thinking of science communication and where it sits in the marketplace of knowledge fits well). My argument is not that a bakery is being unjust by selling only baked goods.

My argument is that if bakeries were the only stores to receive government subsidies and tax breaks, and were, through government and institutional intervention, granted a significantly better position in the street, then yes, this is unfair. Other goods would fail to have the same level of traction as baked goods and would be unable to compete on a just footing. This is not to say that the bakeries need to sell other goods, but rather that, by benefiting from the unique subsidies, baked goods gain a marketplace advantage over goods in other domains, in the same way that scientific knowledge benefits from a credibility excess (i.e., an epistemic marketplace advantage) over knowledge in other domains.

Credibility Excess and Systemic Injustices

The second main objection raised by Matheson and Chock turns on whether any credibility excess science might acquire in this way should be considered an injustice. They rightly point out that “mere epistemic errors in credibility assessments, however, do not create epistemic injustice. While a credibility excess may result in an epistemic harm, whether this is a case of epistemic injustice depends upon the reason why that credibility excess is given.”

Specifically, Matheson and Chock argue that for credibility excess to lead to injustice, this must be systemic and carry across contexts. And according to them, science communication is guilty of no such trespass (or, at the very least, my original argument fails to make the case for such).

Again, I think this comes down to how science communication is viewed. Thinking of science communication in institutionalised ways, as I intended, is indeed systemic. What Matheson and Chock have made clear is that in my original argument, I didn’t articulate clearly enough just how deep the institutionalisation of science communication runs, and how fundamentally this institutionalisation is linked with assumptions of the epistemic dominance of science. I’ll take this opportunity to provide some examples of this.

Most obviously, there are nationally funded policies that aim “to develop a culture where the sciences are recognised as relevant to everyday life and where the government, business, and academic and public institutions work together with the sciences to provide a coherent approach to communicating science and its benefits”; policies backed by multi-million dollar investments from governments (DIISRTE, 2009).

Importantly, there is no equivalent for other fields. Yes, there are funds for other fields (funds for research, funds for art, etc.), but not funds specifically for communicating these or disseminating their findings. And there are other markers of the systemic advantages science holds over other fields.

On a very practical, pecuniary level, funding for research is rarely on a level playing field. In New Zealand, for example, the government’s Research Degree Completion Funding allocates funds to departments upon students’ successfully completing their theses. This scheme grants twice as much to the sciences as it does to the social sciences, humanities, and law (Tertiary Education Commission, 2016).

In practice, this means a biology department supervising a PhD thesis on citizen science in conservation would, on thesis completion, receive twice the funding that a sociology department supervising the very same thesis would receive. And this is simply because one field delivers knowledge under the tag of science, while the other delivers it under the banner of the humanities.

At a political level, the dominance of scientific knowledge is also evident. While most countries have a Science Advisor to the President or a Chief Science Advisor to the Prime Minister, there is no equivalent “Chief Humanities Advisor”. And the list of discrepancies goes on, with institutionalised science communication a key player. Of course, for each of these examples of where science and scientific knowledge benefit over other fields, some argument could be made for why this or that case does indeed require that science be treated differently.

But this is exactly why the credibility excess science benefits from is epistemically unjust: because it’s not simply ‘a case here to be explained’ and ‘a case there to be explained’. It’s systemic and carries across contexts. And science communication, by being the only institutionalised communication of a specific knowledge field, maintains, amplifies, and reinforces this epistemic injustice.

Conclusion

When I argued that science communication was epistemically unjust, my claim was directed at institutionalised science communication, with all its trimmings. I’m grateful to Matheson and Chock for inviting me to re-read my original paper and see where I may have failed to be clear, and to think more deeply about what motivated my thinking.

I want to close on one last point Matheson and Chock brought up. They claimed that it would be unreasonable to expect science communicators to communicate other fields. This was partially in response to my original paper, where I did suggest that we should move beyond science communication to something like ‘knowledge communication’ (though I’m not sure exactly what that term should be, and I’m not convinced ‘knowledge communication’ is ideal either).

Here, I agree with Matheson and Chock that it would be silly to expect those with expertise in science to be obliged to communicate more broadly about fields beyond their expertise (though some of them do). The obvious answer might be to have multiple branches of communication institutionalised and equally supported by government funding, by advisors, etc: science communication; humanities communication; arts communication; etc. And I did consider this in the original paper.

But the stumbling block is scarce resources, both financial and epistemic. Financially, there is a limit to how much governments would be willing to fund such activities, so having multiple branches of communication would become a deeply political ‘pot-splitting’ issue, and there, the level of injustice might be even more explicit. Epistemically, there is only so much knowledge that we, humans, can process. Simply multiplying the communication of knowledge for the sake of justice (or whatever it is that ‘science communication’ aims to communicate) may not, in the end, be particularly useful without some concerted and coordinated view as to what the purpose of all this communication is.

In light of this, there is an important question for us in social epistemology: as a society funding and participating in knowledge-distribution, which knowledge should we focus our ‘public-making’ and communication efforts on, and why? Institutionalised science communication initiatives assume that scientific knowledge should hold a special, privileged place in public communication. Perhaps this is right, but not simply on the grounds that “science is more reliable”. There needs to be a better reason. Without one, it’s simply unjust.

Contact details: fabien.medvecky@otago.ac.nz

References

Tertiary Education Commission. (2016). Performance-Based Research Fund (PBRF) User Manual. Wellington, New Zealand: Tertiary Education Commission.

DIISRTE. (2009). Inspiring Australia: A national strategy for engagement with the sciences.  Canberra: Commonwealth of Australia.

DIUS. (2008). A vision for Science and Society: A consultation on developing a new strategy for the UK. London: Department for Innovation, Universities and Skills.

Matheson, J., & Chock, V. J. (2019). Science Communication and Epistemic Injustice. Social Epistemology Review and Reply Collective, 8(1), 1-9.

MBIE. (2014). A Nation of Curious Minds: A national strategic plan for science in society.  Wellington: New Zealand Government.

Medvecky, F. (2017). Fairness in Knowing: Science Communication and Epistemic Justice. Science and Engineering Ethics. doi: 10.1007/s11948-017-9977-0

Nowotny, H. (2003). Democratising expertise and socially robust knowledge. Science and Public Policy, 30(3), 151-156. doi: 10.3152/147154303781780461

Rudolph, J. L. (2014). Why Understanding Science Matters: The IES Research Guidelines as a Case in Point. Educational Researcher, 43(1), 15-18. doi: 10.3102/0013189x13520292

Author Information: Valerie Joly Chock & Jonathan Matheson, University of North Florida, n01051115@ospreys.unf.edu & j.matheson@unf.edu.

Matheson, Jonathan, and Valerie Joly Chock. “Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 1-9.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44H


Epistemic injustice occurs when someone is wronged in their capacity as a knower.[1] More and more attention is being paid to the epistemic injustices that exist in our scientific practices. In a recent paper, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. In what follows we briefly explain his argument before raising several challenges to it.

Overview

In “Fairness in Knowing: Science Communication and Epistemic Injustice”, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. First, let’s get clear on the target. According to Medvecky, science communication is in the business of distributing knowledge – scientific knowledge.

As Medvecky uses the term, ‘science communication’ is an “umbrella term for the research into and the practice of increasing public understanding of and public engagement with science.” (1394) Science communication is thus both a field and a practice, and consists of:

institutionalized science communication; institutionalized in government policies on the public understanding of and public engagement with the sciences; in the growing numbers of academic journals and departments committed to further the enterprise through research and teaching; in requirements set by funding bodies; and in the growing numbers of associations clustering under the umbrella of science communication across the globe. (1395)

Science communication involves the distribution of scientific knowledge from experts to non-experts, so science communication is in the distribution game. As such, Medvecky claims that issues of fair and just distribution arise. According to Medvecky, these issues concern both what knowledge is dispersed, as well as who it is dispersed to.

In examining the fairness of science communication, Medvecky connects his discussion to the literature on epistemic injustice (Anderson, Fricker, Medina). While exploring epistemic injustices in science is not novel, Medvecky’s focus on science communication is. To argue that science communication is epistemically unjust, Medvecky relies on Medina’s (2011) claim that credibility excesses can result in epistemic injustice. Here is José Medina,

[b]y assigning a level of credibility that is not proportionate to the epistemic credentials shown by the speaker, the excessive attribution does a disservice to everybody involved: to the speaker by letting him get away with things; and to everybody else by leaving out of the interaction a crucial aspect of the process of knowledge acquisition: namely, opposing critical resistance and not giving credibility or epistemic authority that has not been earned. (18-19)

Since credibility is comparative, credibility excesses given to members of some group can create epistemic injustice, testimonial injustice in particular, toward members of other groups. Medvecky makes the connection to science communication as follows:

While there are many well-argued reasons for communicating, popularizing, and engaging with science, these are not necessarily reasons for communicating, popularizing, and engaging only with science. Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialized treatment.

This uniqueness creates a credibility excess for science as a field. And since science communication creates credibility excess by implying that concerted efforts to communicate non-science disciplines as fields of reliable knowledge is not needed, then science communication, as a practice and as a discipline, is epistemically unjust. (1400)

While the principal target here is the field of science communication, any credibility excesses enjoyed by the field will trickle down to the practitioners within it. If science is being given a credibility excess, then those engaged in scientific practice and communication are also receiving such a comparative advantage over non-scientists.

So, according to Medvecky, science communication is epistemically unjust to knowers – knowers in non-scientific fields. Since these non-scientific knowers are given a comparative credibility deficit (in contrast to scientific knowers), they are wronged in their capacity as knowers.

The Argument

Medvecky’s argument can be formally put as follows:

  1. Science is not a unique and privileged field.
  2. If (1), then science communication creates a credibility excess for science.
  3. Science communication creates a credibility excess for science.
  4. If (3), then science communication is epistemically unjust.
  5. Science communication is epistemically unjust.

Premise (1) is motivated by claiming that there are fields other than science that are equally important to communicate, popularize, and have non-specialists engage with. Medvecky claims that not only does non-scientific knowledge exist, such knowledge can be just as reliable as scientific knowledge, just as important to our lives, and just as in need of translation into layman’s terms. So, while scientific knowledge is surely important, it is not alone in this claim.

Premise (2) is motivated by claiming that science communication falsely represents science as a unique and privileged field since the concerns of science communication lie solely within the domain of science. By only communicating scientific knowledge, and failing to note that there are other worthy domains of knowledge, science communication falsely presents itself as a privileged field.

As Medvecky puts it, “Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialised treatment.” (1400) So, science communication falsely represents science as special. Falsely representing a field as special in contrast to other fields creates a comparative credibility excess for that field and the members of it.

So, science communication implies that other fields are not as worthy of such engagement by falsely treating science as a unique and privileged field. This gives science and scientists a comparative credibility excess to these other disciplines and their practitioners.

(3) follows validly from (1) and (2). If (1) and (2) are true, science communication creates a credibility excess for science.

Premise (4) is motivated by Medina’s (2011) work on epistemic injustice. Epistemic injustice occurs when someone is harmed in their capacity as a knower. While Fricker limited epistemic injustice (and testimonial justice in particular) to cases where someone was given a credibility deficit, Medina has forcefully argued that credibility excesses are equally problematic since credibility assessments are often comparative.

Given the comparative nature of credibility assessments, parties can be epistemically harmed even if they are not given a credibility deficit. If other parties are given credibility excesses, a similar epistemic harm can be brought about due to comparative assessments of credibility. So, if science communication gives science a credibility excess, science communication will be epistemically unjust.

(5) follows validly from (3) and (4). If (3) and (4) are true, science communication is epistemically unjust.

The Problems

While Medvecky’s argument is provocative, we believe that it is also problematic. In what follows we motivate a series of objections to his argument. Our focus here will be on the premises that most directly relate to epistemic injustice. So, for our purposes, we are willing to grant premise (1). Even granting (1), there are significant problems with both (2) and (4). Highlighting these issues will be our focus.

We begin with our principal concerns regarding (2). These concerns are best seen by first granting that (1) is true – granting that science is not a unique and privileged field. Even granting that (1) is true, science communication would not create a credibility excess. First, it is important to try and locate the source of the alleged credibility excess. Science communicators do deserve a higher degree of credibility in distributing scientific knowledge than non-scientists. When it comes to scientific matters, we should trust the scientists more. So, the claim cannot be that non-scientists should be afforded the same amount of credibility on scientific matters as scientists.

The problem might be thought to be that scientists enjoy a credibility excess in virtue of their scientific credibility somehow carrying over to non-scientific fields where they are less credible. While Medvecky does briefly consider such an issue, this too is not his primary concern in this paper.[2] Medvecky’s fundamental concern is that science communication represents scientific questions and knowledge as more valuable than questions and knowledge in other domains. According to Medvecky, science communication does this by only distributing scientific knowledge when this is not unique and privileged (premise (1)).

But do you represent a domain as more important or valuable just because you don’t talk about other domains? Perhaps an individual who only discussed science in every context would imply that scientific information is the only information worth communicating, but such a situation is quite different than the one we are considering.

For one thing, science communication occurs within a given context, not across all contexts. Further, since that context is expressly about communicating science, it is hard to see how one could reasonably infer that knowledge in other domains is less valuable. Let’s consider an analogy.

Philosophy professors tend to only talk about philosophy during class (or at least let’s suppose). Should students in a philosophy class conclude that other domains of knowledge are less valuable since the philosophy professor hasn’t talked about developments in economics, history, biology, and so forth during class? Given that the professor is only talking about philosophy in one given context, and this context is expressly about communicating philosophy, such inferences would be unreasonable.

A Problem of Overreach

We can further see that there is an issue with (2) because it both overgeneralizes and is overly demanding. Let’s consider these in turn. If (2) is true, then the problem of creating credibility excesses is not unique to science communication. When it comes to knowledge distribution, science communication is far from the only practice/field to have a narrow and limited focus regarding which knowledge it distributes.

So, if there are multiple fields worthy of such engagement (granting (1)), any practice/field that is not concerned with distributing all such knowledge will be guilty of generating a similar credibility excess (or at least trying to). For instance, the American Philosophical Association (APA) is concerned with distributing philosophical knowledge and knowledge related to the discipline of philosophy. They exclusively fund endeavors related to philosophy and public initiatives with a philosophical focus. If doing so is sufficient for creating a credibility excess, given that other fields are equally worthy of such attention, then the APA is creating a credibility excess for the discipline of philosophy. This doesn’t seem right.

Alternatively, consider a local newspaper. This paper is focused on distributing knowledge about local issues. Suppose that it also is involved in the community, both sponsoring local events and initiatives that make the local news more engaging. Supposing that there is nothing unique or privileged about this town, Medvecky’s argument for (2) would have us believe that the paper is creating a credibility excess for the issues of this town. This too is the wrong result.

This overgeneralization problem can also be seen by considering a practical analogy. Suppose that a bakery only sells and distributes baked goods. If there is nothing unique and privileged about baked goods – if there are other equally important goods out there (the parallel of premise (1)) – then Medvecky’s reasoning would have it that the bakery is guilty of a kind of injustice by virtue of not being in the business of distributing those other (equally valuable) goods.

The problem is that omissions in distribution don’t have the implications that Medvecky supposes. The fact that an individual or group is not in the business of distributing some kind of good does not imply that those goods are less valuable.

There are numerous legitimate reasons why one may employ limitations regarding which goods one chooses to distribute, and these limitations do not imply that the other goods are somehow less valuable. Returning to the good of knowledge, focusing on distributing some knowledge (while not distributing other knowledge), does not imply that the other knowledge is less valuable.

This overgeneralization problem leads to an overdemanding problem with (2). The overdemanding problem concerns what all would be required of distributors (whether of knowledge or more tangible goods) in order to avoid committing injustice. If omissions in distribution had the implications that Medvecky supposes, then distributors, in order to avoid injustice, would have to refrain from limiting the goods they distribute.

If (2) is true, then science communication must fairly and equally distribute all knowledge in order to avoid injustice. And, as the problem of creating credibility excesses is not unique to science communication, this would apply to all other fields that involve knowledge distribution as well. The problem here is that avoiding injustice requires far too much of distributors.

An Analogy to Understand Avoiding Injustice

Let’s consider the practical analogy again to see how avoiding injustice is overdemanding. To avoid injustice, the bakery must sell and distribute much more than just baked goods. It must sell and distribute all the other goods that are as equally important as the baked ones it offers. The bakery would, then, have to become a supermarket or perhaps even a superstore in order to avoid injustice.

Requiring the bakery to offer a lot more than baked goods is not only overly demanding but also unfair. The bakery does not carry the other goods it is required to offer in order to avoid injustice. It may not even have the means needed to get these goods, which may itself be part of its reason for limiting the goods it offers.

As it is overdemanding and unfair to require the bakery to sell and distribute all goods in order to avoid injustice, it is overdemanding and unfair to require knowledge distributors to distribute all knowledge. Just as the bakery does not have non-baked goods to offer, those involved in science communication likely do not have the relevant knowledge in the other fields.

Thus, if they are required to distribute that knowledge as well, they are required to do a great deal of homework. They would have to learn about everything in order to justly distribute all knowledge. This is an unreasonable expectation. Even if they were able to do so, they would not be able to distribute all knowledge in a timely manner. Requiring this much of distributors would slow down the distribution of knowledge.

Furthermore, just as the bakery may not have the means needed to distribute all the other goods, distributors may not have the time or other means to distribute all the knowledge that they are required to distribute in order to avoid injustice. It is reasonable to utilize an epistemic division of labor (including in knowledge distribution), much like there are divisions of labor more generally.

Credibility Excess

A final issue with Medvecky’s argument concerns premise (4). Premise (4) claims that the credibility excess in question results in epistemic injustice. While it is true that a credibility excess can result in epistemic injustice, it need not. So, we need reasons to believe that this particular kind of credibility excess results in epistemic injustice. One reason to think that it does not has to do with the meaning of the term ‘epistemic injustice’ itself.

As it was introduced to the literature by Fricker, and as it has been used since, ‘epistemic injustice’ does not simply refer to any harms to a knower but rather to a particular kind of harm that involves identity prejudice—i.e. prejudice related to one’s social identity. Fricker claims that, “the speaker sustains a testimonial injustice if and only if she receives a credibility deficit owing to identity prejudice in the hearer.” (28)

At the core of both Fricker’s and Medina’s accounts of epistemic injustice is the relation between unfair credibility assessments and prejudices that distort the hearer’s perception of the speaker’s credibility. Prejudices about particular groups are what unfairly affect (positively or negatively) the epistemic authority and credibility hearers grant to the members of such groups.

Mere epistemic errors in credibility assessments, however, do not create epistemic injustice. While a credibility excess may result in an epistemic harm, whether this is a case of epistemic injustice depends upon the reason why that credibility excess is given. Fricker and Medina both argue that in order for an epistemic harm to be an instance of epistemic injustice, it must be systematic. That is, the epistemic harm must be connected to an identity prejudice that renders the subject at the receiving end of the harm susceptible to other types of injustices besides testimonial.

Fricker argues that epistemic injustice is the product of prejudices that “track” the subject through different dimensions of social activity (e.g. economic, professional, political, religious, etc.). She calls these “tracker prejudices” (27). When tracker prejudices lead to epistemic injustice, this injustice is systematic because it is systematically connected to other kinds of injustice.

Thus, a prejudice is systematic when it persistently affects the subject’s credibility in various social directions. Medina accepts this and argues that credibility excess results in epistemic injustice when it is caused by a pattern of wrongful differential treatment that stems in part from mismatches between reality and the social imaginary, which he defines as the collectively shared pool of information that provides the social perceptions against which people assess each other’s credibility (Medina 2011).

He claims that a prejudiced social imaginary is what establishes and sustains epistemic injustices. As such, prejudices are crucial in determining whether credibility excesses result in epistemic injustice. If the credibility excess stems from a systematically prejudiced social imaginary, then it results in epistemic injustice. If systematic prejudices are absent, then, even if there is credibility excess, there is no epistemic injustice.

Systemic Prejudice

For there to be epistemic injustice, then, the credibility excess must carry over across contexts and must be produced and sustained by systematic identity prejudices. This does not happen in Medvecky’s account given that the kind of credibility excess that he is concerned with is limited to the context in which science communication occurs.

Thus, even if there were credibility excess, and this credibility excess led to epistemic harms, such harms would not amount to epistemic injustice given that the credibility excess does not extend across contexts. Further, the kind of credibility excess that Medvecky is concerned with is not linked to systematic identity prejudices.

In his argument, Medvecky does not consider prejudices. Rather than credibility excesses being granted due to a prejudiced social imaginary, Medvecky argues that the credibility excess attributed to science communicators stems from omission. According to him, science communication as a practice and as a discipline is epistemically unjust because it creates credibility excess by implying (through omission) that science is the only reliable field worthy of engagement.

On Medvecky’s account, the reason for the attribution of credibility excess is not prejudice but rather the limited focus of science communication. Thus, he argues that merely by not distributing knowledge from fields other than science, science communication creates a credibility excess for science that is worthy of the label of ‘epistemic injustice’. Medvecky acknowledges that Fricker would not agree that this credibility assessment results in injustice, given that it is based on credibility excess rather than credibility deficit, which is why he instead bases his argument on Medina’s account of epistemic injustice.

However, given that Medvecky ignores the kind of systematic prejudice that is necessary for epistemic injustice under Medina’s account, it seems like Medina would not agree, either, that these cases are of the kind that result in epistemic injustice.[3] Even if omissions in the distribution of knowledge had the implications that Medvecky supposes, and it were the case that science communication indeed created a credibility excess for science in this way, this kind of credibility excess would still not be sufficient for epistemic injustice as it is understood in the literature.

Thus, it is not the case that science communication is, as Medvecky argues, fundamentally epistemically unjust because the reasons why the credibility excess is attributed have nothing to do with prejudice and do not occur across contexts. While it is true that there may be epistemic harms that have nothing to do with prejudice, such harms would not amount to epistemic injustice, at least as it is traditionally understood.

Conclusion

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that epistemic injustice lies at the very foundation of science communication. While we agree that there are numerous ways that scientific practices are epistemically unjust, the fact that science communication involves only communicating science does not have the consequences that Medvecky maintains.

We have seen several reasons to deny that failing to distribute other kinds of knowledge implies that they are less valuable than the knowledge one does distribute, as well as reasons to believe that the term ‘epistemic injustice’ wouldn’t apply to such harms even if they did occur. So, while thought provoking and bold, Medvecky’s argument should be resisted.

Contact details: j.matheson@unf.edu, n01051115@ospreys.unf.edu

References

Dotson, K. (2011) Tracking epistemic violence, tracking patterns of silencing. Hypatia 26(2): 236–257.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Medina, J. (2011). The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology, 25(1), 15–35.

Medvecky, F. (2018). Fairness in knowing: Science communication and epistemic justice. Science and Engineering Ethics, 24, 1393–1408.

[1] This is Fricker’s description, See Fricker (2007, p. 1).

[2] Medvecky considers Richard Dawkins being given more credibility than he deserves on matters of religion due to his credibility as a scientist.

[3] A potential response to this point could be to consider scientism as a kind of prejudice akin to sexism or racism. Perhaps an argument can be made where an individual has the identity of ‘science communicator’ and receives credibility excess in virtue of an identity prejudice that favors science communicators. Even still, to be epistemic injustice this excess must track the individual across contexts, as the identities related to sexism and racism do. For it to do so, a successful argument must be given for there being a ‘pro science communicator’ prejudice that is similar in effect to ‘pro male’ and ‘pro white’ prejudices. If this is what Medvecky has in mind, then we need to hear much more about why we should buy the analogy here.

Author Information: Jonathan Matheson & Valerie Joly Chock, University of North Florida, jonathan.matheson@gmail.com.

Matheson, Jonathan; Valerie Joly Chock. “Knowledge and Entailment: A Review of Jessica Brown’s Fallibilism: Evidence and Knowledge.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 55-58.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42k

Photo by JBColorado via Flickr / Creative Commons

 

Jessica Brown’s Fallibilism is an exemplary piece of analytic philosophy. In it, Brown engages a number of significant debates in contemporary epistemology with the aim of making a case for fallibilism about knowledge. The book is divided into two halves. In the first half (ch. 1-4), Brown raises a number of challenges to infallibilism. In the second half (ch. 5-8), Brown responds to challenges to fallibilism. Brown’s overall argument is that since fallibilism is more intuitively plausible than infallibilism, and since it fares no worse in terms of responding to the main objections, we should endorse fallibilism.

What Is Fallibilism?

In the introductory chapter, Brown distinguishes between fallibilism and infallibilism. According to her, infallibilism is the claim that one knows that p only if one’s evidence entails p, whereas fallibilism denies this. Brown settles on this definition after having examined some motivation and objections to other plausible definitions of infallibilism. With these definitions in hand, the chapter turns to examine some motivation for fallibilism and infallibilism.

Brown then argues that infallibilists face a trilemma: skepticism, shifty views of knowledge, or generous accounts of knowledge. Put differently, infallibilists must either reject that we know a great deal of what we think we know (since our evidence rarely seems to entail what we take ourselves to know), embrace a view about knowledge where the standards for knowledge, or knowledge ascriptions, vary with context, or include states of the world as part of our evidence. Brown notes that her focus is on non-skeptical infallibilist accounts, and explains why she restricts her attention in the remainder of the book to infallibilist views with a generous conception of evidence.

In chapter 2, Brown lays the groundwork for her argument against infallibilism by demonstrating some commitments of non-skeptical infallibilists. In order to avoid skepticism, infallibilists must show that we have evidence that entails what we know. In order to do so, they must commit to certain claims regarding the nature of evidence and evidential support.

Brown argues that non-factive accounts of evidence are not suitable for defending infallibilism, and that infallibilists must embrace an externalist, factive account of evidence on which knowing that p is sufficient for p to be part of one’s evidence. That is, infallibilists need to endorse Factivity (p is evidence only if p is true) and the Sufficiency of knowledge for evidence (if one knows that p, then p is part of one’s evidence).

However, Brown argues, this is insufficient for infallibilists to avoid skepticism in cases of knowledge by testimony, inference to the best explanation, and enumerative induction. In addition, infallibilists are committed to the claim that if one knows p, then p is part of one’s evidence for p (the Sufficiency of knowledge for self-support thesis).

Sufficiency of Knowledge to Support Itself

Chapter 3 examines the Sufficiency of knowledge for self-support in more detail. Brown begins by examining how the infallibilist may motivate this thesis by appealing to a probabilistic account of evidential support. If probability raisers are evidence, then there is some reason to think that every proposition is evidence for itself.

The main problem for the thesis surrounds the infelicity of citing p as evidence for p. In the bulk of the chapter, Brown examines how the infallibilist may account for this infelicity by appealing to pragmatic explanations, conversational norms, or an error theory. Finding each of these explanations insufficient to explain the infelicity here, Brown concludes that the infallibilist’s commitment to the Sufficiency of knowledge for self-support thesis is indeed problematic.

Brown takes on the infallibilists’ conception of evidence in Chapter 4. As mentioned above, the infallibilist is committed to a factive account of evidence, where knowledge suffices for evidence. The central problem here is that such an account has it that intuitively equally justified agents (one in a good case and one in a bad case) are not in fact equally justified.

Brown then examines the ‘excuse maneuver’, which claims that the subject in the bad case is unjustified yet blameless in their belief, and the original intuition confuses these assessments. The excuse maneuver relies on the claim that knowledge is the norm of belief. Brown argues that the knowledge norm fails to provide comparative evaluations of epistemic positions where subjects are intuitively more or less justified, and fails to give an adequate account of propositional justification when the target proposition is not believed. In addition, Brown argues that extant accounts of what would provide the subject in the bad case with an excuse are all insufficient.

In Chapter 5 the book turns to defending fallibilism. The first challenge to fallibilism that Brown examines concerns closure. Fallibilism presents a threat to multi-premise closure since one could meet the threshold for knowledge regarding each individual premise, yet fail to meet it regarding the conclusion. Brown argues that giving up on closure is no cost to fallibilists since closure ought to be rejected on independent grounds having to do with defeat.

A subject can know the premises and deduce the conclusion from them, yet have a defeater (undercutting or rebutting) that prevents the subject from knowing the conclusion. Brown then defends such defeat counterexamples to closure from a number of recent objections to the very notion of defeat.

Chapter 6 focuses on undermining defeat and recent challenges that come to it from ‘level-splitting’ views. According to level-splitting views, rational akrasia is possible—i.e., it is possible to be rational in simultaneously believing both p and that your evidence does not support p. Brown argues that level-splitting views face problems when applied to theoretical and practical reasoning. She then examines and rejects attempts to respond to these objections to level-splitting views.

Brown considers objections to fallibilism from practical reasoning and the infelicity of concessive knowledge attributions in Chapter 7. She argues that these challenges are not limited to fallibilism but that they also present a problem for infallibilism. In particular, Brown examines how (fallibilist or infallibilist) non-skeptical views have difficulty accommodating the knowledge norm for practical reasoning (KNPR) in high-stakes cases.

She considers two possible responses: to reject KNPR or to maintain KNPR by means of explain-away maneuvers. Brown claims that one’s response is related to the notion of probability one takes as relevant to practical reasoning. According to her, fallibilists and infallibilists tend to respond differently to the challenge from practical reasoning because they adopt different views of probability.

However, Brown argues, both responses to the challenge are in principle available to each because it is compatible with their positions to adopt the alternative view of probability. Thus, Brown concludes that practical reasoning and concessive knowledge attributions do not provide reasons to prefer infallibilism over fallibilism, or vice versa.

Keen Focus, Insightful Eyes

Fallibilism is an exemplary piece of analytic philosophy. Brown is characteristically clear and accessible throughout. This book will be very much enjoyed by anyone interested in epistemology. Brown makes significant contributions to contemporary debates, making this a must read for anyone engaged in these epistemological issues. It is difficult to find much to resist in this book.

The arguments do not overstep and the central thesis is both narrow and modest. It’s worth emphasizing here that Brown does not argue that fallibilism is preferable to infallibilism tout court, but only that it is preferable to a very particular kind of infallibilism: non-skeptical, non-shifty infallibilism. So, while the arguments are quite strong, the target is narrower.

One of the central arguments against fallibilism that Brown considers concerns closure. While she distinguishes multi-premise closure from single-premise closure, the problems for fallibilism concern only the former, which she formulates as follows:

Necessarily, if S knows p1-n, competently deduces, and thereby comes to believe q, while retaining her knowledge of p1-n throughout, then S knows q. (101)

The fallibilist threshold condition is that knowledge that p requires that the probability of p on one’s evidence be greater than some threshold less than 1. This threshold condition generates counterexamples to multi-premise closure in which S fails to know a proposition entailed by other propositions she knows. Where S’s evidence for each premise gives them a probability that meets the threshold, S knows each of the premises.

If together these premises entail q, then S knows premises p1-n that jointly entail conclusion q. The problem is that S knowing the premises in this way is compatible with the probability of the conclusion on S’s evidence not meeting the threshold. Thus, this presents the possibility of counterexamples to closure and a problem for fallibilism.

As the argument goes, fallibilists must deny closure and this is a significant cost. Brown’s reply is to soften the consequence of denying closure by arguing that it is implausible due to alternative (and independent) reasons concerning defeat. Brown’s idea is that closure gives no reason to reject fallibilism, or favor infallibilism, given that defeat rules out closure in a way that is independent of the fallibilism-infallibilism debate.

After laying out her response, Brown moves on to consider and reply to objections concerning the legitimacy of defeat itself. She ultimately focuses on defending defeat against such objections and ignores other responses that may be available to fallibilists when dealing with this problem. Brown, though, is perhaps a little too quick to give up on closure.

Consider the following alternative framing of closure:

If S knows [p and p entails q] and believes q as the result of a competent deduction from that knowledge, then S knows q.

So understood, when there are multiple premises, closure only applies when the subject knows the conjunction of the premises and that the premises entail the conclusion. Framing closure in this way avoids the threshold problem (since the conjunction must be known). If S knows the conjunction and believes q (as the result of competent deduction), then S’s belief that q cannot be false. This is the case because the truth of p entailing q, coupled with the truth of p itself, guarantees that q is true. This framing of closure, then, eliminates the considered counterexamples.

Framing closure in this way not only avoids the threshold problem, but plausibly avoids the defeat problem as well. Regarding undercutting defeat, it is at least much harder to see how S can know that p entails q while possessing such a defeater. Regarding rebutting defeat, it is implausible that S would retain knowledge of the conjunction if S possesses a rebutting defeater.

However, none of this is a real problem for Brown’s argument. It simply seems that she has ignored some possible lines of response open to the fallibilist that allows the fallibilist to keep some principle in the neighborhood of closure, which is an intuitive advantage.

Contact details: jonathan.matheson@gmail.com

References

Brown, Jessica. Fallibilism: Evidence and Knowledge. Oxford: Oxford University Press, 2018.

Author Information: Jensen Alex, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson, University of North Florida, jonathan.matheson@gmail.com

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “Conscientiousness and Other Problems: A Reply to Zagzebski.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 10-13.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3Sr

We’d first like to thank Dr. Zagzebski for engaging with our review of Epistemic Authority. We want to extend the dialogue by offering brief comments on several issues that she raised.

Conscientiousness

In our review we brought up the case of a grieving father who simply could not believe that his son had died despite conclusive evidence to the contrary. This case struck us as a problem case for Zagzebski’s account of rationality. For Zagzebski, rationality is a matter of conscientiousness, and conscientiousness is a matter of using your faculties as best you can to get to truth, where the best guide for a belief’s truth is its surviving conscientious reflection. The problem raised by the grieving father is that his belief that his son is still alive will continuously survive his conscientious reflection (since he is psychologically incapable of believing otherwise), yet it is clearly an irrational belief. In her response, Zagzebski makes the following claims:

(A) “To say he has reasons to believe his son is dead is just to say that a conscientiously self-reflective person would treat what he hears, reads, sees as indicators of the truth of his son’s death. So I say that a reason just is what a conscientiously self-reflective person sees as indicating the truth of some belief.” (57)

and,

(B) “a conscientious judgment can never go against the balance of one’s reasons since one’s reasons for p just are what one conscientiously judges indicate the truth of p.” (57)

These claims about the case lead to a dilemma. Either conscientiousness is to be understood subjectively or objectively, and either way we see some issues. First, if we understand conscientiousness subjectively, then the father seems to pass the test. We can suppose that he is doing the best he can to believe truths, but the psychological stability of this one belief causes the dissonance to be resolved in atypical ways. So, on a subjective construal of conscientiousness, he is conscientious and his belief about his son has survived conscientious reflection.

We can stipulate that the father is doing the best he can with what he has, yet his belief is irrational. Zagzebski’s (B) above seems to fit a subjective understanding of conscientiousness and leads to such a verdict. This is also how we read her in Epistemic Authority more generally. Second, if we understand conscientiousness objectively, then it follows that the father is not being conscientious. There are objectively better ways to resolve his psychic dissonance even if they are not psychologically open to him.

So, the objective understanding of conscientiousness does not give the verdict that the grieving father is rational. Zagzebski’s (A) above fits with an objective understanding of conscientiousness. The problem with the objective understanding of conscientiousness is that it is much harder to get a grasp on what it is. Doing the best you can with what you have has a clear meaning on the subjective level and gives a nice responsibilist account of conscientiousness. However, when we abstract away from the subject’s best efforts and the subject’s faculties, how should we understand conscientiousness? Is it to believe in accordance with what an ideal epistemic agent would conscientiously believe?

To us, while the objective understanding of conscientiousness avoids the problem, it comes with new problems, chief among which is the need for a fleshed-out account of conscientiousness, so understood. In addition, the objective construal of conscientiousness does not appear to be suited for how Zagzebski deploys the concept in other areas of the book. For instance, regarding her treatment of peer disagreement, Zagzebski claims that each party should resolve the dissonance in a way that favors what they trust most when thinking conscientiously about the matter. The conscientiousness in play here sounds quite subjective, since rational resolution is simply a matter of sticking with what one trusts the most (even if an ideal rational agent wouldn’t be placing their trust in the same states and even when presented with evidence to the contrary).

Reasons

Zagzebski distinguishes between 1st and 3rd person reasons, in part, to include things like emotions as reasons. For Zagzebski,

“1st person or deliberative reasons are states of mind that indicate to me that some belief is true. 3rd person, or theoretical reasons, are not states of mind, but are propositions that are logically or probabilistically connected to the truth of some proposition. (What we call evidence is typically in this category)” (57)

We are troubled by the way that Zagzebski employs this distinction. First, it is not clear how these two kinds of reasons are related. Does a subject have a 1st person reason for every 3rd person reason? After all, not every proposition that is logically or probabilistically connected to the truth of a proposition is part of an individual’s evidence or is one of their reasons. So, are the 3rd person reasons that one possesses reasons that one has access to by way of a 1st person reason? How could a 3rd person reason be a reason that I have if not by way of some subjective connection?

The relation between these two kinds of reasons deserves further development, since Zagzebski puts this distinction to a great deal of work in the book. The second issue results from Zagzebski’s claim that “1st person and 3rd person reasons do not aggregate” (57). If 1st and 3rd person reasons do not aggregate, then they do not combine to give a verdict as to what one has all-things-considered reason to believe. This poses a significant problem in cases where one’s 1st and 3rd person reasons point in different directions.

Zagzebski’s focus is on one’s 1st person reasons, but what then of one’s 3rd person reasons? 3rd person reasons are still reasons, yet if they do not aggregate with 1st person reasons, and 1st person reasons are determining what one should believe, it’s hard to see what work is left for 3rd person reasons. This is quite striking since these are the very reasons epistemologists have focused on for centuries.

Zagzebski’s embrace of 1st person reasons is ostensibly a movement to integrate the concepts of rationality and truth with resolutely human faculties (e.g. emotion, belief, and sense-perception) that have largely been ignored by the Western philosophical canon. Her critical attitude toward Western hyper-intellectualism and the rationalist worldview is understandable and, in certain ways, admirable. Perhaps the movement to engage emotion, belief, and sense-perception as epistemic features can be preserved, but only in the broader context of an evidence-centered epistemology. Further research should channel this movement toward an examination of how non-traditional epistemic faculties as 1st person reasons may be mapped to 3rd person reasons in a way that is cognizant of self-trust in personal experience, that is, toward an account of aggregation that is grounded fundamentally in evidence.

Biases

In the final part of her response, Zagzebski claims that the insight regarding prejudice within communities can bolster several of her points. She refers specifically to her argument that epistemic self-trust commits us to epistemic trust in others (and its expansion to communities), as well as her argument about communal epistemic egoism and the Rational Recognition Principle. She emphasizes the importance of communities to regard others as trustworthy and rational, which would lead to the recognition of biases within them—something that would not happen if communities relied on epistemic egoism.

However, biases have staying power beyond egoism. Even those who are interested in widening and deepening their perspective through engaging with others can nevertheless have deep biases that affect how they integrate this information. Although Zagzebski may be right in emphasizing the importance of communities acting in this way, it seems too idealistic to imply that such honest engagement would result in the recognition and correction of biases. While such engagement might highlight important disagreements, Zagzebski’s analysis of disagreement, where it is rational to stick with what you trust most, will far too often be an open invitation to maintain (if not reinforce) one’s own biases and prejudice.

It is also important to note that the worry concerning biases and prejudice cannot be resolved by emphasizing a move to communities, given that communities are subject to the same biases and prejudices as the individuals that compose them. Individuals, in trusting their own communities, will only reinforce the biases and prejudice of their members. So, this move can make things worse, even if sometimes it can make things better. Zagzebski’s expansion of self-trust to communities and her Rational Recognition Principle commit communities only to recognize others as (prima facie) trustworthy and rational by means of recognizing their own epistemic faculties in those others.

However, doing this does not do much in terms of the disclosure of biases given that communities are not committed to trust the beliefs of those they recognize as rational and trustworthy. Under Zagzebski’s view, it is possible for a community to recognize another as rational and trustworthy, without necessarily trusting their beliefs—all without the need to succumb to communal epistemic egoism. Communities are, then, able to treat disagreement in a way that resolves dissonance for them.

That is, by trusting their own beliefs more than those of other communities. This is so even when recognizing those communities as rational and trustworthy as themselves because, under Zagzebski’s view, communities are justified in maintaining their beliefs over those of others not for egoistic reasons but because, by withstanding conscientious self-reflection, they trust their beliefs more than those of others. Resolving dissonance from disagreement in this way is clearly more detrimental than beneficial, especially in the case of biased individuals and communities, since it would lead them to keep their biases.

Although, as Zagzebski claims, attention to cases of prejudice within communities may lend further importance to her argument for extending self-trust to the communal level, it does not do much in terms of disclosing biases insofar as dissonance from disagreement is resolved in the way she proposes. Her proposal leads not to the disclosure of biases, as she implies, but to their reinforcement, given that biases—though plausibly unrecognized—are what communities and individuals would trust most in these cases.

Contact details: jonathan.matheson@gmail.com

References

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “A Review of Linda Zagzebski’s Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 29-34.

Zagzebski, Linda T. Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief. Oxford University Press, 2015.

Zagzebski, Linda T. “Trust in Others and Self-Trust: Regarding Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 56-59.

Author Information: Linda T. Zagzebski, University of Oklahoma, lzagzebski@ou.edu

Zagzebski, Linda T. “Trust in Others and Self-Trust: Regarding Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 56-59.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3MA


Image credit: Oxford University Press

Many thanks to Jensen Alex, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson (2017) for your extensive review of Epistemic Authority (2015). I have never seen a work by four philosophers working together, and I appreciate the collaboration it must have taken for you to produce it. I learned from it and hope that I can be a help to you and the readers of SERRC.

What is Inside and What is Outside

I would like to begin by summarizing the view of the mind I am using, which I hope will clarify the central place of conscientious self-reflection in my book, and the way that connects with reasons. I am using a modern view of the mind in which the mind has a boundary.[1] There is a difference between what is inside and what is outside. The mind has faculties that naturally aim at making our mental states fit the outside world in characteristic ways. Perceptual faculties, epistemic faculties, and emotional faculties all do that. They may do so successfully or they may not. So perceptions can be veridical or non-veridical; beliefs can be true or false; emotions can fit or not fit their intentional objects. This view of the mind leads to a generalization of the problem of epistemic circularity: we have no way of telling that any conscious state fits an external object without referring to other conscious states whose fittingness we can also question—hence, the need for self-trust. But we do have a way to detect that something is wrong with our connection to the outside world—that we have a mistaken perceptual state, a false belief, an inappropriate or exaggerated emotion, a skewed value, etc.—by the experience of dissonance among our states.

For instance, a belief state might clash with a memory or a perceptual state or a belief acquired by testimony, or the cognitive component of an emotional state. Some dissonance is resolved immediately and without reflection, as when I give up my seeming memory of turning off the sprinkler system when I hear the sprinklers come on, but often dissonance cannot be resolved without awareness of the conflicting states and reflection upon them. Since the mind cannot get beyond its own boundary, all we can do is (a) trust that our faculties are generally reliable in the way they connect us to the outside world, and (b) attempt to use them the best way we can to reach their objects. That is what I call “conscientiousness.” I define epistemic conscientiousness as using our faculties in the best way we can to get the truth (48). Ultimately, our only test that any conscious state fits its object is that it survives conscientious reflection upon our total set of conscious states, now and in the future.

The authors raise the objection that my account is not sufficiently truth-centered because there is more than one way to resolve dissonance. That is, of course, true. The issue for a particular person is finding the most conscientious way to resolve the conflict, a question that sometimes has a plain answer and sometimes does not. The authors give the example of a father who cannot bring himself to believe that his son was killed in war even though he has been given a substantial body of evidence of his son’s death. It is possible for the man to restore harmony in his psyche by abandoning any states that conflict with his belief that his son is alive. Why do we think it is not rational for him to do that? Because we are told that his own faculties are giving him overwhelming evidence that his son is dead, and presumably his faculties will continue to do so forever. His son will never return. If he is to continue believing his son is alive, he has to continuously deny what he is told by sources he has always trusted, which means he has to continuously fabricate reasons why the sources are no longer trustworthy and are compounding their mistakes, and why new sources are also mistaken. If some of his reasons are sensory, he may even have to deny the evidence of his senses. That means that he is not epistemically conscientious as I have defined it because he is not trying to make his belief about his son true. Instead, he is trying to maintain the belief come what may. But we are told that it is psychologically impossible for him to recognize that his son has died. If that is true, then it is psychologically impossible for him to be epistemically conscientious, and hence rational. I would not deny that such a thing can happen, but in that case there is nothing more to be said.

The Nature of Reasons

This leads to my view on the nature of reasons. Why do we say that the father has many reasons to believe his son is dead, in fact, so many that if he is rational, he will give up the belief that his son still lives? We say that because we know what conscientious people do when given detailed and repeated testimony by sources whose trustworthiness has survived all of their past conscientious reflection and with no contrary evidence. To say he has reasons to believe his son is dead is just to say that a conscientiously self-reflective person would treat what he hears, reads, sees as indicators of the truth of his son’s death. So I say that a reason just is what a conscientiously self-reflective person sees as indicating the truth of some belief.

Self-trust is more basic than reasons because we do not have any reason to think that what we call reasons do in fact indicate the truth without self-trust. (Chap 2, sec.5). Self-trust is a condition for what we call a reason to be in fact an indicator of truth. That means that contrary to what the authors maintain, a conscientious judgment can never go against the balance of one’s reasons since one’s reasons for p just are what one conscientiously judges indicate the truth of p. There can, however, be cases in which it is not clear which way the balance of reasons go, and I discuss some of those cases in Chapter 10 on disagreement. Particularly difficult to judge are the cases in which some of the reasons are emotions.

The fact that emotions can be reasons brings up the distinction between 1st person and 3rd person reasons, which I introduce in Chapter 3, and discuss again in chapters 5, 6, and 10. (The authors do not mention this distinction). What I call 1st person or deliberative reasons are states of mind that indicate to me that some belief is true. 3rd person, or theoretical reasons, are not states of mind, but are propositions that are logically or probabilistically connected to the truth of some proposition. (What we call evidence is typically in this category). 3rd person reasons can be laid out on the table for anybody to consider. I say that 1st person and 3rd person reasons do not aggregate. They cannot be put together to give a verdict on the balance of reasons in a particular case independent of the way they are treated by the person who is conscientiously reflecting. The distinction between the two kinds of reasons is important for more than one purpose in the book. I use the distinction to show that 1st person reasons broaden the range of reasons considerably, including states of emotion, belief, perception, intuition, and memory.

A conscientiously self-reflective person can treat any of these states as indicators of the truth of some proposition. We think that we access 3rd person reasons because of our trust in ourselves when we are conscientious. And we do access 3rd person reasons provided that we are in fact trustworthy. This distinction is important in cases of reasonable disagreement because two parties to a disagreement share some of their 3rd person reasons, but they will never share their 1st person reasons. The fact that each party has certain 1st person reasons is a 3rd person reason, but that fact will never have the same function in deliberation as 1st person reasons, and we would not want it to do so.

The authors raise some questions about the way we treat our reasons when they are pre-empted by the belief or testimony of an authority. What happens to the reasons that are pre-empted? Using pre-emption in Raz’s sense, I say that they do not disappear and they are not ignored. They continue to be reasons for many beliefs.  Pre-emption applies to the limited scope of the authority’s authority. When I judge that A is more likely to get the truth whether p than I am, then A’s testimony whether p replaces my independent reasons for and against p. But my reasons for and against p are still beliefs, and they operate as reasons for many beliefs outside the scope of cases in which I judge that A is an authority. Pre-emption also does not assume that I control whether or not I pre-empt. It is rational to pre-empt when I reasonably judge that A satisfies the justification thesis. If I am unable to pre-empt, then I am unable to be rational. In general, I think that we have quite a bit of control over the cases in which we pre-empt, but the theory does not require it. As I said about the case of the father whose son died in a war, I do not assume that we can always be rational.[2]

On Our Biases

The authors also bring up the interesting problem of biases in ourselves or in our communities. A prejudiced person often does not notice her prejudices even when she is reflecting as carefully as she can, and her trust in her community can make the situation worse since the community can easily support her prejudices and might even be the source of them. This is an important insight, and I think it can bolster several points I make in the book. For one thing, cases of bias or prejudice make it all the more important that we have trust in others whose experience widens and deepens our own and helps us to identify unrecognized false beliefs and distorted feelings, and it makes particularly vivid the connection between emotion and belief and the way critical reflection on our emotions can change beliefs for the better.

My argument in Chapter 3 that epistemic self-trust commits us to epistemic trust in others, and the parallel argument in Chapter 4 that emotional self-trust commits us to emotional trust in others would be improved by attention to these cases. The problem of prejudice in communities can also support my argument in Chapter 10, section 4 that what I call communal epistemic egoism is false. I argue that communities are rationally required to think of other communities the same way individuals are rationally required to think of other individuals. Just as self-trust commits me to trust in others, communal self-trust commits a community to trust in other communities. Since biases are most commonly revealed by responses outside the community, it is a serious problem if communities succumb to communal egoism.

In the last section of Chapter 10 I propose some principles of rationality that are intended to show some consequences of the falsehood of communal egoism. One is the Rational Recognition Principle: If a community’s belief is rational, its rationality is recognizable, in principle, by rational persons in other communities. Once we admit that rationality is a quality we have as human beings, not as members of a particular community, we are forced to recognize that the way we are seen from the outside is prima facie trustworthy, and although we may conscientiously reject it, we need reasons to do so. It is our own conscientiousness that requires us to reflect on ourselves with external eyes. A very wide range of trust in others is entailed by self-trust. That is one of the main theses of the book.

References

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “A Review of Linda Zagzebski’s Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 29-34.

Zagzebski, Linda T. Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief. Oxford University Press, 2015.

[1] It is not mandatory to think of the mind this way, although it is the most common view in the modern period. I am working on the difference between this approach and the more open view of the mind that dominated before the modern era in my project, The Two Greatest Ideas, Soochow Lectures, 2018.

[2] Christoph Jaeger offers extended objections to my view of pre-emption and I reply in Episteme, April 2016. That issue also includes an interesting paper by Elizabeth Fricker on my book and my reply. See European Journal for Philosophy of Religion, Dec. 2014, which contains twelve papers on Epistemic Authority and my replies, including several that give special attention to pre-emption.

Author Information: Jensen Alex, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson, University of North Florida, jonathan.matheson@gmail.com

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “A Review of Linda Zagzebski’s Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 29-34.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3J3

Image credit: Oxford University Press

Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief
Linda Zagzebski
Oxford University Press (reprint 2015)
296 pp.

As with her celebrated Virtues of the Mind, Linda Zagzebski again examines the application of concepts familiar in a different normative domain to the epistemic domain. In this case, the connection is with social and political philosophy, and with the concepts of authority and autonomy in particular. The book covers a broad range of contemporary epistemological topics, attempting to gain insights from those in social and political philosophy. In what follows we will briefly summarize the book and raise several points of criticism.

Analyzing the Chapters

Zagzebski makes her own position clear from the outset—that subjects should indeed take beliefs on the authority of others, and in fact must do so to act rationally. However, before this argument is given, she insists that the reader understand why there is such a “strong proclivity” to denying this claim (6). In Chapter 1, Zagzebski follows the historical progression of thought that led to this cultural pattern, arguing that it has led our modern societies to place a strong emphasis on autonomy and egalitarianism, ultimately diminishing the value of authority outside of oneself.

In Chapter 2, Zagzebski develops her account of trust. She defines “trust” as a combination of epistemic, affective, and behavioral components that lead us to believe that our epistemic faculties will get us to the truth, to feel trusting toward them in that respect, and to treat them accordingly (37-8). She argues that this trust is rational upon reflection, relying on her understanding of what it means to be rational: “to do a better job of what we do in any case—what our faculties do naturally” (30). According to her, we naturally try to resolve dissonance, where dissonance is internal conflict between a person’s mental states. She concludes that epistemic self-trust is the most rational response to dissonance, including the dissonance produced upon discovery of epistemic circularity: the problem that one has no way of telling whether one’s epistemic faculties are reliably accurate without depending on those same faculties.

Zagzebski moves toward the substance of her argument in her third chapter. She argues that the way one’s faculties are bound up with both the desire for truth and the belief that they can access the truth commits one to trusting the faculties of others. This leads into Zagzebski’s principle of “epistemic universalism,” which asserts that another person’s having some belief is itself a prima facie reason to believe it, given that the other person’s epistemic faculties are in order and that they are epistemically conscientious.

Zagzebski expands the circle of trust to include emotions in Chapter 4. She argues that we need to trust our emotional dispositions, in particular the emotion of admiration, which then gives us another foundational reason for epistemic trust in others (75). With regard to our natural emotional dispositions she says that “we need basic trust in the tendency of our emotion dispositions to produce fitting emotions for the same reason we need basic trust in the tendency of our epistemic faculties to produce true beliefs” (83). It is from this emotion of admiration that we can then conscientiously trust in other epistemic exemplars.

In chapter 5, Zagzebski argues that authority in the epistemic realm is justified. Based on Joseph Raz’s account of political authority, she defines authority as a “normative power that generates reasons for others to do or believe something preemptively” (102). Here a preemptive reason is one that replaces other reasons the subject has and is not simply added to them. Zagzebski proposes an epistemic analogue of Raz’s Preemption Thesis, which states that the fact that an authority has a belief p is a preemptive reason for me to believe p (107). She also formulates epistemic analogues for Raz’s Normal Justification Thesis in order to justify taking a belief on epistemic authority. Zagzebski proposes that the authority of another person’s belief is justified for me when I conscientiously judge that I am more likely to form a true belief and avoid a false belief, or that I am more likely to form a belief that survives my conscientious self-reflection, if I believe what the authority believes than if I try to figure out what to believe myself (110-1).

In the sixth chapter, Zagzebski focuses on the concept of testimony as it relates to epistemic authority, advocating for a trust-model of testimony. On her account, testimony is a contractual “telling” which occurs between a teller and hearer, in which both sides have responsibilities. The teller implicitly requests the hearer’s trust and assumes the associated responsibility. The hearer also has expectations of the teller, especially when a future action is carried out according to the content of the teller’s testimony. Because of this contractual nature, the standard of conscientiousness is higher in testimony than in the general formation of a belief. The authority of testimony is justified both by the fact that believing the testimony will more likely get the truth than self-reliance, as well as the fact that beliefs obtained through testimony are more likely to survive self-reflection than those formed through self-reliance.

Zagzebski turns her attention to epistemic communities in Chapter 7. She argues that epistemic authority in communities can be justified by one’s conscientious judgment that one is more likely to believe the truth, or to get a belief that will survive one’s self-reflection if one believes what “We” (the community) believe rather than if one tries to figure out what to believe by oneself in a way that is independent of “Us.” Here communities are seen as an extended self. Zagzebski would argue that communally acquired beliefs are more likely to survive communal reflection, which follows from her “extended self” argument. Thus, as long as one accepts one’s community as an extended self, one can in this way acquire reasons to believe on the authority of one’s community.

In chapter 8, Zagzebski examines moral epistemic authority and its limitations. Zagzebski sees no reason to deny that there are epistemic exemplars in the moral domain, considering the rejection of moral truth and egalitarianism as possible reasons for rejecting moral authority. She argues that testimony is not an adequate model for most moral learning because of two limitations: (1) testimony lacks motivational force and (2) it does not offer understanding. According to her, the way in which one can get a moral belief from another person has to do with the emotion that grounds such moral judgment. She claims that testimony is able to convey conceptual judgment and relevant similarities to persons or situations that elicit emotional response, but this is not sufficient to produce the emotional response itself (172). It follows then, she argues, that “I do not take a belief on authority; I take an emotion on authority, and the emotion is the ground for my moral belief” (174). The argument gets extended in the following chapter to religious authorities. Applying her earlier argument to this context, she defends the claim that individuals often conscientiously judge that if they believe in accordance with their religious community they will do better, and so often individuals are justified in deferring to their religious community.

In Chapter 10, Zagzebski turns to the contemporary debate concerning peer disagreement. As she diagnoses the debate, it is primarily a conflict between the competing values of egalitarianism and self-reliance. Zagzebski sees steadfast views of disagreement overvaluing self-reliance and stronger conciliatory views overvaluing egalitarianism, and finds both mistaken. Her own take on the debate is to construe peer disagreement as a conflict within self-trust, where one finds dissonance amongst the things that she trusts (her opinion, her peer’s opinion, etc.). Given this, and her preceding argument, Zagzebski’s recommendation is to resolve the dissonance in a way that favors what one trusts the most when thinking conscientiously about the matter. There is thus no universal response to disagreement. How any given disagreement is to be handled will depend upon the particular details of the case, in particular, which psychic states the subject trusts the most. For instance, one’s trust in a particular belief may be stronger than one’s trust in what appears to be evidence to the contrary, in which case it would be rational to resolve the dissonance while maintaining one’s belief.

In the final chapter of Epistemic Authority, the author primarily seeks to elucidate her notion of autonomy, ultimately to defend the claim that autonomy is not compromised by her model of epistemic authority. Autonomy is the primary property and function of Zagzebski’s “executive self,” which seeks to eliminate psychic dissonance through self-reflection. Zagzebski claims that conscientious judgment and self-reflection are the most reliable ways of avoiding epistemic dissonance—that being conscientious is the best one can do. She maintains that we should trust in the connection between rationality (as manifest in the act of conscientious self-reflection) and actually being right, because self-reflection is the only way we can assess whether our beliefs have survived (which in turn is the only way we can get at the truth).

Assessing Epistemic Authority

We turn now to a critical assessment of the book.

One general concern is with Zagzebski’s account of rationality and epistemic justification, which is central to her overall argument. She claims that “rationality is a property we have when we do what we do naturally, only we do a better job of it” (30), and of central importance here is our natural desire to achieve a harmonious self (31). Dissonance amongst our psychic states (beliefs, desires, emotions, etc.) is thus to be avoided, and a conscientious judgment about what states will harmoniously survive our self-reflection is what justifies those states. A problem for this account is that it is not sufficiently truth-connected.

Zagzebski attempts to adequately connect her account to truth through the achievement of psychic harmony. She claims that “the ultimate test of whether my faculties have succeeded in fitting their objects is that they fit each other” (230). Such a coherentist account, however, is fraught with well-known problems. There are many ways of having harmonious states that are nothing close to truth conducive. The problem comes from the fact that harmony can be achieved in more than one way. In fact, any state can be protected so long as one is able to make accommodations elsewhere. Zagzebski recognizes this fact, and claims that some ways of resolving dissonance are better than others, but these preferred ways are simply those that one conscientiously judges not to create future dissonance. Such an account simply doubles down on trusting harmony and can be seen to give the wrong verdicts.

For instance, consider a father whose son is away at war. Suppose that the father is then given a substantial body of information that his son has been killed. However, the father simply cannot come to believe that his son has died. It is psychologically impossible for him, and he recognizes this fact. In terms of planning his psychic future, then, the belief that his son is alive will clearly be part of the picture. He can be certain that this state will survive his reflection (even his conscientious reflection) since he recognizes it to be psychologically immovable. Thus, his only path to harmony is to distrust and abandon all states in conflict with that belief. It is apparent, however, that such a course of action is not to be recommended, and the remaining belief that his son is alive is not justified for him. Sometimes, doing one’s best is not good enough. This holds in epistemology as well. While the father ought not be faulted for his belief, it is not justified for him.

A related issue concerns the role of reasons on Zagzebski’s account. From the outset, Zagzebski’s account centers around trust. The motivation for this seems to be that there is no non-circular defense of the reliability of one’s faculties. However, it is not clear what Zagzebski makes of such epistemic circularity. It might be thought that it is implied to be defective, but if so, it would be nice to hear more about the problem since many epistemologists have defended some kind of circularity. Adding to the confusion, however, is Zagzebski’s claim that she, and others, have “strong circular reasons to trust her epistemic faculties” (93). If such circular justification is possible, then the motivation for the role of trust is diminished. In addition, a large portion of the book is dedicated to arguments that individuals have various kinds of prima facie reasons (i.e. to believe what others believe, to trust others as I trust myself, to trust those who are conscientious).

While the arguments for these principles are quite plausible, there are several reasons to be unsatisfied. First, missing from the account is anything about the strength of these reasons or what kind of considerations would defeat these reasons. Without this further information, it is unclear what to make of these reasons and how they affect our overall outlook. Second, it is difficult to see what role these reasons can play in Zagzebski’s overall account of rationality and justification. Since, for her, rationality and justification are a matter of one’s conscientious judgments, the role of reasons seems to drop out entirely.

One’s reasons may influence their conscientious judgments, but they needn’t, and when one’s conscientious judgments go against their reasons, on Zagzebski’s view they ought to go with their judgment. For instance, in applying her account to the epistemic significance of disagreement, Zagzebski’s proposal is to resolve the dissonance resulting from discovered disagreement in accordance with what one conscientiously accords the most trust. However, on her account, significant errors regarding what one conscientiously trusts have no role to play in terms of what the subject is justified in believing. Many will see this as a significant cost since misplaced trust is not without epistemic consequences. A final concern with Zagzebski’s account of reasons concerns her preemption thesis.

Zagzebski claims that “the fact that the authority has a belief p is a reason for me to believe p that replaces my other reasons relevant to believing p and is not simply added to them” (107). This thesis raises some questions (e.g., where do those reasons go, and can they ever return?) as well as some problems. One problem concerns ability. It is unclear how one would be able to comply with this principle and replace their current reasons. A deeper problem, however, concerns the consequences of compliance. If one loses their own reasons on an issue, they could lose information critical to both the future evaluation of the putative authority and the relevant claim. This seems to allow a dangerous way for a putative authority to maintain its authority, because the other reasons in the domain have been replaced and are no longer relevant.

Zagzebski also fails to consider cases in which an epistemic authority abuses his/her authoritative status. For instance, a noticeable gap in the book is the lack of attention paid to the problem of epistemic injustice. Perhaps even more worrisome is that Zagzebski’s account appears to actually exacerbate the problem of epistemic injustice. Prejudices can be, and often are, unintended. That is to say that a prejudiced person is likely unable to recognize his/her own prejudices. Further, biases are sticky—they don’t change easily.

Given all of this, it appears that the best way to avoid future dissonance is by adjusting the states that conflict with the biases. While such an accommodation of biases might be the most effective route to harmony, it is surely not the rational course of action. When biases survive reflection, the subject’s conscientious judgment is informed by prejudices that are both unfair and unfounded. Thus, Zagzebski’s account can be both epistemically and morally defective: epistemically, because the hearer would miss out on a truth that, according to Zagzebski, he/she is naturally interested in acquiring (33), and morally, because an epistemic injustice could be inflicted on a person or community as a result. The apparent rational survival of biases affects our ability to accurately trust others and recognize epistemic authorities.

This problem only seems to get worse when applied to epistemic communities. Consider intergroup bias and groupthink—a community is very likely to acquire and entrench beliefs that confirm the community’s group identity, while simultaneously believing that it is thinking conscientiously. The epistemic opacity which was concerning at the individual level is only aggravated at the community level.

For Zagzebski, the community itself was formed out of chains of individual conscientious judgments, meaning that both individual and group distortions are compounded upon one another in any given community. If a gender bias survives a community’s reflection, then, under Zagzebski’s account, the community could be justified in trusting the belief that a female scientist is untrustworthy even when there is evidence against such a belief and/or against the bias itself. This would lead to community reinforcement and distancing from others, given that the community would trust the way in which it acquires beliefs (which includes trusting the bias even when its members fail to recognize it) and distrust those communities that acquire beliefs in a way it does not trust (without the bias). This appears to be highly problematic.

Zagzebski’s Epistemic Authority will no doubt play a role in shaping a number of contemporary epistemological debates. The connections she draws to political philosophy provide a novel way of viewing a number of epistemological problems. While we find a number of problems with Zagzebski’s final account, Epistemic Authority will be of value for anyone interested in engaging in these debates.