
Author Information: Valerie Joly Chock & Jonathan Matheson, University of North Florida, n01051115@ospreys.unf.edu & j.matheson@unf.edu.

Matheson, Jonathan, and Valerie Joly Chock. “Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 1-9.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44H

Image by sekihan via Flickr / Creative Commons


Epistemic injustice occurs when someone is wronged in their capacity as a knower.[1] More and more attention is being paid to the epistemic injustices that exist in our scientific practices. In a recent paper, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. In what follows we briefly explain his argument before raising several challenges to it.

Overview

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. First, let’s get clear on the target. According to Medvecky, science communication is in the business of distributing knowledge – scientific knowledge.

As Medvecky uses the term, ‘science communication’ is an “umbrella term for the research into and the practice of increasing public understanding of and public engagement with science.” (1394) Science communication is thus both a field and a practice, and consists of:

institutionalized science communication; institutionalized in government policies on the public understanding of and public engagement with the sciences; in the growing numbers of academic journals and departments committed to further the enterprise through research and teaching; in requirements set by funding bodies; and in the growing numbers of associations clustering under the umbrella of science communication across the globe. (1395)

Science communication involves the distribution of scientific knowledge from experts to non-experts, so science communication is in the distribution game. As such, Medvecky claims that issues of fair and just distribution arise. According to Medvecky, these issues concern both what knowledge is dispersed and to whom it is dispersed.

In examining the fairness of science communication, Medvecky connects his discussion to the literature on epistemic injustice (Anderson, Fricker, Medina). While exploring epistemic injustices in science is not novel, Medvecky’s focus on science communication is. To argue that science communication is epistemically unjust, Medvecky relies on Medina’s (2011) claim that credibility excesses can result in epistemic injustice. Here is José Medina,

[b]y assigning a level of credibility that is not proportionate to the epistemic credentials shown by the speaker, the excessive attribution does a disservice to everybody involved: to the speaker by letting him get away with things; and to everybody else by leaving out of the interaction a crucial aspect of the process of knowledge acquisition: namely, opposing critical resistance and not giving credibility or epistemic authority that has not been earned. (18-19)

Since credibility is comparative, credibility excesses given to members of some group can create epistemic injustice, testimonial injustice in particular, toward members of other groups. Medvecky makes the connection to science communication as follows:

While there are many well-argued reasons for communicating, popularizing, and engaging with science, these are not necessarily reasons for communicating, popularizing, and engaging only with science. Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialized treatment.

This uniqueness creates a credibility excess for science as a field. And since science communication creates credibility excess by implying that concerted efforts to communicate non-science disciplines as fields of reliable knowledge is not needed, then science communication, as a practice and as a discipline, is epistemically unjust. (1400)

While the principal target here is the field of science communication, any credibility excesses enjoyed by the field will trickle down to the practitioners within it. If science is being given a credibility excess, then those engaged in scientific practice and communication are also receiving such a comparative advantage over non-scientists.

So, according to Medvecky, science communication is epistemically unjust to knowers – knowers in non-scientific fields. Since these non-scientific knowers are given a comparative credibility deficit (in contrast to scientific knowers), they are wronged in their capacity as knowers.

The Argument

Medvecky’s argument can be formally put as follows:

  1. Science is not a unique and privileged field.
  2. If (1), then science communication creates a credibility excess for science.
  3. Science communication creates a credibility excess for science.
  4. If (3), then science communication is epistemically unjust.
  5. Science communication is epistemically unjust.
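
Put schematically, the argument is two chained applications of modus ponens. The following is our rendering in propositional notation; the sentence letters S, C, and U are labels we introduce for convenience and do not appear in Medvecky’s text:

```latex
% Schematic rendering of Medvecky's argument (our notation, not his):
%   S : science is a unique and privileged field
%   C : science communication creates a credibility excess for science
%   U : science communication is epistemically unjust
\begin{align*}
1.\ & \neg S               && \text{premise} \\
2.\ & \neg S \rightarrow C && \text{premise} \\
3.\ & C                    && \text{from 1, 2 by modus ponens} \\
4.\ & C \rightarrow U      && \text{premise} \\
5.\ & U                    && \text{from 3, 4 by modus ponens}
\end{align*}
```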

Premise (1) is motivated by the claim that there are fields other than science that are equally important to communicate, to popularize, and to have non-specialists engage with. Medvecky claims that not only does non-scientific knowledge exist, such knowledge can be just as reliable as scientific knowledge, just as important to our lives, and just as in need of translation into layman’s terms. So, while scientific knowledge is surely important, it is not alone in this regard.

Premise (2) is motivated by claiming that science communication falsely represents science as a unique and privileged field since the concerns of science communication lie solely within the domain of science. By only communicating scientific knowledge, and failing to note that there are other worthy domains of knowledge, science communication falsely presents itself as a privileged field.

As Medvecky puts it, “Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialised treatment.” (1400) So, science communication falsely represents science as special. Falsely representing a field as special in contrast to other fields creates a comparative credibility excess for that field and the members of it.

So, science communication implies that other fields are not as worthy of such engagement by falsely treating science as a unique and privileged field. This gives science and scientists a comparative credibility excess to these other disciplines and their practitioners.

(3) follows validly from (1) and (2). If (1) and (2) are true, science communication creates a credibility excess for science.

Premise (4) is motivated by Medina’s (2011) work on epistemic injustice. Epistemic injustice occurs when someone is harmed in their capacity as a knower. While Fricker limited epistemic injustice (and testimonial injustice in particular) to cases where someone was given a credibility deficit, Medina has forcefully argued that credibility excesses are equally problematic since credibility assessments are often comparative.

Given the comparative nature of credibility assessments, parties can be epistemically harmed even if they are not given a credibility deficit. If other parties are given credibility excesses, a similar epistemic harm can be brought about due to comparative assessments of credibility. So, if science communication gives science a credibility excess, science communication will be epistemically unjust.

(5) follows validly from (3) and (4). If (3) and (4) are true, science communication is epistemically unjust.

The Problems

While Medvecky’s argument is provocative, we believe that it is also problematic. In what follows we motivate a series of objections to his argument. Our focus here will be on the premises that most directly relate to epistemic injustice. So, for our purposes, we are willing to grant premise (1). Even granting (1), there are significant problems with both (2) and (4). Highlighting these issues will be our focus.

We begin with our principal concerns regarding (2). These concerns are best seen by first granting that (1) is true – granting that science is not a unique and privileged field. Even so, science communication would not create a credibility excess. First, it is important to try to locate the source of the alleged credibility excess. Science communicators do deserve a higher degree of credibility in distributing scientific knowledge than non-scientists. When it comes to scientific matters, we should trust the scientists more. So, the claim cannot be that non-scientists should be afforded the same amount of credibility on scientific matters as scientists.

The problem might be thought to be that scientists enjoy a credibility excess in virtue of their scientific credibility somehow carrying over to non-scientific fields where they are less credible. While Medvecky does briefly consider such an issue, this is not his primary concern in this paper.[2] Medvecky’s fundamental concern is that science communication represents scientific questions and knowledge as more valuable than questions and knowledge in other domains. According to Medvecky, science communication does this by distributing only scientific knowledge, even though scientific knowledge is not unique and privileged (premise (1)).

But do you represent a domain as more important or valuable just because you don’t talk about other domains? Perhaps an individual who discussed only science in every context would imply that scientific information is the only information worth communicating, but such a situation is quite different from the one we are considering.

For one thing, science communication occurs within a given context, not across all contexts. Further, since that context is expressly about communicating science, it is hard to see how one could reasonably infer that knowledge in other domains is less valuable. Let’s consider an analogy.

Philosophy professors tend to only talk about philosophy during class (or at least let’s suppose). Should students in a philosophy class conclude that other domains of knowledge are less valuable since the philosophy professor hasn’t talked about developments in economics, history, biology, and so forth during class? Given that the professor is only talking about philosophy in one given context, and this context is expressly about communicating philosophy, such inferences would be unreasonable.

A Problem of Overreach

We can further see that there is an issue with (2) because it both overgeneralizes and is overly demanding. Let’s consider these in turn. If (2) is true, then the problem of creating credibility excesses is not unique to science communication. When it comes to knowledge distribution, science communication is far from the only practice/field to have a narrow and limited focus regarding which knowledge it distributes.

So, if there are multiple fields worthy of such engagement (granting (1)), any practice/field that is not concerned with distributing all such knowledge will be guilty of generating a similar credibility excess (or at least trying to). For instance, the American Philosophical Association (APA) is concerned with distributing philosophical knowledge and knowledge related to the discipline of philosophy. It exclusively funds endeavors related to philosophy and public initiatives with a philosophical focus. If doing so is sufficient for creating a credibility excess, given that other fields are equally worthy of such attention, then the APA is creating a credibility excess for the discipline of philosophy. This doesn’t seem right.

Alternatively, consider a local newspaper. This paper is focused on distributing knowledge about local issues. Suppose that it is also involved in the community, sponsoring local events and initiatives that make the local news more engaging. Supposing that there is nothing unique or privileged about this town, Medvecky’s argument for (2) would have us believe that the paper is creating a credibility excess for the issues of this town. This too is the wrong result.

This overgeneralization problem can also be seen by considering a practical analogy. Suppose that a bakery only sells and distributes baked goods. If there is nothing unique and privileged about baked goods – if there are other equally important goods out there (the parallel of premise (1)) – then Medvecky’s reasoning would have it that the bakery is guilty of a kind of injustice by virtue of not being in the business of distributing those other (equally valuable) goods.

The problem is that omissions in distribution don’t have the implications that Medvecky supposes. The fact that an individual or group is not in the business of distributing some kind of good does not imply that those goods are less valuable.

There are numerous legitimate reasons why one may employ limitations regarding which goods one chooses to distribute, and these limitations do not imply that the other goods are somehow less valuable. Returning to the good of knowledge, focusing on distributing some knowledge (while not distributing other knowledge) does not imply that the other knowledge is less valuable.

This overgeneralization problem leads to an overdemanding problem with (2). The overdemanding problem concerns what all would be required of distributors (whether of knowledge or more tangible goods) in order to avoid committing injustice. If omissions in distribution had the implications that Medvecky supposes, then distributors, in order to avoid injustice, would have to refrain from limiting the goods they distribute.

If (2) is true, then science communication must fairly and equally distribute all knowledge in order to avoid injustice. And, as the problem of creating credibility excesses is not unique to science communication, this would apply to all other fields that involve knowledge distribution as well. The problem here is that avoiding injustice requires far too much of distributors.

An Analogy to Understand Avoiding Injustice

Let’s consider the practical analogy again to see how avoiding injustice is overdemanding. To avoid injustice, the bakery must sell and distribute much more than just baked goods. It must sell and distribute all the other goods that are just as important as the baked ones it offers. The bakery would then have to become a supermarket, or perhaps even a superstore, in order to avoid injustice.

Requiring the bakery to offer a lot more than baked goods is not only overly demanding but also unfair. The bakery does not carry the other goods it is required to offer in order to avoid injustice. It may not even have the means to acquire these goods, which may itself be part of its reason for limiting the goods it offers.

As it is overdemanding and unfair to require the bakery to sell and distribute all goods in order to avoid injustice, it is overdemanding and unfair to require knowledge distributors to distribute all knowledge. Just as the bakery does not have non-baked goods to offer, those involved in science communication likely do not have the relevant knowledge in the other fields.

Thus, if they are required to distribute that knowledge as well, they are required to do a lot of homework. They would have to learn about everything in order to justly distribute all knowledge. This is an unreasonable expectation. Even if they were able to do so, they would not be able to distribute all knowledge in a timely manner. Requiring this much of distributors would slow down the distribution of knowledge.

Furthermore, just as the bakery may not have the means needed to distribute all the other goods, distributors may not have the time or other means to distribute all the knowledge that they are required to distribute in order to avoid injustice. It is reasonable to utilize an epistemic division of labor (including in knowledge distribution), much like there are divisions of labor more generally.

Credibility Excess

A final issue with Medvecky’s argument concerns premise (4). Premise (4) claims that the credibility excess in question results in epistemic injustice. While it is true that a credibility excess can result in epistemic injustice, it need not. So, we need reasons to believe that this particular kind of credibility excess results in epistemic injustice. One reason to think that it does not has to do with the meaning of the term ‘epistemic injustice’ itself.

As it was introduced to the literature by Fricker, and as it has been used since, ‘epistemic injustice’ does not refer simply to any harm to a knower but rather to a particular kind of harm that involves identity prejudice—i.e. prejudice related to one’s social identity. Fricker claims that “the speaker sustains a testimonial injustice if and only if she receives a credibility deficit owing to identity prejudice in the hearer” (28).

At the core of both Fricker’s and Medina’s accounts of epistemic injustice is the relation between unfair credibility assessments and prejudices that distort the hearer’s perception of the speaker’s credibility. Prejudices about particular groups are what unfairly affect (positively or negatively) the epistemic authority and credibility hearers grant to the members of such groups.

Mere epistemic errors in credibility assessments, however, do not create epistemic injustice. While a credibility excess may result in an epistemic harm, whether this is a case of epistemic injustice depends upon the reason why that credibility excess is given. Fricker and Medina both argue that in order for an epistemic harm to be an instance of epistemic injustice, it must be systematic. That is, the epistemic harm must be connected to an identity prejudice that renders the subject at the receiving end of the harm susceptible to other types of injustices besides testimonial.

Fricker argues that epistemic injustice is a product of prejudices that “track” the subject through different dimensions of social activity (e.g. economic, professional, political, religious, etc.). She calls these “tracker prejudices” (27). When tracker prejudices lead to epistemic injustice, this injustice is systematic because it is systematically connected to other kinds of injustice.

Thus, a prejudice is systematic when it persistently affects the subject’s credibility in various social directions. Medina accepts this and argues that credibility excess results in epistemic injustice when it is caused by a pattern of wrongful differential treatment that stems in part from mismatches between reality and the social imaginary, which he defines as the collectively shared pool of information that provides the social perceptions against which people assess each other’s credibility (Medina 2011).

He claims that a prejudiced social imaginary is what establishes and sustains epistemic injustices. As such, prejudices are crucial in determining whether credibility excesses result in epistemic injustice. If the credibility excess stems from a systematically prejudiced social imaginary, then it results in epistemic injustice. If systematic prejudices are absent, then, even if there is credibility excess, there is no epistemic injustice.

Systemic Prejudice

For there to be epistemic injustice, then, the credibility excess must carry over across contexts and must be produced and sustained by systematic identity prejudices. This does not happen in Medvecky’s account given that the kind of credibility excess that he is concerned with is limited to the context in which science communication occurs.

Thus, even if there were credibility excess, and this credibility excess led to epistemic harms, such harms would not amount to epistemic injustice given that the credibility excess does not extend across contexts. Further, the kind of credibility excess that Medvecky is concerned with is not linked to systematic identity prejudices.

In his argument, Medvecky does not consider prejudices. Rather than credibility excesses being granted due to a prejudiced social imaginary, Medvecky argues that the credibility excess attributed to science communicators stems from omission. According to him, science communication as a practice and as a discipline is epistemically unjust because it creates credibility excess by implying (through omission) that science is the only reliable field worthy of engagement.

On Medvecky’s account, the reason for the attribution of credibility excess is not prejudice but rather the limited focus of science communication. Thus, he argues that merely by not distributing knowledge from fields other than science, science communication creates a credibility excess for science that is worthy of the label of ‘epistemic injustice’. Medvecky acknowledges that Fricker would not agree that this credibility assessment results in injustice given that it is based on credibility excess rather than credibility deficits, which is itself why he bases his argument on Medina’s account of epistemic injustice.

However, given that Medvecky ignores the kind of systematic prejudice that is necessary for epistemic injustice on Medina’s account, it seems that Medina would not agree, either, that these cases are of the kind that result in epistemic injustice.[3] Even if omissions in the distribution of knowledge had the implications that Medvecky supposes, and it were the case that science communication indeed created a credibility excess for science in this way, this kind of credibility excess would still not be sufficient for epistemic injustice as it is understood in the literature.

Thus, it is not the case that science communication is, as Medvecky argues, fundamentally epistemically unjust: the credibility excess is not attributed because of prejudice, and it does not extend across contexts. While it is true that there may be epistemic harms that have nothing to do with prejudice, such harms would not amount to epistemic injustice, at least as it is traditionally understood.

Conclusion

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that epistemic injustice lies at the very foundation of science communication. While we agree that there are numerous ways that scientific practices are epistemically unjust, the fact that science communication involves only communicating science does not have the consequences that Medvecky maintains.

We have seen several reasons to deny that failing to distribute other kinds of knowledge implies that they are less valuable than the knowledge one does distribute, as well as reasons to believe that the term ‘epistemic injustice’ wouldn’t apply to such harms even if they did occur. So, while thought provoking and bold, Medvecky’s argument should be resisted.

Contact details: j.matheson@unf.edu, n01051115@ospreys.unf.edu

References

Dotson, K. (2011). Tracking epistemic violence, tracking patterns of silencing. Hypatia, 26(2), 236–257.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Medina, J. (2011). The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology, 25(1), 15–35.

Medvecky, F. (2018). Fairness in knowing: Science communication and epistemic justice. Science and Engineering Ethics, 24, 1393–1408.

[1] This is Fricker’s description. See Fricker (2007, p. 1).

[2] Medvecky considers Richard Dawkins being given more credibility than he deserves on matters of religion due to his credibility as a scientist.

[3] A potential response to this point could be to consider scientism as a kind of prejudice akin to sexism or racism. Perhaps an argument can be made where an individual has the identity of ‘science communicator’ and receives credibility excess in virtue of an identity prejudice that favors science communicators. Even still, to count as epistemic injustice this excess must track the individual across contexts, as the identities related to sexism and racism do. For that to happen, a successful argument must be given for there being a ‘pro science communicator’ prejudice that is similar in effect to ‘pro male’ and ‘pro white’ prejudices. If this is what Medvecky has in mind, then we need to hear much more about why we should buy the analogy here.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8


A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons


In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, feigning more precision than they know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, Prat compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in this case, by pension funds) involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, Prat claims, produces worse outcomes than monitoring by results alone, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002, 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather ‘denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency’ (Epstein 1996, 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons


We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to judge whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort: it had the skills and resources to do this sort of monitoring, developing competence while having interests more closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to engage more deeply with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review 111, no. 2 (2017): 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., and Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication 8 (2014): 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014): 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms.” In Jon Elster (ed.), Secrecy and Publicity in Votes and Debates, 131-151. Cambridge: Cambridge University Press, 2015.

MacKenzie, Michael, and Mark E. Warren. “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.), Deliberative Systems, 95-124. Cambridge: Cambridge University Press, 2012.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961): 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author Information: Paul Faulkner, University of Sheffield, paul.faulkner@sheffield.ac.uk

Faulkner, Paul. 2012. Trust and the assessment of credibility. Social Epistemology Review and Reply Collective 1 (8): 1-6.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-oN


Epistemic failings can be ethical failings. This insight is owed to Miranda Fricker, who explores this idea in developing a theory of epistemic injustice. [1] A central type of epistemic injustice is testimonial injustice, which has two components. A knower suffers a testimonial injustice when she is not given due credit and is thereby prevented from doing what is fundamental to being a knower, which is to inform others of what she knows. This is the first component, which is epistemic: a testimonial injustice starts with a misjudgement of a knower’s credibility; it starts, in Fricker’s terms, with the knower suffering a credibility deficit. The second, ethical, component is the explanation of this credibility deficit. There is a testimonial injustice when the cause of this credibility deficit is not innocent error but some form of prejudice. Here Fricker wants to draw our attention to one pervasive prejudice, which she calls identity prejudice. [2] This is the prejudice that attaches to a person by virtue of their social identity and which thereby tracks that person through the multitude of social activities: economic, political and so on. Thus the paradigm case of testimonial injustice is identity-prejudicial credibility deficit. [3]

The stated objective of Gloria Origgi’s paper “Epistemic Injustice and Epistemic Trust” is:

to broaden her [Fricker’s] analysis in two ways: first, I will argue that the ways in which credibility judgments are biased go far beyond the central case of identity prejudice; and, second, I will try to detail some of the mechanisms that control our ways of making testimonial injustices to the speakers [sic]. [4]
