Archives For epistemic trust

Author Information: Alfred Moore, University of York, UK.

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 26-32.

The PDF of the article gives specific page references.

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons


In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending to more precision than they know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and it thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question; it is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications and, worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, he compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in his case, by pension funds) involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, he claims, produces worse outcomes than monitoring by results alone, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than in the way the agent regards as most effective.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002, 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish either the quality of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather ‘denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency’ (Epstein 1996, 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons


We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort: it had the skills and resources to do this sort of monitoring, and it developed relevant competence while retaining interests closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to engage more deeply with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: it seems to show you what is going on, but is in fact likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal prompted a process in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the way in which transparency can potentially lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?



Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review 111, no. 2 (2017): 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., R. C. Richards, and K. R. Knobloch. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication 8 (2014): 62-89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014): 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms.” In Jon Elster (ed.), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press, 2015, 131-151.

MacKenzie, Michael, and Mark E. Warren. “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.), Deliberative Systems. Cambridge: Cambridge University Press, 2012, 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961): 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009.

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology.

Author Information: Benjamin W. McCraw, University of South Carolina Upstate.

McCraw, Benjamin W. “Combes on McCraw on the Nature of Epistemic Trust: A Rejoinder.” Social Epistemology Review and Reply Collective 5, no. 8 (2016): 28-31.

The PDF of the article gives specific page numbers.


Image credit: Marius Brede, via flickr

My genuine thanks to Richard Combes for continuing his thoughtful analysis of my views on epistemic trust. In this really short reply, let me offer a quick re-rejoinder to a few of his latest comments.

Combes on Trust-In and Trust-That

First, let’s get clear on Combes’ view. He claims that “one epistemically trusts S if and only if one has certain beliefs about S’s thick reliability” (2016, 8) where ‘thick reliability’ refers to the state where “one has consciously tracked S’s past history, judged that S enjoys some perhaps unique expertise, and therefore should depend on s’s testimony…” (8). That is, H trusts S just in case H believes that:

(a) H has tracked S’s history with respect to the accuracy of S’s utterances,
(b) S’s track record is reliable and
(c) H should depend on S’s future assertions.

Author Information: Richard Combes, University of South Carolina Upstate.

Combes, Richard. “McCraw on the Nature of Epistemic Trust—Part II.” Social Epistemology Review and Reply Collective 5, no. 6 (2016): 7-10.

The PDF of the article gives specific page numbers.


Image credit: Arne Halvorsen, via flickr

In my original response to “The Nature of Epistemic Trust,” by Benjamin McCraw (2015), I defended the view that epistemic trust reduces to one’s belief that another’s allegedly successful ability to track the truth in the past underwrites confidence in the latter’s present and future testimony (2015). On the basis of the introspective data, I deny that any irreducibly distinct, non-propositional attitude of epistemic trust supervenes on such a belief. Epistemic trust is not presented to consciousness as an episodic quale. There is nothing that it is like to trust someone other than being convinced that the trustee’s history validates the truster’s continued support in him or her as a beacon of knowledge.

Author Information: Benjamin McCraw, University of South Carolina Upstate.

McCraw, Benjamin. “Thinking Through Social Epistemology: A Reply to Combes, Smolkin, and Simmons.” Social Epistemology Review and Reply Collective 5, no. 4 (2016): 1-12.

The PDF of the article gives specific page numbers.


Image credit: Steve Simmonds, via flickr

I want to thank Richard Combes, Doran Smolkin, and Aaron Simmons for their gracious, penetrating, and excellent commentaries on my paper. They’ve offered me outstanding points to consider, objections to ponder, and directions to pursue. In what follows, I’ll offer some thoughts of my own and respond to what I think are the truly insightful criticisms they raise for my model of epistemic trust (ET). Let me address Combes first.

Author Information: J. Aaron Simmons, Furman University.

Simmons, J. Aaron. “Existence and Epistemic Trust.” Social Epistemology Review and Reply Collective 4, no. 12 (2015): 14-19.

The PDF of the article gives specific page numbers.


Image credit: Steve Rotman, via flickr

The history of philosophy repeatedly demonstrates that it is possible to read an author differently, and maybe even better, than she reads herself. For example, in many ways, Edmund Husserl quite sensibly considered his phenomenological project primarily to be a matter of epistemology. Yet, Martin Heidegger goes a long way toward showing the ontological stakes of Husserl’s epistemology such that phenomenology gets radically rethought not by going counter to Husserl, but, as Heidegger (1968) would put it in What is Called Thinking?, by going to Husserl’s encounter.[1] While reading Benjamin W. McCraw’s (2015) excellent essay “The Nature of Epistemic Trust,” I was struck by the way that, like Heidegger’s reading of Husserl, McCraw’s account of epistemic trust (ET) productively opens onto issues far beyond where McCraw himself goes. In this short response to McCraw’s essay, I will look to what I consider to be the existential stakes of McCraw’s proposal regarding epistemic trust. Crucially, I do not take my thoughts here to be a direct critique of McCraw, but instead an attempt to think with him by taking seriously the importance of epistemic trust and its implications for subjectivity and social life more broadly.

Author Information: Doran Smolkin, Kwantlen Polytechnic University.

Smolkin, Doran. “Clarifying the Dependence Condition: A Reply to Benjamin McCraw’s ‘The Nature of Epistemic Trust’.” Social Epistemology Review and Reply Collective 4, no. 10 (2015): 10-13.

The PDF of the article gives specific page numbers.


Image credit: Pao, via flickr

Much of what we come to believe is based on trusting the communication of others. It would, therefore, be helpful to better understand the nature of this sort of trust. Benjamin McCraw offers one very clear and well-argued account in his “The Nature of Epistemic Trust.” McCraw claims that a hearer or audience (H) places epistemic trust (ET) in a person or speaker (S) that some proposition (p) is true if and only if:

1. H believes that p;
2. H takes S to communicate that p;
3. H depends upon S’s (perceived) communication for H’s belief that p; and
4. H sees S as epistemically well-placed with respect to p. (McCraw 2015, 13).


Author Information: Fabien Medvecky, University of Otago.

Medvecky, Fabien. “Knowing From Others: A Review of Knowledge on Trust and A Critical Introduction to Testimony.” Social Epistemology Review and Reply Collective 4, no. 9 (2015): 11-12.

The PDF of the article gives specific page numbers.


Image Credit: Oxford University Press; Bloomsbury Academic

A Critical Introduction to Testimony
Axel Gelfert
Bloomsbury, 2014
264 pp.

Knowledge on Trust
Paul Faulkner
Oxford University Press, 2011
240 pp.

If you are hungry for some reading on testimonial epistemology—the study of knowledge created and gained through testimony—then Axel Gelfert’s introductory text, A Critical Introduction to Testimony (2014), sits as a perfect entrée to Paul Faulkner’s Knowledge on Trust (2011). Both are well written and both are aimed at philosophers, though they are very different in style. While Gelfert’s volume is clearly aimed as an upper undergraduate or postgraduate philosophy course text, presenting the reader with a good overview of the field, Faulkner’s work delves into more specificity as it develops a rich theory of how we acquire new knowledge as a result of testimony. And while I am sympathetic to Faulkner’s views on the role of trust as the foundation for testimonial knowledge, I think his discussion on trust is a little quick.

Author Information: Richard E. Combes, University of South Carolina Upstate.

Combes, Richard E. “McCraw on the Nature of Epistemic Trust.” Social Epistemology Review and Reply Collective 4, no. 8 (2015): 76-78.

The PDF of the article gives specific page numbers.


Image credit: purplejavatroll, via flickr

In “The Nature of Epistemic Trust,” Benjamin W. McCraw (2015) offers an appealing account of what it means to trust someone epistemically. More than merely the recognition that some state of affairs is the case, epistemic trust includes an affective, non-propositional attitude as well, namely, a strong conviction in the integrity of the one trusted. According to McCraw, if Jones places epistemic trust in Smith that some proposition is true, the following four conditions need to be satisfied:

Author Information: Susann Wagenknecht, Aarhus University.

Wagenknecht, Susann. “Four Asymmetries Between Moral and Epistemic Trustworthiness.” Social Epistemology Review and Reply Collective 3, no. 6 (2014): 82-86.

The PDF of the article gives specific page numbers.

Questions of how the epistemic and the moral, typically conceived of as non-epistemic, are intertwined in the creation and corroboration of scientific knowledge have spurred long-standing debates (see, e.g., the debate on epistemic and non-epistemic values of theory appraisal in Rudner 1953, Longino 1990 and Douglas 2000). To unravel the intricacies of epistemic and moral aspects of science, it seems, is a paradigmatic riddle in the Philosophy and Social Epistemology of Science. So, when philosophers discuss the character of trust and trustworthiness as a personal attribute in scientific practice, the moral-epistemic intricacies of trust again fascinate the philosophical mind.

Author Information: Gloria Origgi, CNRS, Institut Jean Nicod, Paris.

Origgi, Gloria. “Reply to Paul Faulkner’s Comments.” Social Epistemology Review and Reply Collective 1, no. 10 (2012): 1-3.

The PDF of the article gives specific page numbers.

I thank Paul Faulkner for his insightful comments. I am flattered that he found the time to go through my paper so carefully. Yet, I do not know exactly what I am supposed to do now because the paper is already published and his comments are in the style of a competent “referee” — I should have received it before the publication! Also, we are on a blog of social epistemology, discussing epistemic injustice, and we cannot pretend I have studied analytical philosophy at Oxford. Thus, in order to avoid a conversation that involves the biases, the identity prejudices and the epistemic injustices that we are here to debunk, I ask the reader (Paul included) to situate my intervention (and my paper) as coming from an Italian scholar living and working in France for whom English is her third professional language. Among the many epistemic injustices that we commit in academia, one of the strongest is linguistic injustice — a much debated subject at least in continental Europe [1] — and some of my arguments may appear less convincing than those coming from an Oxford educated philosopher because the style of writing and structuring of thoughts we have learned is radically different.