
Author Information: Raphael Sassower, University of Colorado, Colorado Springs.

Sassower, Raphael. “On Political Culpability: The Unconscious?” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 26-29.

The pdf of the article gives specific page references. Shortlink:

Image by Morning Calm Weekly Newspaper, U.S. Army via Flickr / Creative Commons


In the post-truth age where Trump’s presidency looms large because of its irresponsible conduct, domestically and abroad, it’s refreshing to have another helping in the epistemic buffet of well-meaning philosophical texts. What can academics do? How can they help, if at all?

Anna Elisabetta Galeotti, in her Political Self-Deception (2018), is convinced that her (analytic) philosophical approach to political self-deception (SD) is crucial for three reasons. First, because of the importance of conceptual clarity about the topic, second, because of how one can attribute responsibility to those engaged in SD, and third, in order to identify circumstances that are conducive to SD. (6-7)

For her, “SD is the distortion of reality against the available evidence and according to one’s wishes.” (1) The distortion, according to Galeotti, is motivated by wishful thinking, the kind that licenses someone to ignore facts or distort them in a fashion suitable to one’s (political) needs and interests. The question of “one’s wishes,” whether conscious or not, remains open.

What Is Deception?

Galeotti surveys the different views of deception that “range from the realist position, holding that deception, secrecy, and manipulation are intrinsic to politics, to the ‘dirty hands’ position, justifying certain political lies under well-defined circumstances, to the deontological stance denouncing political deception as a serious pathology of democratic systems.” (2)

But she follows none of these views; instead, her contribution to the philosophical and psychological debates over deception, lies, self-deception, and mistakes is to argue that “political deception might partly be induced unintentionally by SD” and that it is also sometimes “the by-product of government officials’ (honest) mistakes.” (2) The consequences, though, of SD can be monumental since “the deception of the public goes hand in hand with faulty decision,” (3) and those eventually affect the country.

Her three examples are President Kennedy and Cuba (Ch. 4), President Johnson and Vietnam (Ch. 5), and President Bush and Iraq (Ch. 6). In all cases, the devastating consequences of “political deception” (and for Galeotti it is based on SD) were obviously due to “faulty” decision making processes. Why else would presidents end up in untenable political binds? Who would deliberately make mistakes whose political and human price is high?

Why Self-Deception?

So, why SD? What is it about self-deception, especially the unintended kind presented here, that differentiates it from garden-variety deceptions and mistakes? Galeotti’s preference for SD is explained in this way: SD “enables the analyst to account for (a) why the decision was bad, given that i[t] was grounded on self-deceptive, hence false beliefs; (b) why the beliefs were not just false but self-serving, as in the result of the motivated processing of data; and (c) why the people were deceived, as the by-product of the leaders’ SD.” (4)

But how would one know that a “bad” decision is “grounded on self-decepti[on]” rather than on false information given by intelligence agents, for example, who were misled by local informants who in turn were misinformed by others, deliberately or innocently? With this question in mind, “false belief” can be based on false information, false interpretation of true information, wishful thinking, an unconscious self-destructive streak, or SD.

In short, one’s SD can be either externally or internally induced, and in each case, there are multiple explanations that could be deployed. Why stick with SD? What is the attraction it holds for analytical purposes?

Different answers are given to these questions at different times. In one case, Galeotti suggests the following:

“Only self-deceptive beliefs are, however, false by definition, being counterevidential [sic], prompted by an emotional reaction to data that contradicts one’s desires. If this is the specific nature of SD . . . then self-deceptive beliefs are distinctly dangerous, for no false belief can ground a wise decision.” (5)

In this answer, Galeotti claims that an “emotional reaction” to “one’s desires” is what characterizes SD and makes it “dangerous.” It is unclear why this is a more dangerous ground for false beliefs than a deliberate, self-serving deceptive scheme; likewise, how does one know one’s true desires? Perhaps the logician is at a loss to counter emotive reaction with cold deduction, or perhaps there is a presumption here that logical and empirical arguments are by definition open to critique while emotions are immune to such strategies, and that analytic philosophy is therefore superior to other methods of analysis.

Defending Your Own Beliefs

If the first argument for seeing SD as an emotional “reaction” that conflicts with “one’s desires” is a form of self-defense, the second argument is more focused on the threat of the evidence one wishes to ignore or subvert. In Galeotti’s words, SD is:

“the unintended outcome of intentional steps of the agent. . . according to my invisible hand model, SD is the emotionally loaded response of a subject confronting threatening evidence relative to some crucial wish that P. . . Unable to counteract the threat, the subject . . . become prey to cognitive biases. . . unintentionally com[ing] to believe that P which is false.” (79; 234ff)

To be clear, the “invisible hand” model invoked here is related to the infamous one associated with Adam Smith and his unregulated markets, where order is maintained, fairness upheld, and freedom of choice guaranteed. Just like Smith, Galeotti appeals to individual agents, in her case the political leaders, as if SD happens to them, as if their conduct leads to an “unintended outcome.”

But the whole point of SD is to ward off the threat of unwelcomed evidence so that some intention is always afoot. Since agents undertake “intentional steps,” is it unreasonable for them to anticipate the consequences of their conduct? Are they still unconscious of their “cognitive biases” and their management of their reactions?

Galeotti confronts this question head on when she says: “This work is confined to analyzing the working of SD in crucial instances of governmental decision making and to drawing the normative implications related both to responsibility ascription and to devising prophylactic measures.” (14) So, the moral dimension, the question of responsibility, does come into play here, unlike the neoliberal argument that pretends to follow Smith’s invisible-hand model but ends with no one being responsible for any exogenous liabilities, to the environment, for example.

Moreover, Galeotti’s most intriguing claim is that her approach is intertwined with a strategic hope for “prophylactic measures” to ensure dangerous consequences are not repeated. She believes this could be achieved by paying close attention to “(a) the typical circumstances in which SD may take place; (b) the ability of external observers to identify other people’s SD, a strategy of precommitment [sic] can be devised. Precommitment is a precautionary strategy, aimed at creating constraints to prevent people from falling prey to SD.” (5)

But this strategy, as promising as it sounds, has a weakness: if people can be prevented from “falling prey to SD,” then SD is preventable, or at least less of an emotional threat than earlier suggested. In other words, either humans can help falling prey to SD or they cannot: if they cannot, then highlighting SD’s danger is important; if they can, then the ubiquity of SD is no threat at all, since simply pointing out their SD would make them realize how to overcome it.

A Limited Hypothesis

Perhaps one clue to Galeotti’s own self-doubt (or perhaps it is a form of self-deception as well) is in the following statement: “my interpretation is a purely speculative hypothesis, as I will never be in the position to prove that SD was the case.” (82) If this is the case, why bother with SD at all? For Galeotti, the advantage of using SD as the “analytic tool” with which to view political conduct and policy decisions is twofold: it allows “proper attribution of responsibility to self-deceivers” and opens “the possibility of preventive measures against SD.” (234)

In her concluding chapter, she offers a caveat, even a self-critique that undermines the very use of SD as an analytic tool (no self-doubt or self-deception here, after all): “Usually, the circumstances of political decision making, when momentous foreign policy choices are at issue, are blurred and confused both epistemically and motivationally. Sorting out simple miscalculations from genuine uncertainty, and dishonesty and duplicity from SD is often a difficult task, for, as I have shown when analyzing the cases, all these elements are present and entangled.” (240) So, SD is one of many relevant variables, but being both emotional and in one’s subconscious, it remains opaque at best, and unidentifiable at worst.

In case you are confused about SD and one’s ability to isolate it as an explanatory model for approaching, post hoc, bad political choices with grave consequences, this statement might help clarify the usefulness of SD: “if SD is to play its role as a fundamental explanation, as I contend, it cannot be conceived of as deceiving oneself, but it must be understood as an unintended outcome of mental steps elsewhere directed.” (240)

So, logically speaking, SD (self-deception) is not “deceiving oneself.” So, what is it? What are “mental steps elsewhere directed”? Of course, it is quite true, as Galeotti says, that “if lessons are to be learned from past failures, the question of SD must in any case be raised. . . Political SD is a collective product,” which is even more difficult to analyze (given its “opacity”), and so how would responsibility be attributed? (244-5)

Perhaps what is missing from this careful analysis is a cold calculation of who is responsible for what and under what circumstances, regardless of SD or any other kind of subconscious desires. Would a psychoanalyst help usher such an analysis?



Galeotti, Anna Elisabetta. Political Self-Deception. Cambridge: Cambridge University Press, 2018.

Author Information: Matthew R. X. Dentith, Institute for Research in the Humanities, University of Bucharest.

Dentith, Matthew R. X. “Between Forteana and Skepticism: A Review of Bernard Wills’ Believing Weird Things.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 48-52.

The pdf of the article gives specific page references. Shortlink:

Image by David Grant via Flickr / Creative Commons


Sometimes, when it is hard to review a book, it is tempting to turn in some kind of personal reflection, one that demonstrates why the reviewer felt disconnected from the text they were reviewing. This review of Bernard N. Wills’ Believing Weird Things – which I received three months ago, and have spent quite a bit of time thinking about in the interim – is just such a review-cum-reflection, because I am not sure what this book is about, nor who its intended audience is.

According to the blurb on the back Believing Weird Things is a response to Michael Shermer’s Why People Believe Weird Things (Henry Holt and Company, 1997). Shermer’s book is one I know all too well, having read and reread it when I started work on my PhD. At the time the book was less than ten years old, and Shermer and his cohort of Skeptics (spelt with a ‘K’ to denote that particular brand of sceptical thought popular among (largely) non-philosophers in the U.S.) were considered to be the first and final word on the rationality (more properly, the supposed irrationality) of belief in conspiracy theories.

Given I was working on a dissertation on the topic, getting to grips with the arguments against belief in such theories seemed crucial, especially given my long and sustained interest in what you might call the contra-philosophy of Skepticism, the work of Charles Fort.

Times for the Fortean

Fort (who Wills mentions in passing) was a cantankerous collector and publisher of strange and inconvenient phenomena. His Book of the Damned (Boni and Liveright, 1919) is an early 20th Century litany of things which seemed to fall outside the systemic study of the world. From rains of frogs, to cities floating in the sky, Fort presented the strange and the wonderful, often without comment. When he did dare to theorise about the phenomena he catalogued, he often contradicted his previous theories in favour of new ones. Scholars of Fort think his lack of a system was quite deliberate: Fort’s damned data was meant to be immune to scientific study.

Fort was hardly a known figure in his day, but his work has gained fans and adherents, who call themselves Forteans and engage in the study of Forteana. Forteans collect and share damned data, from haunted physics laboratories, to falls of angel hair. Often they theorise about what might cause these phenomena, but they also often don’t dispute other interpretations of the same ‘damned data.’

John Keel, one of the U.S.’s most famous Forteans (and who, if he did not invent the term ‘Men in Black,’ at least popularised their existence), had a multitude of theories about the origin of UFOs and monsters in the backwoods of the U.S., which he liberally sprinkled throughout his works. If you challenged Keel on what you thought was an inconsistency of thought, he would brush it off (or get angry at the suggestion he was meant to be consistent in the first place).

I was a fan of Forteana without being a Fortean: I fail the Fortean test of tolerating competing hypotheses, preferring to stipulate terms whilst encouraging others to join my side of the debate. But I love reading Forteana (it is a great source of examples for the social epistemologist), and thinking about alternative interpretations. So, whilst I do not think UAP (unidentified aerial phenomena – the new term for UFO) are creatures from another dimension, I do like thinking about the assumptions which drive such theories.

Note here that I say ‘theories’ quite deliberately: any student of Forteana will quickly become aware that modern Forteans (contra Fort himself) are typically very systematic about their beliefs. It is just that often the Fortean is happy to be a systemic pluralist, happily accepting competing or complementary systems as equally possible.

Weird and Weirder

Which brings me back to Believing Weird Things. The first section concerns beliefs people like Shermer might find weird but Wills argues are reasonable in the context under which they developed. Wills’ interest here is wide, taking in astrology, fairies, and why he is not a Rastafarian. Along the way he contextualises those supposedly weird beliefs and shows how, at certain times or in certain places, they were the product of a systemic study of the world.

Wills points out that a fault of Skepticism is a lack of appreciation for history: often what we now consider rational was once flimflam (plate tectonics), and what was systemic and rational (astrology) is today’s quackery. As Wills writes:

The Ancients do not seem to me to be thinking badly so much as thinking in an alien context and under different assumptions that are too basic to admit evaluation in the ordinary empirical sense (which is not to say they admit of no evaluation whatsoever). Further, there are many things in Aristotle and the Hebrew Bible which strike me as true even though the question of ‘testing’ them scientifically and ‘skeptically’ is pretty much meaningless. In short, the weird beliefs I study are at minimum intelligible, sometimes plausible and occasionally true. [4]

Indeed, the very idea which underpins Shermer’s account, ‘magical thinking,’ seems to fail the skeptical test: why, like Shermer, would you think it is some hardwired function rather than culturally situated? But more importantly, how is magical thinking any different from any other kind of thinking?

This last point is important because, as others have argued (including myself), many beliefs people think are problematic are, when looked at in context with other beliefs, either not particularly problematic, or no more problematic than the beliefs we assume are produced rationally. The psychology of religion back in the early 20th Century is a good example of this: when psychologists worried about religious belief started looking at the similarities in belief formation between the religious and the non-religious, they started to find the same kind of ‘errors’ in irreligious people as well.

In the same respect, the work in social psychology on belief in conspiracy theories seems to be suffering the same kind of problem today: it’s not clear that conspiracy theorists are any less (or more) rational than the rest of us. Rather, often what marks out the difference in belief are the different assumptions about how the world is, or how it works. Indeed, as Wills writes:

Many weird ideas are only weird from a certain assumed perspective. This is important because this assumed perspective is often one of epistemic and social privilege. We tend to associate weird ideas with weird people we look down upon from some place of superior social status. [10]

The first section of Believing Weird Things is, then, possibly the best defence of a kind of Fortean philosophy one could hope for. Yet that is also an unfair judgement, because thinking of Believing Weird Things as a Fortean text is just my imposition: Fort is mentioned exactly once, and only in a footnote. I am only calling this a tentatively Fortean text because I am not sure who the book’s audience is. Ostensibly – at least according to the blurb – it is meant to be a direct reply to Shermer’s Why People Believe Weird Things. But if it is, then it is twenty years late: Why People Believe Weird Things was published in 1997.

Not just that, but whilst Believing Weird Things deals with a set of interesting issues Shermer did not cover (yet ought to have), almost everything which makes up the reply to Why People Believe Weird Things is to be found in the Introduction alone. Now, I’d happily set the Introduction as a reading in a Critical Thinking class or elementary Epistemology class. However, I could not see much use in setting the book as a whole.

What’s Normal Anyway?

Which brings us to the second half of Believing Weird Things. Having set out why some weird beliefs are not that weird when thought about in context, Wills sets out his reasons for thinking that beliefs which aren’t – in some sense – considered weird ought to be. The choice of topics here is interesting, covering Islamophobia, white privilege, violence and the proper attitude towards tolerance and toleration in our polities.

But it invites the question (again) of who his intended audience is meant to be. For example, I also think Islamophobia, racism, and violence are deeply weird, and it worries me that some people still think they are sensible responses. But if Wills is setting out to persuade the other half of the debate, the racists, the bigots, and the fans of violence, then I do not think he will have much luck, as his discussions never seem to get much further than “Here are my reckons!”

And some of those reckons really need more arguments in favour of them.

For example, Wills brings out the old canard that religious beliefs and scientific beliefs are one and the same (presented as ‘religious faith’ and ‘scientific faith’). Not just that, but, in chapter 6, he talks about the things ‘discovered’ by religion. These are presented as being on par with discoveries in the sciences. Yet aren’t the things discovered by religion (‘human beings must suffer before they learn. … existence is suffering’ [48]) really the ‘discoveries’ of, say, philosophers working in a religious system? And aren’t many of these discoveries just stipulations, or religious edicts?

This issue is compounded by Wills’ specification that the process of discovery for religious faith is hermeneutics: the interpretation of religious texts. But that invites even more questions: if you think the gods are responsible for both the world and certain texts in the world, you could imagine hermeneutic inquiry to be somehow equivalent to scientific inquiry; but if you are either doubtful of the gods, or doubtful about the integrity of the gods’ prophets, then there is ample room to doubt there is much of a connection at all between ‘faith’ in science and faith in scripture.

Another example: in chapter 8, Wills states:

Flat-Earthers are one thing but Birthers, say, are quite another: some ideas do not come from a good place and are not just absurd but pernicious. [67]

Now, there is an argument to be had about the merits (or lack thereof) of the Flat Earth theory and the thesis that Barack Obama was not born in the U.S. Some might even claim that the Flat Earth theory is worse, given that belief might entail thinking a lot of very disparate institutions, located globally, are in on a massive cover-up. The idea that Barack Obama is secretly Kenyan has little effect on those of us outside the U.S. electoral system.

None of this is to say there aren’t decent arguments to be had about these topics. It is, instead, to say that often these positions are stipulated. As such, the audience for Believing Weird Things seems to be people who agree with Wills, rather than an attempt by Wills to change hearts and minds.

How to Engage With Weird Beliefs

Which is not to say that the second half of the book lacks merit; it just lacks meat. The chapters on Islamophobia (chapter 8) and racism (chapter 9) are good: the contextualisation of both Islamophobia and the nature of conflicts in the Middle East is well expressed. But they are not particularly novel (especially if you read the work of left-wing commentators). And even if the chapters are agreeable to someone of a left-wing persuasion, all too often they just end: the chapter on violence (chapter 10), for example, has no clear conclusion other than that violence is bad.

Similarly confused is the chapter on tolerance (chapter 11). But the worst offender is the chapter on the death of Conservatism (chapter 14). This could have been an interesting argument about the present state of today’s politics. But the chapter ends abruptly, and with it, the book. There is no conclusion, no tying together of threads. There’s hardly even any mention of Shermer or skepticism in the second half of Believing Weird Things.

Which brings us back to the question: who is this book for? If the book were just the first half it could be seen as both a reply to Shermer and a hesitant stab at a Fortean philosophy. But the second half of the book comes across more as the author’s rumination on some pertinent social issues of the day, and none of that content seems to advance far beyond ‘Here are my thoughts…’

Which, unfortunately, is also the character of this review: in trying to work out who the book is for I find my thoughts as inconclusive as the text itself. None of this is to say that Believing Weird Things is a bad or terrible book. Rather, it is just a collection of the author’s ruminations. So, unless you happen to be a fan of Wills, there is little to this text which substantially advances the debate over belief in anything.



Fort, Charles. The Book of the Damned. Boni and Liveright, 1919.

Shermer, Michael. Why People Believe Weird Things. Henry Holt and Company, 1997.

Wills, Bernard N. Believing Weird Things. Minkowski Institute Press, 2018.

Author Information: Alfred Moore, University of York, UK.

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 26-32.

The pdf of the article gives specific page references. Shortlink:


A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons


In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds) involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than monitoring by results alone, because the agents have an incentive to act in the way the principal regards as appropriate rather than in the way the agent regards as most effective.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002, 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather ‘denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency’ (Epstein 1996, 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and, ultimately, to the production of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons


We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT UP functioned for many as a trust proxy of this sort: it had the skills and resources to do this sort of monitoring, developing relevant competence while having interests more closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to engage more deeply with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?



Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review, 111, no. 2 (2017) pp. 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2017) 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009,

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology.

Author Information: Raimo Tuomela, University of Helsinki,

Tuomela, Raimo. “The Limits of Groups: An Author Replies.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 28-33.

The pdf of the article refers to specific page numbers. Shortlink:

Please refer to:

In their critique Corlett and Strobel (2017) discuss my 2013 book Social Ontology and comment on some of my views. In this reply I will respond to their three central criticisms that I here formulate as follows:[1]

(1) Group members are said in my account to be required to ask for the group’s (and thus the other members’) permission to leave the group, and this seems to go against the personal moral autonomy of the members.

(2) My account does not focus on morally central matters such as personal autonomy, although it should.

(3) My moral notions are based on a utilitarian view of morality.

In this note I will show that claims (1) – (3) are not (properly) justified on closer scrutiny.

Unity Is What’s Missing In Our Lives

Below I will mostly focus on we-mode groups, that is, groups based on we-thinking, we-reasoning, a shared “ethos”, and consequent action as a unified group.[2] Ideally, such we-mode groups are autonomous (externally uncoerced) and hence free to decide about the ethos (viz. the central goals, beliefs, norms, etc.) of their group and to select its position holders in the case of an organized group. Inside the group (one with freely entered members) each member is supposed to be “socially” committed to the others to perform her part of the joint enterprise. (Intentions in general involve commitment to carry out what is intended).

The members of a we-mode group should be able to count on each other not to let them down. The goal of the joint activity typically will not be reached without the other members’ successful part performances (often involving helping). When one enters a we-mode group it is one’s own choice, but if the others cannot be trusted the whole project may be impossible to carry out (think of people building a bridge in their village).

The authors claim that my moral views are based on utilitarianism and hence on some kind of maximization of group welfare, instead of emphasizing individual autonomy and the moral rights of individuals.[3] This is a complex matter, and I will here say only that there is room in my theory for both group autonomy and individual autonomy. The we-mode account states what it takes for people to act in the we-mode (see Tuomela, 2013, ch. 2). According to my account, the members have given up part of their individual autonomy to the group. From this it follows that solidarity with the other members is important. The members of a paradigmatic we-mode group should not let the others down. This is seen as a moral matter.

The Moral Nature of the Act

As to the moral implications of the present approach, when a group is acting intentionally it is as a rule responsible for what it does. But what can be said about the responsibility of a member? Basically, each member is responsible as a group member and also privately morally responsible for the performance of his part. (He could have left the group or expressed his divergent opinion and reasons.) Here we are discussing the properly moral, and not only the instrumental or quasi-moral, implications of group action and of the members’ participation.[4]

A member’s exiting a free (autonomous) group is in some cases a matter for the group to deal with: what sanctions does a group need for quitting members, if their quitting endangers the whole endeavor? Of course the members may exit the group, but then they have to be prepared to suffer the (possibly) agreed-upon sanctions for quitting. Corlett and Strobel focus on the requirement of a permission to leave the group (see pp. 43-44 of Tuomela, 2013). It is up to the group to decide about suitable sanctions; e.g., the members may be expected to follow the majority here. (See ch. 5 of Tuomela, 2013).

Furthermore, those who join the group should of course be clear about what kind of group they are joining. If they later wish to give up their membership, they can leave upon taking on the sanctions, if any, that the group has decided upon. My critics rightly wonder about the expression “permission to leave the group”. My formulations seem to have misleadingly suggested to them that the members are (possibly) trapped in the we-mode group. Note that on p. 44 of my 2013 book I speak of cases where leaving the group harms the other members and propose that sometimes merely informing the members might be appropriate.

How can “permission from the group” best be understood? Depending on the case at hand, it might involve asking the individual members whether they allow the person in question to leave without sanctions. But this sounds rather silly, especially in the case of large groups. Rather, the group may formulate procedures for leaving the group. This would involve institutionalizing the matter and the possible sanctioning system. In the case of paradigmatic autonomous we-mode groups the exit generally is free in the sense that the group itself, rather than an external authority, decides about procedures for exiting the group (see appendix 1 to chapter 2 of Tuomela, 2013). However, those leaving the group might have to face group-based sanctions if their leaving considerably harms the others.

In my account the members of a well-functioning we-mode group can be said, somewhat figuratively, to have given up part of their autonomy and self-determination to their we-mode group. Solidarity between the members is important: the members should not let the others down – or else the group’s project (viz. the members’ joint project) will not be successful. This is a non-utilitarian moral matter – the members are to keep together and not let each other down. Also for practical reasons it is desirable that the members stick together, on pain of not achieving their joint goal – e.g. building a bridge in their village.

People do retain their personal (moral) autonomy in the above kind of cases where entering and exiting a we-mode group is free (especially free from external authorities) or where, in some cases, the members have to satisfy special conditions accepted by their group. I have suggested elsewhere that dissenting members should either leave the group or try to change the ethos of the group. As said above, in specific cases of ethos-related matters the members may use a voting method, e.g. majority voting, even if the minority may want to challenge the result.[5]

Questions of Freedom

According to Corlett and Strobel, freedom of expression is largely blocked and the notion of individual autonomy is dubious in my account (see p. 9 of their critical paper). As was pointed out above, the members may leave the group freely or via an agreed-upon procedure. Individual autonomy is constrained only to the extent needed for performing one’s part, but such performance is the whole point of participating in the first place. Of course the ethos may be discussed along the way, and changes may be introduced if the members, or e.g. the majority of them or another “suitable” number of them, agree. The members enter the group freely, by their own will and through the group’s entrance procedures, and may likewise leave the group through collectively agreed-upon procedures (if such exist).

As we know, autonomy is a concept much used in everyday life, outside moral philosophy. In my account it is used in “autonomous groups”, in the simple sense that the group can make its own decisions about ethos, division of tasks, conditions for entering and exiting the group without coercion by an external authority. Basically, only the autonomous we-mode group can, through its members’ decision, make rules for how people are allowed to join or leave the group.[6]

Corlett and Strobel’s critique that the members of autonomous we-mode groups have no autonomy (in the moral sense) in my account cannot be directed at the paradigmatic case of groups with free entrance, where the group members decide among themselves what is to be done by whom and how to handle the situation of a member wanting to leave the group, perhaps in the middle of a critical situation. Of course, a member cannot always do as he chooses in situations of group action. A joint goal is at stake, and one’s letting the others down when they have good reason to count on one would be detrimental to everyone’s goal achievement. Also, letting the others down is at least socially and morally condemnable.

When people have good reason to drop out, having changed their mind or finding that the joint project is morally dubious, they can exit according to the relevant rules (if such exist in the group). The feature criticized by the present authors – that “others’ permission is required” – is due to my unlucky formulation. What is meant is that in some cases there should be some kind of procedure in the group for leaving. The group members are socially committed to each other to further the ethos, as well as committed to the ethos itself. The social commitment has, of course, the effect that each member looks to the others for cooperative actions and attitudes, and has good reason to do so.

My critics suggest that the members should seek support from the others – indeed, this seems to be what the assumed solidarity of we-mode groups can be taken to provide. However, what they mean could be a procedure to make the ethos more attractive to the members, leading to their renewed support of the ethos, instead of pressuring them to stay in a group with an ethos that no longer interests them. Of course, the ethos may be presented in new ways, but there still may be situations where members want to leave, and they have a right to leave following the agreed-upon procedures. Informing the group in due time, so that the group can find compensating measures, is the least that a member who quits can and should do. The authors discuss examples where heads of states and corporations want to resign. It is typically possible to resign according to the group’s exit rules, if such exist.

Follow the Leader

On page 11 the authors criticize the we-mode account for the fact that non-operative members ought to accept what the operative leaders decide. They claim that a state like the U.S., on the contrary, allows, and in some situations even asks, its citizens to protest. They are, of course, right in their claims concerning special cases. Naturally there will sometimes be situations where protest is called for. The dissidents may then win, and the government (or what have you) will change its course of action. Even the ethos of the group may sometimes have to be reformulated.

Gradual development also occurs in social groups and organizations; the ethos often evolves through dissident actions. When the authorized operatives act in what they deem a feasible way, they do what they were chosen to do. If non-operatives protest against immoral actions of the operatives, they do the right thing morally; but if the operatives act according to the ethos, they are doing their job, although they should have chosen a moral way to achieve the goal. The protest of the non-operatives may have an effect. On the other hand, note that even Mafia groups may act in the we-mode, and do so in immoral ways, in accordance with their own agenda.

The authors discuss yet another kind of example of exiting the group, where asking permission would seem out of place: a marriage. If a married couple is taken to be a we-mode group, the parties would have to agree upon exit conditions (if marriage were not an institutionalized and codified practice, which it usually is). As an institution it is regulated in various ways depending on the culture. The critique summarized by the authors on page 12 has thus far been met. It seems that they have fixated on the formulation that “members cannot leave the group without permission from the other members.” To be sure, my view is that group members cannot just walk out on the others without taking any measures to ease the detrimental effects of their defection. Whether it is permission, compensation or an excuse depends on the case. Protesting is a different story: dissidents often have good reasons to protest, and sometimes they just want to change the ethos instead of leaving.

It’s Your Prerogative

At the end of their critique the authors suggest that I should include in my account a moral prerogative for members to seek the support of their fellow group members, as a courtesy to them and to the group. I have no objection to that. Once more, the expression “permission to leave the group” has been an unfortunate choice of words. It would have been better, e.g., to speak of a member’s being required to inform the others that one has to quit, and to be ready to suffer possible sanctions for letting the others down and perhaps causing the whole project to collapse.

However, dissidents should have the right to protest. Those who volunteer to join a group with a specific ethos cannot always foresee whether the ethos allows for immoral or otherwise unacceptable courses of action. Finally, my phrase “free entrance and exit” may have been misunderstood. As pointed out, the expression refers to the right of the members to enter and exit instead of being forced to join a group and remain there. To emphasize once more, it is in this way that the members of we-mode groups are autonomous. Also, there is no dictator who steers the ethos formation and the choice of position holders. However, although the members may jointly arrange their group life freely, each member is not free to do whatever he chooses when he acts in the we-mode. We-mode acting involves solidary collective acting by the members according to the ethos of the group.

In this note I have responded to the main criticisms (1)-(3) by Corlett and Strobel (2017) and argued that they do not damage my theory, at least not in any serious way. I wish to thank my critics for their thoughtful critical points.



Corlett, A. and Strobel, J. “Raimo Tuomela’s Social Ontology.” Social Epistemology 31, no. 6 (2017): 1-15.

Schmid, H.-B. “On not doing one’s part.” Pp. 287-306, in Psarros, N., Schule-Ostermann, K. (eds.) Facets of Sociality. Frankfurt: Ontos Verlag, 2007

Tuomela, R. The Importance of Us: A Philosophical Study of Basic Social Notions. Stanford: Stanford University Press, 1995.

Tuomela, R. The Philosophy of Sociality, Oxford: Oxford University Press, 2007.

Tuomela, R. Social Ontology, New York: Oxford University Press, 2013.

Tuomela, R. and Mäkelä, P. “Group agents and their responsibility.” Journal of Ethics 20 (2016): 299-316.

Tuomela, R. and Tuomela, M. “Acting As a Group Member and Collective Commitment”, Protosociology 18, (2003): 7-65.

[1] Acknowledgement. I wish to thank my wife Dr. Maj Tuomela for important help in writing this paper.

[2] See Tuomela (2007) and (2013) for the above notions.

[3] I speak of utilities only in game-theoretic contexts. (My moral views are closer to pragmatism and functionalism than utilitarianism.)

[4] See e.g. Tuomela and Mäkelä (2016) for a group’s and its members’ moral responsibility. Also see pp. 37 and 41 of Tuomela (2013) and chapter 10 of Tuomela (2007).

[5] As to dissidents I have discussed the notion briefly in my 1995 book and in a paper published in 2003 with Maj Tuomela (see the references). Furthermore, Hans Bernhard Schmid discusses dissidents in we-mode groups in his article “On not doing one’s part” in Psarros and Schulte-Ostermann (eds.) Facets of Sociality, Ontos Verlag, 2007, pp. 287-306.

[6] Groups that are dependent on an external agent (e.g. a dictator, the owner of a company or an officer commanding an army unit) may lack the freedom to decide about what they should be doing and which positions they should have, and the members may be forced to join a group that they cannot exit. My notion of “autonomous groups” refers to groups that are free to decide about their own matters, e.g. entrance and exit (possibly including sanctions). Personal moral autonomy in such groups is retained by the possibility of applying for entrance and exit upon taking on possible sanctions, influencing the ethos, or protesting. The upshot is that a person functioning in a paradigmatic we-mode group should obey the possible restrictions that the group has set for exiting the group and be willing to suffer agreed-upon sanctions. Such a we-mode group is assumed to have coercion-free entrance to the group and also free exit from it – as specified in Appendix 1 to Chapter 2 of my 2013 book. This means that no external authority coerces people to join and to remain in the group. A completely different matter is the case of a Mafia group or an army unit; the latter may be a unit that cannot be freely entered and exited. Even in these cases people may act in the we-mode. In some non-autonomous groups, such as a business company, the shareholders decide about all central matters and the workers get paid. Members may enter if they are chosen to join, and may exit only according to specific rules.

Author Information: Line Edslev Andersen, Vrije Universiteit Brussel,

Andersen, Line Edslev. “Community Beliefs and Scientific Change: Response to Gilbert.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 37-46.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:

Image credit: NASA Goddard Space Flight Center, via flickr

Margaret Gilbert (2017) has provided an engaging response to my paper on her account of joint commitment and scientific change (Andersen 2017). Based on Donald MacKenzie’s (1999) sociohistory of a famous mathematical proof, my paper offered an argument against her account of why a scientist’s outsider status can be effective in enabling scientific change (Gilbert 2000). On her account, scientists have collective beliefs in the sense of joint commitments to particular beliefs. The term ‘collective belief’ is used in this sense in the present paper.[1] When a group of scientists are jointly committed to some belief, they are obligated not to call it into question. According to Gilbert, this makes joint commitments work as a brake on scientific change and gives outsiders an important role in science. Since outsiders to a given scientific community are party to no or relatively few joint commitments of that community, they are less constrained by them. For this reason, outsiders play a central role in bringing about scientific change.

I argued that Gilbert’s account is inherently difficult to test because it requires data that are hard to interpret. At the same time, I pointed out that we have available a simpler explanation of why a scientist’s outsider status can be effective in enabling scientific change: During their education and training, scientists learn to see things in certain ways. If solving some problem requires one to look at things in a different way, scientists with a different educational background will have an advantage.[2] I have become aware that Melinda Fagan (2011, 255-256) also compares these two explanations.[3]

Gilbert’s response to my paper has two main parts. In the first part (Gilbert 2017, 46-48), she argues for the role of collective beliefs in science by considering the role of collective beliefs in everyday life and in the context of education. I discuss her argument in the next section. In the other part (48-49), she suggests that collective beliefs of scientific communities can help explain why some personal beliefs become deeply entrenched in the minds of scientists. I agree with this. Deborah Tollefsen has made the related suggestion that one could respond to my argument by claiming that the degree to which some personal beliefs are entrenched in the minds of scientists cannot be explained without collective beliefs of scientific communities. This is an interesting suggestion. Here, however, I take a simpler approach to the question of whether scientific communities have collective beliefs.[4]

As mentioned, the remainder of this paper begins with a discussion of Gilbert’s argument (section 1). I then address the question of whether (section 2) and to what extent (section 3) scientific communities in particular (as opposed to, for example, research teams) have collective beliefs. On this basis, I assess the potential of collective beliefs to work as a brake on scientific change (section 4).

Collective Beliefs in Science

On Gilbert’s account, a collective belief is a joint commitment to believe some proposition p (e.g., Gilbert 1987). A joint commitment to believe p is the commitment of a group as one body to believe p; i.e., it is the commitment of a group to emulate, by virtue of the actions of all, a single believer of p. Group members are thus “to speak and act as if they are of ‘one mind’ on the subject” (Gilbert 2017, 46). According to Gilbert, this implies that the joint commitment to believe p is persistent in the sense that it can only be rescinded with the concurrence of all the parties (Gilbert 2014, 118). Each of them has an obligation towards the others to act in accordance with the joint commitment to believe p and not, for example, express contrary beliefs. If someone violates the joint commitment, the others gain the standing to rebuke her and may even ostracize her (Gilbert 2000, 40). By virtue of these features, collective beliefs can act as powerful behavioral constraints and work as a brake on scientific change on Gilbert’s account. Speaking about scientists’ collective beliefs, she thus writes that they can have as far-reaching consequences as “inhibiting one from pursuing spontaneous doubt about the group view, inclining one to ignore evidence that suggests the falsity of that view, and so on” (Gilbert 2000, 44-45).

While collective beliefs can be very persistent, they are quite easily formed. A joint commitment to believe p can be formed without all or most or even any of the group members personally believing p. What matters is that they have expressed their personal willingness to let p stand as a belief of the group—if only tacitly—and this is common knowledge between them. On Gilbert’s (2017, 46-48) account, this happens all the time in everyday life and in the context of education. One of the examples she gives in her response to my paper is that of two people having an informal conversation. One of them says “What a lovely day!” and the other responds “Yes, indeed!” This establishes the belief that it is a lovely day as a collective belief the two have. Gilbert reasons that, if this is all it takes for a collective belief to form, collective beliefs must play a role in science as well: “If collective beliefs are prevalent in human life generally, and if, in particular, they are the predictable outcome of conversations and discussions on whatever topic, we can expect many collective beliefs to be established among scientists in the various specialties as they talk about their work in small and large groups” (Gilbert 2017, 47).

I agree with Gilbert that, if collective beliefs play this type of role in everyday life, they must play a role in science. I am also convinced by her claim that joint commitments are generally ubiquitous. However, when Gilbert describes how easily collective beliefs are formed in everyday life and in the context of education, she gives examples of smaller groups forming collective beliefs, such as the group of people attending a meeting of a large literary society or a student and a teacher having an interchange. By contrast, when she, I, and others discuss the potential of collective beliefs to work as a brake on scientific change, we are referring to collective beliefs of whole communities. This is relevant, since there seems to be a difference between how easily smaller groups in science (such as research teams) and whole scientific communities establish collective beliefs.[5] In fact, I argue below that it is at least rare for scientific communities to form collective beliefs. This is where I disagree with Gilbert.

Like the previous work on the potential of collective beliefs to work as a brake on scientific change, the present paper focuses on collective beliefs of whole communities. I thus leave open the possibility that collective beliefs of smaller groups in science can work as a brake on scientific change. However, Hanne Andersen and I have examined the instability of joint commitments of smaller groups and argued that they can rather easily be dissolved (Andersen and Andersen 2017).[6] This limits the potential of collective beliefs of smaller groups in science to work as brakes on scientific change.

The Existence of Community Beliefs

Having explained Gilbert’s account of collective belief, I will now examine the question of whether communities in science have collective beliefs. A consensus established at a consensus development conference is a good candidate for being a collective belief of a scientific community. Paul Thagard (1998a, b, 1999) attended the 1994 consensus conference on methods of diagnosing and treating ulcers as part of his work on the bacterial theory of ulcers.[7] In a later paper, Thagard (2010, 280) addresses Gilbert’s account of collective belief, stating that collective beliefs of scientific communities strike him as “rather rare,” but that he believes consensus conferences establish such collective beliefs. Consensus conferences are themselves rare and in most disciplines non-existent, but in medical research, Thagard explains, “the need for a consensus is much more acute, since hypotheses such as the bacterial theory of ulcers have direct consequences for the treatment of patients” (1998b, 335).

The consensus conference Thagard attended was conducted by the U.S. National Institutes of Health. He describes the purpose of their consensus conferences as being “to produce consensus statements on important and controversial statements in medicine” that are useful to the public and health professionals (1998b, 335).[8] A consensus statement is prepared by a panel of experts after deliberation. Given the controversial nature of the subject, the members of the panel most likely do not all personally agree with everything in the statement, but the statement expresses the view that they have agreed to let stand as the view of the panel.[9] In this paper, I assume that when members of a group agree to let a view stand as the view of the group, a joint commitment is involved, so the view of the panel involves a joint commitment. It is, in other words, a collective belief.

The question I am interested in here is whether a consensus statement sometimes expresses not only the collective belief of the consensus development panel, but the collective belief of a whole community of scientists. Let us consider the consensus conference Thagard attended. The consensus development panel was chosen to represent a community—an appropriately delineated medical research community—in the following sense. Its members were chosen by the planning committee, whose chair is required to be an authority, “a knowledgeable and prestigious medical figure,” and to be neutral in the sense of “not identified with strong advocacy of the conference topic or with relevant research” (Thagard 1998b, 336). The fourteen people on the consensus development panel were chosen for various kinds of expertise and for their neutrality in the stated sense. Finally, the statement of the panel was based on presentations at the public consensus conference by 22 researchers representing different points of view; contributions from conference attendees during open discussion periods; and closed deliberations within the panel. Sometimes, although apparently not in this case, a draft statement is published online for public comment (e.g., NN 2013, 1).

The first page of the statement tells us that it “provides a ‘snapshot in time’ of the state of knowledge on the conference topic,” implying that these are early times and work remains to be done (NN 1994). This proviso limits the potential of the collective belief expressed in the statement to work as a brake on scientific change, for it must limit the ability of the collective belief to incline scientists to ignore evidence that suggests the falsity of the belief. But the proviso does not lessen the potential of the collective belief for being the collective belief of a whole community. It seems to me plausible to say that the members of the community in question in 1994 expressed their willingness (most of them tacitly) to let a belief stand as the belief of the community at that point in time.[10] This is due to the relative neutrality of the panel, the diversity of the speakers, and the fact that members of the community are given the opportunity to have their voice heard.

A similar point can be made about certain group views that are established in a similar way, but are about something else. I have in mind certain codes for responsible conduct of research. For example, the European Mathematical Society (EMS) introduced a Code of Practice in 2012 (NN 2013, 12). This code may be said to express the view of the EMS in a way similar to how the 1994 consensus statement may be said to express the view of a community of medical researchers. The Code of Practice was prepared by the Ethics Committee of the EMS and approved by the EMS council, which in total consists of about 100 member-elected “delegates from all of the national societies which are members of the EMS” and “delegates representing the individual members of the Society.” The code will apparently be considered for revision every three years in light of comments received by the chair of the ethics committee from members of the EMS.

While I have addressed the question of whether communities of scientists have collective beliefs, Wray (2007) addresses the broader question of whether they have beliefs in a non-summative sense. When a group believes p in a summative sense, this just amounts to all or most of the group members personally believing p. Wray argues that scientific communities, as opposed to research teams, are not capable of having beliefs in a non-summative sense. He draws on Emile Durkheim’s distinction between societies characterized by organic solidarity and societies characterized by mechanical solidarity (Wray 2007, 341-342). Wray writes that a group or community is characterized by organic solidarity when its members “depend upon the proper functioning of the other members” (Wray 2007, 342), as the parts of an organism depend on each other, and are organized so as to advance a goal. Groups that are not bound together by organic solidarity are instead bound together by similar thoughts and attitudes, that is, by mechanical solidarity. Wray argues that a group must be cohesive in the sense of being characterized by organic solidarity to be capable of having beliefs in a non-summative sense, and that scientific specialty communities and the scientific community as a whole are not cohesive in this sense.

If consensus conferences produce group beliefs that can properly be described as beliefs of whole communities, such beliefs are, strictly speaking, inconsistent with Wray’s account. They would then be produced by community acts characterized by organic solidarity, but such acts seem to be exceptional and do not speak against the claim that communities in science are generally characterized by mechanical solidarity. On Gilbert’s account, the non-summative group beliefs Wray discusses imply joint commitments; Wray’s account is neutral on this question. His argument is that communities in science do not form non-summative beliefs in general, so it implies that they do not form joint commitments to beliefs.[11] In the next section, I give an additional argument for this particular conclusion.

The Frequency of Community Beliefs

In their everyday practice, scientists often express a view of a research team they are part of, for example in publications, in conference presentations, or in conversation with other scientists. They less frequently express a view of one of the communities they belong to, regardless of how we conceive ‘community view’ here. It seems to be rather rare that the typical scientist is prompted to say, “We as a scientific community believe…” That the motivation to express group views is relatively low at the community level may suggest that the motivation to actively establish group views at the community level is relatively low as well. However, the following argument does not depend on it.

The argument focuses on collective scientific beliefs of communities, since these are the ones that work as brakes on scientific change on Gilbert’s account. In the next section, I return to the topic of community beliefs about responsible conduct of research. So let us consider a Gilbertian scientific community belief p that has just been formed. p would have to be somehow broadly relevant in the community; the community members must, after all, be aware of a joint commitment to believe p for such a commitment to exist. Furthermore, in order to agree to let a belief stand as the belief of the group, the group members must have some motivation to do so. They must do so as a means to realizing a goal (see Wray 2001). Sometimes the members of a community will be motivated to let a belief stand as the belief of the community although they personally have very different beliefs on the matter. For example, in the above case of the consensus conference, there was a need to present a community belief to the public and health professionals. But this is rare.

If the proposition p is broadly relevant in the community and the community belief that p is “unforced” in the sense that it has not been formed quickly under external pressure from the public or others, experts will have discussed and tested whether p until there is broad agreement among them. If the experts broadly agree that p is well established by the evidence, the other community members are likely to believe p because the experts do. Hence, at the time of being established, the collective belief p will reflect what a large majority of the community members personally believe, except in rare cases similar to the consensus conference example considered above. This fits well with Gilbert’s (2000) account of collective beliefs and scientific change. The negative potential of collective beliefs, as she describes it, is associated not with their being established in spite of recalcitrant evidence, but with their being maintained in spite of recalcitrant evidence discovered later.

This raises the question of how members can be motivated to jointly commit to a belief they already broadly share. There is already a community belief that p (albeit in a summative sense) that can be presented as such to the public and others. But there may be a reason internal to the community for making the joint commitment. Kristina Rolin (2008) raises the general question of how the members of a community are motivated to jointly commit to beliefs. She argues that community members are motivated to jointly commit to background assumptions because individuals can then use these assumptions and remain epistemically responsible even when they do not have the expertise to defend them if they are challenged. They can do so because the joint commitments obligate the relevant experts in the community to defend the assumptions if they are appropriately challenged. As implied by the above, I am unconvinced by Rolin’s premise that all the members of a community would be prepared to jointly commit to the same mere assumption, especially given the obligations and constraints this implies on Gilbert’s account. It is unclear how they would determine which background assumptions to commit to.

That it is unclear if and how community members would be motivated to jointly commit to believe p is a serious challenge to the claim that scientific communities form unforced collective beliefs. I believe the challenge may well be insurmountable. But even if we assume that community members are motivated to (and do) form unforced collective beliefs, we have a problem if we want to establish that such community beliefs work as a brake on scientific change. Recall that p is broadly relevant in the community in addition to being widely believed by the community members to be well established by the evidence. Hence, p expresses the sort of view that would make its way into textbooks for students or young researchers entering the subdiscipline[12] or be used widely in further research. If the unforced collective beliefs of scientific communities have the characteristics of being broadly relevant and widely believed, at least for a while, it will be hard to test whether they work as a brake on scientific change. For much relies on such a belief, so it will be unpleasant for community members if recalcitrant evidence turns up, regardless of whether they are jointly committed to the belief. Recalcitrant evidence is thus likely to be met with some skepticism or resistance whether a joint commitment is in place or not. Hence, if joint commitments make it harder than it would already be to abandon such views, this will be hard to detect.

Fagan (2011) defends a similar conclusion. She criticizes certain explanatory arguments for the claim that scientific groups have collective beliefs: it is not the case, she argues, that collective beliefs can explain certain phenomena in science (the inertia of science and the stability of groups in science) that cannot be explained just as well by other means.[13]

Community Beliefs: Brakes on Scientific Change?

If my argument is correct, joint commitments of communities in science are rare and without much potential for working as brakes on scientific change. Collective beliefs developed at consensus conferences appear to sometimes be collective beliefs of whole communities. But, as explained above, their potential for working as a brake on science is limited by the fact that they are very rare and that at least some consensus statements come with the proviso that this is our view at this point in time. Codes for responsible conduct of research may also be good candidates for collective beliefs of communities. But if a community has a collective belief on what are responsible research practices, this limits the potential of any collective scientific belief of that community to work as a brake on scientific change. If evidence recalcitrant to the collective scientific belief turns up, the members of the community are forced to violate one of two joint commitments: by ignoring the recalcitrant evidence, they violate their collective belief about responsible research practices; by heeding it, they violate their collective scientific belief. In such cases, we have no argument for why collective beliefs work as a brake on scientific change by making scientists ignore recalcitrant evidence, unless we can argue that the collective scientific beliefs of a given community are somehow harder to violate than the other joint commitments of the community. Hanne Andersen and I (2017) argue that there are cases in which participants can, due to changes in circumstances, violate a joint commitment without risking rebuke, and that this is a major source of instability of collective beliefs and joint commitments in general. We would expect tension between collective beliefs due to changes in circumstances to be another major source of instability of collective beliefs.

If we instead assume that collective beliefs of communities are as ubiquitous as Gilbert claims, then, on her account, norms in general are also collective beliefs (Gilbert 1999). But then norms in the scientific community and scientific subcommunities, such as the norm of sharing counterevidence with one’s colleagues, are also collective beliefs. If members of a scientific community discover evidence that goes against one of their collective scientific beliefs, they are thus forced to violate either the collective scientific belief or a collective belief about responsible conduct of research. The relevant collective belief(s) about responsible conduct of research may be held by the community in question, the scientific community as a whole, or both. Hence, unless it can be argued that the collective scientific beliefs of a given community are harder to violate than other collective beliefs the community members are party to, it is not clear that collective scientific beliefs, even if ubiquitous, will work as a brake on scientific change.

I would like to end by thanking Gilbert for her inspiring work, which I continue to explore.

Acknowledgements: The author thanks K. Brad Wray for helpful feedback on an earlier draft.

References
Andersen, Hanne. “Joint Acceptance and Scientific Change: A Case Study.” Episteme 7, no. 3 (2010): 248–265.

Andersen, Line Edslev. “Outsiders Enabling Scientific Change: Learning from the Sociohistory of a Mathematical Proof.” Social Epistemology 31, no. 2 (2017): 184–91.

Andersen, Line Edslev, and Hanne Andersen. “The Stability and Instability of Joint Commitment.” Submitted, 2017.

Bird, Alexander. “Social Knowing: The Social Sense of ‘Scientific Knowledge’.” Philosophical Perspectives 24, no. 1 (2010): 23–56.

Bouvier, Alban. “Individual Belief and Collective Beliefs in Science and Philosophy: The Plural Subject and the Polyphonic Subject Accounts.” Philosophy of the Social Sciences 34, no. 3 (2004): 382–407.

Cheon, Hyundeuk. “In What Sense is Scientific Knowledge Collective Knowledge?” Philosophy of the Social Sciences 44, no. 4 (2014): 407–423.

de Ridder, Jeroen. “Epistemic Dependence and Collective Scientific Knowledge.” Synthese 191, no. 1 (2014): 37–53.

Dragos, Chris. “Which Groups Have Scientific Knowledge? Wray vs. Rolin.” Social Epistemology 30, no. 5–6 (2016a): 611–623.

Dragos, Chris. “Justified Group Belief in Science.” Social Epistemology Review and Reply Collective 5, no. 9 (2016b): 6–12.

Fagan, Melinda Bonnie. “Is There Collective Scientific Knowledge? Arguments from Explanation.” The Philosophical Quarterly 61, no. 243 (2011): 247–269.

Gilbert, Margaret. “Modelling Collective Belief.” Synthese 73, no. 1 (1987): 185–204.

Gilbert, Margaret. “Social Rules: Some Problems for Hart’s Account, and an Alternative Proposal.” Law and Philosophy 18, no. 2 (1999): 141–171.

Gilbert, Margaret. “Collective Belief and Scientific Change.” In Sociality and Responsibility, edited by Margaret Gilbert, 37–49. Lanham: Rowman & Littlefield, 2000.

Gilbert, Margaret. Joint Commitment: How We Make the Social World. Oxford: Oxford University Press, 2014.

Gilbert, Margaret. “Scientists Are People Too: Comment on Andersen.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 45–49.

MacKenzie, Donald. “Slaying the Kraken: The Sociohistory of a Mathematical Proof.” Social Studies of Science 29, no. 1 (1999): 7–60.

NN. “Helicobacter pylori in Peptic Ulcer Disease.” NIH Consensus Statement 12, no. 1 (Feb. 7–9, 1994): 1–22.

NN. “Diagnosing Gestational Diabetes Mellitus.” NIH Consensus Statement 29, no. 1 (March 4–6, 2013): 1–30.

NN. “Code of Practice.” Newsletter of the European Mathematical Society 87 (March 2013): 12–15.

Rolin, Kristina. “Science as Collective Knowledge.” Cognitive Systems Research 9, no. 1–2 (2008): 115–124.

Staley, Kent W. “Evidential Collaborations: Epistemic and Pragmatic Considerations in ‘Group Belief’.” Social Epistemology 21, no. 3 (2007): 321–35.

Thagard, Paul. “Ulcers and Bacteria I: Discovery and Acceptance.” Studies in History and Philosophy of Biological and Biomedical Sciences 29, no. 1 (1998a): 107–136.

Thagard, Paul. “Ulcers and Bacteria II: Instruments, Experiments, and Social Interactions.” Studies in History and Philosophy of Biological and Biomedical Sciences 29, no. 2 (1998b): 317–342.

Thagard, Paul. How Scientists Explain Disease. Princeton: Princeton University Press, 1999.

Thagard, Paul. “Explaining Economic Crises: Are There Collective Representations?” Episteme 7, no. 3 (2010): 266–283.

Tollefsen, Deborah, and Rick Dale. “Naturalizing Joint Action: A Process-Based Approach.” Philosophical Psychology 25, no. 3 (2012): 385–407.

Tossut, Silvia. “Which Groups Have Scientific Knowledge? A Reply to Chris Dragos.” Social Epistemology Review and Reply Collective 5, no. 7 (2016): 18–21.

Weatherall, James Owen, and Margaret Gilbert. “Collective Belief, Kuhn, and the String Theory Community.” In The Epistemic Life of Groups: Essays in the Epistemology of Collectives, edited by Michael S. Brady and Miranda Fricker, 191–217. Oxford: Oxford University Press, 2016.

Wray, K. Brad. “Collective Belief and Acceptance.” Synthese 129, no. 3 (2001): 319–333.

Wray, K. Brad. “Who Has Scientific Knowledge?” Social Epistemology 21, no. 3 (2007): 335–345.

Wray, K. Brad. “Collective Knowledge and Collective Justification.” Social Epistemology Review and Reply Collective 5, no. 8 (2016): 24–27.

Wray, K. Brad. “The Impact of Collaboration on the Epistemic Cultures of Science.” In Scientific Collaboration and Collective Knowledge, edited by Thomas Boyer-Kassem, Conor Mayo-Wilson, and Michael Weisberg. Forthcoming from Oxford University Press, 2017.

[1] This is not necessarily a fully appropriate term. There has been some debate on whether groups can be jointly committed to beliefs or whether they can merely be jointly committed to accept claims, a debate started by K. Brad Wray (2001). This paper is neutral on this question.

[2] For convenience, I speak of this as the Kuhnian explanation in the paper, but the paper is not intended as a comparison of Gilbert’s account with Kuhn’s account. I do not mean to argue that we should choose Kuhn’s whole theory of scientific change over Gilbert’s theory of scientific change. This is not clearly stated in the paper. I thank Deborah Tollefsen for pointing this out to me. I do think the question, addressed in Weatherall and Gilbert 2016, of how the work of Gilbert relates to that of Kuhn is an important one.

[3] In her paper, they are compared as alternative explanations of the effects of the dogma of reproductive biology that there is no cell renewal in the ovary.

[4] This question was also examined by Rolin 2008. Wray (2007) started a discussion of the general ability of scientific communities to hold views (Rolin 2008; Cheon 2014; Dragos 2016a, b; Tossut 2016; Wray 2016).

[5] For case studies that support the view that smaller groups in science form collective beliefs, see Bouvier 2004; Staley 2007; and Andersen 2010. For a promising approach to test whether joint commitments exist, see Tollefsen and Dale 2012, which gives an account of how empirical research in cognitive science is important to understanding the nature of shared intention.

[6] Gilbert herself has focused on the persistence of joint commitments and has written very little about the sense in which they lack persistence, but she acknowledges that this is an important topic (Gilbert 2014, 32).

[7] Gilbert’s (2000, 47) first paper on the role of collective beliefs in science was prompted by this work.

[8] This work has recently been taken over by others.

[9] One of the phenomena Gilbert tries to make sense of with her account of joint commitment is cases where people have inconsistent beliefs on some matter and nonetheless let a view stand as the view of the group. Kent Staley (2007) shows that the members of a research team can, and do, do this in epistemically rational ways (see also Wray 2017, 118–119). His argument applies equally well to other groups of scientists.

[10] By contrast, Bird 2010, 10, and de Ridder 2014, 41, state that there is no mechanism of Gilbertian community view formation in science.

[11] Hence, I disagree with Hyundeuk Cheon (2014) who argues that Gilbert and Wray speak about two different types of collective belief. On my interpretation, Gilbert and Wray are examining different questions about non-summative belief rather than different types of collective belief: Wray examines what kinds of groups have beliefs in a non-summative sense, while Gilbert examines how groups have beliefs in a non-summative sense (and argues that they do so by virtue of joint commitments).

[12] I mentioned above that scientists rather infrequently express community views, in whatever sense, but they do so in textbooks.

[13] Fagan argues that the existence of collective beliefs of communities is thus hard to test from their consequences. In the previous section, I made a case for the existence of collective beliefs of scientific communities by looking at how consensus is established at consensus conferences. But this is also hard in the case of “unforced” collective beliefs of communities: it is not clear where we would have to look to observe the process by which they are established, as they are likely not established at a single event. It is also harder to see whether a group belief is a collective belief when it, judging from the personal beliefs of the group members at the time of its establishment, could just as well be a mere summative belief.