
Author Information: Fabien Medvecky, University of Otago, fabien.medvecky@otago.ac.nz.

Medvecky, Fabien. “Institutionalised Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 2 (2019): 15-20.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-46m

A graffiti mural that was, and may even still be, on Maybachufer Strasse in Kreuzberg, Berlin.
Image by Igal Malis via Flickr / Creative Commons

 

This article responds to Matheson, Jonathan, and Valerie Joly Chock. “Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 1-9.

In a recent paper, I argued that science communication, the “umbrella term for the research into and the practice of increasing public understanding of and public engagement with science”, is epistemically unjust (Medvecky, 2017). Matheson and Chock disagree. Or at least, they disagree with enough of the argument to conclude that “while thought provoking and bold, Medvecky’s argument should be resisted” (Matheson & Chock, 2019). This has provided me with an opportunity to revisit some of my claims, and more importantly, to make explicit those claims that I had failed to make clear and present in the original paper. That’s what this note will do.

Matheson and Chock’s concern with the original argument is two-fold. Firstly, they argue that the original argument sinned by overreaching, and secondly, that while there might be credibility excess, such excess should not be viewed as constituting injustice. I’ll begin by outlining my original argument before tackling each of their complaints.

The Original Argument For the Epistemic Injustice of Science Communication

Taking Matheson and Chock’s formal presentation of the original argument, it runs as follows:

1. Science is not a unique and privileged field (this isn’t quite right. See below for clarification)

2. If (1), then science communication creates a credibility excess for science.

3. Science communication creates a credibility excess for science.

4. If (3), then science communication is epistemically unjust.

5. Science communication is epistemically unjust.

The original argument claimed that science was privileged in the way that its communication is institutionalised through policy and practices in a way not granted to other fields, and that fundamentally,

While there are many well-argued reasons for communicating, popularizing, and engaging with science, these are not necessarily reasons for communicating, popularizing, and engaging only with science. Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialized treatment. This uniqueness creates a credibility excess for science as a field. (italics added)

Two clarificatory points are important here. Firstly, while Matheson and Chock run with premise 1, they do express some reservation. And so would I if this were the way I’d spelled it out. But I never suggested that there is nothing unique about science. There undoubtedly is, usually expressed in terms of producing especially reliable knowledge (Nowotny, 2003; Rudolph, 2014).

My original argument was that this isn’t necessarily enough to warrant special treatment when it comes to communication. As I stated then, “What we need is a reason for why reliable knowledge ought to be communicated. Why would some highly reliable information about the reproductive habits of a squid be more important to communicate to the public than (possibly less reliable) information about the structure of interest rates or the cultural habits of Sufis?” (Italics added)

In the original paper, I explicitly claimed, “We might be able to show that science is unique, but that uniqueness does not relate to communicative needs. Conversely, we can provide reasons for communicating science, but these are not unique to science.” (Medvecky, 2017)

Secondly, as noted by Matheson and Chock, the concern in the original argument revolves around “institutionalized science communication; institutionalized in government policies on the public understanding of and public engagement with the sciences; in the growing numbers of academic journals and departments committed to further the enterprise through research and teaching; in requirements set by funding bodies; and in the growing numbers of associations clustering under the umbrella of science communication across the globe.”

What maybe wasn’t made explicit was the role and importance of this institutionalization, which is directed by government strategies and associated funding policies. Such policies are designed specifically and uniquely to increase public communication of and public engagement with science (MBIE, 2014).

They may mention that science should be read broadly, such as the UK’s A vision for Science and Society (DIUS, 2008) which states “By science we mean all-encompassing knowledge based on scholarship and research undertaken in the physical, biological, engineering, medical, natural and social disciplines, including the arts and humanities”. Yet the policy also claims that “These activities will deliver a coherent approach to increasing STEM skills, with a focus on improved understanding of the link between labour market needs and business demands for STEM skills and the ability of the education system to deliver flexibly into the 21st century.”

STEM (science, technology, engineering and mathematics) is explicitly not a broad view of science; it’s specifically restricted to the bio-physical sciences and associated fields. If science were truly meant broadly, there’d be no need to specify STEM. These policies, including their funding and support, are uniquely aimed at science as found in STEM, and it is this form of institutionalized and institutionally sponsored science communication that is the target of my argument.

With these two points in mind, let me turn to Matheson and Chock’s objections.

The Problem of Overreaching and the Marketplace of Ideas

Matheson and Chock rightly spell out my view when stating that the “fundamental concern is that science communication represents scientific questions and knowledge as more valuable than questions and knowledge in other domains.” What they mistake is what I take issue with. Matheson and Chock claim, “When it comes to scientific matters, we should trust the scientists more. So, the claim cannot be that non-scientists should be afforded the same amount of credibility on scientific matters as scientists”. Of course, who wouldn’t agree with that!

For Matheson and Chock, given their assumption that science communication is equivalent to scientists communicating their science, it follows that it is only reasonable to give special attention to the subject or field one is involved in. As they say,

Suppose that a bakery only sells and distributes baked goods. If there is nothing unique and privileged about baked goods – if there are other equally important goods out there (the parallel of premise (1)) – then Medvecky’s reasoning would have it that the bakery is guilty of a kind of injustice by virtue of not being in the business of distributing those other (equally valuable) goods.

But they’re mistakenly equating science communication with communication by scientists about their science. This suggests both a misunderstanding of my argument and a skewed view of what science communication is.

To tackle the latter first, while some science communication efforts come from scientists, science communication is much broader. Science communication is equally carried out by (non-scientist) journalists, (non-scientist) PR and communication officers, (non-scientist) policy makers, etc. Indeed, some of the most popular science communicators aren’t scientists at all, such as Bill Bryson. So the concern is not with the bakery privileging baked goods, it’s with baked goods being privileged simpliciter.

As discussed in both my original argument and in Matheson and Chock’s reply, my concern revolves around science communication institutionalized through policies and such like. And that’s where the issue is: there is institutionalised science communication, backed by policies with significant funding dedicated to such communication, and such policies exist only for the sciences. Indeed, there are no “humanities communications” governmental policies or funding strategies, for example. Science communication, unlike Matheson and Chock’s idealised bakery, doesn’t operate in anything like a free market.

Let’s take the bakery analogy and its position in a marketplace a little further (indeed, thinking of science communication and where it sits in the marketplace of knowledge fits well). My argument is not that a bakery is being unjust by selling only baked goods.

My argument is that if bakeries were the only stores to receive government subsidies and tax breaks, and were, through government and institutional intervention, granted a significantly better position in the street, then yes, this is unfair. Other goods would fail to have the same level of traction as baked goods and would be unable to compete on a just footing. This is not to say that the bakeries need to sell other goods, but rather, by benefiting from the unique subsidies, baked goods gain a marketplace advantage over goods in other domains, in the same way that scientific knowledge benefits from a credibility excess (i.e. an epistemic marketplace advantage) over knowledge in other domains.

Credibility Excess and Systemic Injustices

The second main objection raised by Matheson and Chock turns on whether any credibility excess science might acquire in this way should be considered an injustice. They rightly point out that “mere epistemic errors in credibility assessments, however, do not create epistemic injustice. While a credibility excess may result in an epistemic harm, whether this is a case of epistemic injustice depends upon the reason why that credibility excess is given.”

Specifically, Matheson and Chock argue that for credibility excess to lead to injustice, this must be systemic and carry across contexts. And according to them, science communication is guilty of no such trespass (or, at the very least, my original argument fails to make the case for such).

Again, I think this comes down to how science communication is viewed. Thinking of science communication in institutionalised ways, as I intended, is indeed systemic. What Matheson and Chock have made clear is that in my original argument, I didn’t articulate clearly enough just how deep the institutionalisation of science communication runs, and how fundamentally this institutionalisation is linked with assumptions of the epistemic dominance of science. I’ll take this opportunity to provide some examples of this.

Most obviously, there are nationally funded policies that aim “to develop a culture where the sciences are recognised as relevant to everyday life and where the government, business, and academic and public institutions work together with the sciences to provide a coherent approach to communicating science and its benefits”; policies backed by multi-million dollar investments from governments (DIISRTE, 2009).

Importantly, there is no equivalent for other fields. Yes, there are funds for other fields (funds for research, funds for art, etc.), but not funds specifically for communicating these or disseminating their findings. And there are other markers of the systemic advantages science holds over other fields.

On a very practical, pecuniary level, funding for research is rarely on a level playing field. In New Zealand, for example, the government’s Research Degree Completion Funding allocates funds to departments upon students’ successfully completing their thesis. This scheme grants twice as much to the sciences as it does to the social sciences, humanities, and law (Tertiary Education Commission, 2016).

In practice, this means a biology department supervising a PhD thesis on citizen science in conservation would, on thesis completion, receive twice the funding that a sociology department supervising the very same thesis would receive. And this simply because one field delivers knowledge under the tag of science, while the other under the banner of the humanities.

At a political level, the dominance of scientific knowledge is also evident. While most countries have a Science Advisor to the President or Chief Science Advisor to the Prime Minister, there is no equivalent “Chief Humanities Advisor”. And the list of discrepancies goes on, with institutionalised science communication a key player. Of course, for each of these examples of where science and scientific knowledge benefit over other fields, some argument could be made for why this or that case does indeed require that science be treated differently.

But this is exactly why the credibility excess science benefits from is epistemically unjust; because it’s not simply ‘a case here to be explained’ and ‘a case there to be explained’. It’s systemic and carries across contexts. And science communication, by being the only institutionalised communication of a specific knowledge field, maintains, amplifies, and reinforces this epistemic injustice.

Conclusion

When I argued that science communication was epistemically unjust, my claim was directed at institutionalised science communication, with all its trimmings. I’m grateful to Matheson and Chock for inviting me to re-read my original paper and see where I may have failed to be clear, and to think more deeply about what motivated my thinking.

I want to close on one last point Matheson and Chock brought up. They claimed that it would be unreasonable to expect science communicators to communicate other fields. This was partially in response to my original paper where I did suggest that we should move beyond science communication to something like ‘knowledge communication’ (though I’m not sure exactly what that term should be, and I’m not convinced ‘knowledge communication’ is ideal either).

Here, I agree with Matheson and Chock that it would be silly to expect those with expertise in science to be obliged to communicate more broadly about fields beyond their expertise (though some of them do). The obvious answer might be to have multiple branches of communication institutionalised and equally supported by government funding, by advisors, etc: science communication; humanities communication; arts communication; etc. And I did consider this in the original paper.

But the stumbling block is scarce resources, both financially and epistemically. Financially, there is a limit to how much governments would be willing to fund such activities, so having multiple branches of communication would become a deeply political ‘pot-splitting’ issue, and there, the level of injustice might be even more explicit. Epistemically, there is only so much knowledge that we, humans, can process. Simply multiplying the communication of knowledge for the sake of justice (or whatever it is that ‘science communication’ aims to communicate) may not, in the end, be particularly useful without some concerted and coordinated view as to what the purpose of all this communication is.

In light of this, there is an important question for us in social epistemology: as a society funding and participating in knowledge-distribution, which knowledge should we focus our ‘public-making’ and communication efforts on, and why? Institutionalised science communication initiatives assume that scientific knowledge should hold a special, privileged place in public communication. Perhaps this is right, but not simply on the grounds that “science is more reliable”. There needs to be a better reason. Without one, it’s simply unjust.

Contact details: fabien.medvecky@otago.ac.nz

References

Tertiary Education Commission. (2016). Performance-Based Research Fund (PBRF) User Manual. Wellington, New Zealand: Tertiary Education Commission.

DIISRTE. (2009). Inspiring Australia: A national strategy for engagement with the sciences.  Canberra: Commonwealth of Australia.

DIUS. (2008). A vision for Science and Society: A consultation on developing a new strategy for the UK: Department for Innovation, Universities, and Skills London.

Matheson, J., & Chock, V. J. (2019). Science Communication and Epistemic Injustice. SERRC, 8(1).

MBIE. (2014). A Nation of Curious Minds: A national strategic plan for science in society.  Wellington: New Zealand Government.

Medvecky, F. (2017). Fairness in Knowing: Science Communication and Epistemic Justice. Science and Engineering Ethics. doi: 10.1007/s11948-017-9977-0

Nowotny, H. (2003). Democratising expertise and socially robust knowledge. Science and Public Policy, 30(3), 151-156. doi: 10.3152/147154303781780461

Rudolph, J. L. (2014). Why Understanding Science Matters: The IES Research Guidelines as a Case in Point. Educational Researcher, 43(1), 15-18. doi: 10.3102/0013189X13520292

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “Human Nature in the Post-Truth Age.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 36-38.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-45C

Image by Bryan Ledgard via Flickr / Creative Commons

 

We have come a long way since Leslie Stevenson published Seven Theories of Human Nature in 1974. Indeed, Stevenson’s critical contribution enlisted the views of Plato, Christianity, Marx, Freud, Sartre, Skinner, and Lorenz to analyze and historically contextualize what the term could mean.

By 2017, a seventh edition is available, now titled Thirteen Theories of Human Nature, and it contains chapters on Confucianism, Hinduism, Buddhism, Plato, Aristotle, the Bible (instead of Christianity), Islam, Kant, Marx, Freud, Sartre, Darwinism, and feminism (with the help of David Haberman, Peter Matthews, and Charlotte Witt). One wonders how many more theories or views can be added to this laundry list; perhaps with an ever-increasing list of contributors to the analysis and understanding of human nature, a new approach might be warranted.

How The Question Is Contested Today

This is where Maria Kronfeldner’s What’s Left of Human Nature? A Post-Essentialist, Pluralist, and Interactive Account of a Contested Concept (2018) enters the scene. This scene, to be sure, is fraught with sexism and misogyny, speciesism and racism, and an unfortunately long history of eugenics around the world. The recent white supremacist eruptions under President Trump’s protection, if not outright endorsement, are so worrisome that any level-headed (or Kronfeldner’s analytic) guidance is a breath of fresh air, perhaps an essential disinfectant.

Instead of following the rhetorical vitriol of right-wing journalists and broadcasters or the lame argumentations of well-meaning but ill-informed sociobiologists, we are driven down a philosophical path that is scholarly, fair-minded, and comprehensive. If one were to ask a naïve or serious question about human nature, this book is the useful, if at times analytically demanding, source for an answer.

If one were to encounter the prevailing ignorance of politicians and television or radio pundits, this book is the proper toolkit from which to draw sharp tools with which to dismantle unfounded claims and misguided pronouncements. In short, in Trump’s post-truth age this book is indispensable.

But who really cares about human nature? Why should we even bother to dissect the intricacies of this admittedly “contested concept” rather than dispense with it altogether? Years ago, I confronted Robert Rubin (former Goldman Sachs executive and later Treasury Secretary in the Clinton Administration) in a lecture he gave after retirement about financial policies and markets. I asked him directly about his view of human nature and his response was brief: fear and greed.

I tried to push him on this “view” and realized, once he refused to engage, that this wasn’t a view but an assumption, a deep presupposition that informed his policy making, that influenced everything he thought was useful and even morally justifiable (for a private investment bank or the country as a whole). All too often we scratch our heads in wonder about a certain policy that makes no sense or that is inconsistent with other policies (or principles) only to realize that a certain pre-commitment (in this sense, a prejudice) accompanies the proposed policy.

Would making explicit presuppositions about human nature clarify the policy or at least its rationale? I think it would, and therefore I find Kronfeldner’s book fascinating, well-argued, and hopefully helpful outside insulated academic circles. Not only can it enlighten the boors, but it could also make critical contributions to debates over all things trans (transhumanism, transgenderism).

Is the Concept of Essence Useful Anymore?

In arguing for a post-essentialist, pluralist, and interactive account of human nature, Kronfeldner argues for eliminating the “concept of an essence,” broadening its conceptual reach with corresponding “three different kinds” of human nature, and that “nature and culture interact at the developmental, epigenetic, and evolutionary levels” as well as the ongoing “explanatory looping effects” of human nature. (xv)

Distinguishing between explaining human nature and human nature, the author has chosen to focus on the latter “which is an analytic and reflective issue about what ‘having a nature’ and ‘something being due to nature’ mean.” (xvi) Instead of summarizing the intricacies of all the arguments offered in the book, suffice here to highlight, from the very beginning of the book, one of the author’s cautionary remarks: “Many consider the concept of human nature to be obsolete because they cannot envision such an interactive account of the fixity aspect. It is one of the major contributions of this book to try to overcome this obstacle.” (xvii)

And indeed, this book does overcome the simple binary of either there are fixed traits of humanity to which we must pay scientific tribute or there are fluid feedback loops of influence between nature and nurture to which we must pay social and moral attention. Though the former side of the binary is wedded to notions of “specificity, typicality, fixity, and normalcy” for all the right ethical reasons of protecting human rights and equal treatment, the price paid for such (linguistic and epistemic) attachment may be too high.

The price, to which Kronfeldner returns in every chapter of the book, is “dehumanization”—the abuse of the term (and concept) human nature in order to exclude rather than include members of the human species.

In her “eliminativist perspective” with respect to the concept of human nature, Kronfeldner makes five claims which she defends brilliantly and carefully throughout the book. The first relates to how little the “sciences” would lose from not using the term anymore; the second is that getting rid of essentialism alone will not do away with dehumanization; the third suggests that though dehumanization may not be eliminated, post-essentialism will be helpful to “minimize” it; the fourth claim is that “the question about elimination versus revision of the terminology used is actually a matter of values (rather than facts)”; and the fifth claim relates to the “precautionary principle” advocated here. (231)

The upshot of this process of elimination in the name of reducing dehumanization is admittedly as much political as epistemic, social and cultural as moral. As Kronfeldner says: “Even if one gets rid of all possible essentialist baggage attached to human nature talk, and even if one gets rid of all human nature talk whatsoever, there is no way to make sure that the concept of being or becoming human gets rid of dehumanization. Stripping off essentialism and the language inherited from it won’t suffice for that.” (236) So, what will suffice?

Throwing the Ladder Away

At this juncture, Kronfeldner refers to Wittgenstein: “The term human nature might well be a Wittgensteinian ladder: a ladder that we needed to arrive where we are (in our dialectic project) but that we can now throw away.” (240) This means, in short, that “we should stop using the term human nature whenever possible.” (242) Easier said than done?

The point that Kronfeldner makes repeatedly is that simply revising the term or using a different one will not suffice; replacing one term with another or redefining the term more carefully will not do. This is not only because of the terminological “baggage” to which she alludes, but perhaps, more importantly, because this concept or term has been a crutch scientists and policy makers cannot do without. Some sense of human nature informs their thinking and their research, their writing and policy recommendations (as my example above illustrates).

In a word, is it possible to avoid asking: what are they thinking about when they think of human conduct? What underlying presuppositions do they bring to their respective (subconscious?) ways of thinking? As much as we may want to refrain from talking about human nature as an outdated term or a pernicious concept that has been weaponized all too often in a colonial or racist modality, it seems to never be far away from our mind.

In the Trumpist age of white supremacy and the fascist trajectories of European nationalism, can we afford to ignore talk about human nature? Worse, can we ignore the deliberate lack of talk of human nature, seeing, as we do, its dehumanizing effects? With these questions in mind, I highly recommend spending some time with this book, ponderous as it may seem at times, and crystal clear as it is at others. It should be considered for background information by social scientists, philosophers, and politicians.

Contact details: rsassowe@uccs.edu

References

Kronfeldner, Maria. What’s Left of Human Nature? A Post-Essentialist, Pluralist, and Interactive Account of a Contested Concept. Cambridge, MA: MIT Press, 2018.

Author Information: Matthew R. X. Dentith, Institute for Research in the Humanities, University of Bucharest, m.dentith@episto.org.

Dentith, Matthew R. X. “Between Forteana and Skepticism: A Review of Bernard Wills’ Believing Weird Things.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 48-52.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-43y

Image by David Grant via Flickr / Creative Commons

 

Sometimes, when it is hard to review a book, it is tempting to turn in some kind of personal reflection, one that demonstrates why the reviewer felt disconnected from the text they were reviewing. This review of Bernard N. Wills’ Believing Weird Things – which I received three months ago, and have spent quite a bit of time thinking about in the interim – is just such a review-cum-reflection, because I am not sure what this book is about, nor who its intended audience is.

According to the blurb on the back Believing Weird Things is a response to Michael Shermer’s Why People Believe Weird Things (Henry Holt and Company, 1997). Shermer’s book is one I know all too well, having read and reread it when I started work on my PhD. At the time the book was less than ten years old, and Shermer and his cohort of Skeptics (spelt with a ‘K’ to denote that particular brand of sceptical thought popular among (largely) non-philosophers in the U.S.) were considered to be the first and final word on the rationality (more properly, the supposed irrationality) of belief in conspiracy theories.

Given I was working on a dissertation on the topic, getting to grips with the arguments against belief in such theories seemed crucial, especially given my long and sustained interest in what you might call the contra-philosophy of Skepticism, the work of Charles Fort.

Times for the Fortean

Fort (who Wills mentions in passing) was a cantankerous collector and publisher of strange and inconvenient phenomena. His Book of the Damned (Boni and Liveright, 1919) is an early 20th Century litany of things which seemed to fall outside the systemic study of the world. From rains of frogs, to cities floating in the sky, Fort presented the strange and the wonderful, often without comment. When he did dare to theorise about the phenomena he cataloged, he often contradicted his previous theories in favour of new ones. Scholars of Fort think his lack of a system was quite deliberate: Fort’s damned data was meant to be immune to scientific study.

Fort was hardly a known figure in his day, but his work has gained fans and adherents, who call themselves Forteans and engage in the study of Forteana. Forteans collect and share damned data, from haunted physics laboratories, to falls of angel hair. Often they theorise about what might cause these phenomena, but they also often don’t dispute other interpretations of the same ‘damned data.’

John Keel, one of the U.S.’s most famous Forteans (and who, if he did not invent the term ‘Men in Black’, at least popularised their existence), had a multitude of theories about the origin of UFOs and monsters in the backwoods of the U.S., which he liberally sprinkled throughout his works. If you challenged Keel on what you thought was an inconsistency of thought he would brush it off (or get angry at the suggestion he was meant to be consistent in the first place).

I was a fan of Forteana without being a Fortean: I fail the Fortean test of tolerating competing hypotheses, preferring to stipulate terms whilst encouraging others to join my side of the debate. But I love reading Forteana (it is a great source of examples for the social epistemologist), and thinking about alternative interpretations. So, whilst I do not think UAP (unidentified aerial phenomena – the new term for UFO) are creatures from another dimension, I do like thinking about the assumptions which drive such theories.

Note here that I say ‘theories’ quite deliberately: any student of Forteana will quickly become aware that modern Forteans (contra Fort himself) are typically very systematic about their beliefs. It is just that often the Fortean is happy to be a systemic pluralist, happily accepting competing or complementary systems as equally possible.

Weird and Weirder

Which brings me back to Believing Weird Things. The first section concerns beliefs people like Shermer might find weird but Wills argues are reasonable in the context under which they developed. Wills’ interest here is wide, taking in astrology, fairies, and why he is not a Rastafarian. Along the way he contextualises those supposedly weird beliefs and shows how, at certain times or in certain places, they were the product of a systemic study of the world.

Wills points out that a fault of Skepticism is a lack of appreciation for history: often what we now consider rational was once flimflam (plate tectonics), and what was systemic and rational (astrology) is today’s quackery. As Wills writes:

The Ancients do not seem to me to be thinking badly so much as thinking in an alien context and under different assumptions that are too basic to admit evaluation in the ordinary empirical sense (which is not to say they admit of no evaluation whatsoever). Further, there are many things in Aristotle and the Hebrew Bible which strike me as true even though the question of ‘testing’ them scientifically and ‘skeptically’ is pretty much meaningless. In short, the weird beliefs I study are at minimum intelligible, sometimes plausible and occasionally true. [4]

Indeed, the very idea which underpins Shermer’s account, ‘magical thinking,’ seems to fail the skeptical test: why, like Shermer, would you think it is some hardwired function rather than culturally situated? But more importantly, how is magical thinking any different from any other kind of thinking?

This last point is important because, as others have argued (including myself) many beliefs people think are problematic are, when looked at in context with other beliefs, either not particularly problematic, or no more problematic than the beliefs we assume are produced rationally. The Psychology of Religion back in the early 20th Century is a good example of this: when psychologists worried about religious belief started looking at the similarities in belief formation between the religious and the non-religious, they started to find the same kind of ‘errors’ in irreligious people as well.

In the same respect, the work in social psychology on belief in conspiracy theories seems to be suffering the same kind of problem today: it’s not clear that conspiracy theorists are any less (or more) rational than the rest of us. Rather, often what marks out the difference in belief are the different assumptions about how the world is, or how it works. Indeed, as Wills writes:

Many weird ideas are only weird from a certain assumed perspective. This is important because this assumed perspective is often one of epistemic and social privilege. We tend to associate weird ideas with weird people we look down upon from some place of superior social status. [10]

The first section of Believing Weird Things is, then, possibly the best defence of a kind of Fortean philosophy one could hope for. Yet that is also an unfair judgement, because thinking of Believing Weird Things as a Fortean text is just my imposition: Fort is mentioned exactly once, and only in a footnote. I am only calling this a tentatively Fortean text because I am not sure who the book’s audience is. Ostensibly – at least according to the blurb – it is meant to be a direct reply to Shermer’s Why People Believe Weird Things. But if it is, then it is twenty years late: Why People Believe Weird Things was published in 1997.

Not just that, but whilst Believing Weird Things deals with a set of interesting issues Shermer did not cover (yet ought to have), almost everything which makes up the reply to Why People Believe Weird Things is to be found in the Introduction alone. Now, I’d happily set the Introduction as a reading in a Critical Thinking class or elementary Epistemology class. However, I could not see much use in setting the book as a whole.

What’s Normal Anyway?

Which brings us to the second half of Believing Weird Things. Having set out why some weird beliefs are not that weird when thought about in context, Wills sets out his reasons for thinking that beliefs which aren’t – in some sense – considered weird ought to be. The choice of topics here is interesting, covering Islamophobia, white privilege, violence and the proper attitude towards tolerance and toleration in our polities.

But it invites the question (again) of who his intended audience is meant to be. For example, I also think Islamophobia, racism, and violence are deeply weird, and it worries me that some people still think they are sensible responses. But if Wills is setting out to persuade the other half of the debate, the racists, the bigots, and the fans of violence, then I do not think he will have much luck, as his discussions never seem to get much further than “Here are my reckons!”

And some of those reckons really need more arguments in favour of them.

For example, Wills brings out the old canard that religious beliefs and scientific beliefs are one and the same (presented as ‘religious faith’ and ‘scientific faith’). Not just that, but, in chapter 6, he talks about the things ‘discovered’ by religion. These are presented as being on a par with discoveries in the sciences. Yet aren’t the things discovered by religion (‘human beings must suffer before they learn. … existence is suffering’ [48]) really the ‘discoveries’ of, say, philosophers working in a religious system? And aren’t many of these discoveries just stipulations, or religious edicts?

This issue is compounded by Wills’ specification that the process of discovery for religious faith is hermeneutics: the interpretation of religious texts. But that invites even more questions: if you think the gods are responsible for both the world and certain texts in the world, you could imagine hermeneutic inquiry to be somehow equivalent to scientific inquiry, but if you are either doubtful of the gods, or doubtful about the integrity of the gods’ prophets, then there is much room to doubt there is much of a connection at all between ‘faith’ in science and faith in scripture.

Another example: in chapter 8, Wills states:

Flat-Earthers are one thing but Birthers, say, are quite another: some ideas do not come from a good place and are not just absurd but pernicious. [67]

Now, there is an argument to be had about the merits (or lack thereof) of the Flat Earth theory and the thesis Barack Obama was not born in the U.S. Some might even claim that the Flat Earth theory is worse, given that belief might entail thinking a lot of very disparate institutions, located globally, are in on a massive cover-up. The idea Barack Obama is secretly Kenyan has little effect on those of us outside the U.S. electoral system.

None of this is to say there aren’t decent arguments to be had about these topics. It is, instead, to say that often these positions are stipulated. As such, the audience for Believing Weird Things seems to be people who agree with Wills, rather than an attempt by Wills to change hearts and minds.

How to Engage With Weird Beliefs

Which is not to say that the second half of the book lacks merit; it just lacks meat. The chapters on Islamophobia (chapter 8) and racism (chapter 9) are good: the contextualisation of both Islamophobia and the nature of conflicts in the Middle East are well expressed. But they are not particularly novel (especially if you read the work of left-wing commentators). But even if the chapters are agreeable to someone of a left-wing persuasion, all too often the chapters just end: the chapter on violence (chapter 10), for example, has no clear conclusion other than that violence is bad.

Similarly confused is the chapter on tolerance (chapter 11). But the worst offender is the chapter on the death of Conservatism (chapter 14). This could have been an interesting argument about the present state of today’s politics. But the chapter ends abruptly, and with it, the book. There is no conclusion, no tying together of threads. There’s hardly even any mention of Shermer or skepticism in the second half of Believing Weird Things.

Which brings us back to the question: who is this book for? If the book were just the first half it could be seen as both a reply to Shermer and a hesitant stab at a Fortean philosophy. But the second half of the book comes across more as the author’s rumination on some pertinent social issues of the day, and none of that content seems to advance far beyond ‘Here are my thoughts…’

Which, unfortunately, is also the character of this review: in trying to work out who the book is for I find my thoughts as inconclusive as the text itself. None of this is to say that Believing Weird Things is a bad or terrible book. Rather, it is just a collection of the author’s ruminations. So, unless you happen to be a fan of Wills, there is little to this text which substantially advances the debate over belief in anything.

Contact details: m.dentith@episto.org

References

Fort, Charles. The Book of the Damned, Boni and Liveright, 1919

Shermer, Michael. Why People Believe Weird Things, Henry Holt and Company, 1997

Wills, Bernard N. Believing Weird Things, Minkowski Institute Press, 2018

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “Imagining a Different Political Economy.” Social Epistemology Review and Reply Collective 7, no. 9 (2018): 7-11.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40v

Image by Rachel Adams via Flickr / Creative Commons

 

One cannot ask for a kinder or more complimentary reviewer than Adam Riggio.[1] His main complaint about my book, The Quest for Prosperity, is that “Stylistically, the book suffers from a common issue for many new research books in the humanities and social sciences. Its argument loses some momentum as it approaches the conclusion, and ends up in a more modest, self-restrained place than its opening chapters promised.”

My opening examination of what I see as the misconceptions of some presuppositions used in political economy is a first, necessary step towards an examination of recent capitalist variants (that are heralded as the best prospects for future organization of market exchanges) and for a different approach to political economy offered by the end of the book. Admittedly, my vision of a radically reframed political economy that exposes some taken-for-granted concepts, such as scarcity, human nature, competition, and growth, is an ambitious task, and perhaps, as Riggio suggests, I should attempt a more detailed articulation of the economy in a sequel.

However, this book does examine alternative frameworks, discusses in some detail what I consider misguided attempts to skirt the moral concerns I emphasize so as to retain the basic capitalist framework, and suggests principles that ought to guide a reframed political economy, one more attentive to the moral principles of solidarity and cooperation, empathy towards fellow members of a community, and a mindful avoidance of grave inequalities that are not limited to financial measures. In this respect, the book delivers more than is suggested by Riggio.

On Questions of Character

Riggio also argues that my

templates for communitarian alternatives to the increasingly brutal culture of contemporary capitalism share an important common feature that is very dangerous for [my] project. They are each rooted in civic institutions, material social structures for education, and socialization. Contrary to how [I] spea[k] of these four inspirations, civil rights and civic institutions alone are not enough to build and sustain a community each member of whom holds a communitarian ethical philosophy and moral sense deep in her heart.

This, too, is true to some extent. Just because I may successfully convince you that you are working with misconceptions about human nature, scarcity, and growth, for example, you may still not modify your behavior. Likewise, just because I may offer brilliant exemplars for how “civil rights and civic institutions” should be organized and legally enshrined does not mean that every member of the community will abide by them and behave appropriately.

Mean-spirited or angry individuals might spoil life for the more friendly and self-controlled ones, and Riggio is correct to point out that “a communitarian ethical philosophy and moral sense deep in [one’s] heart” are insufficient for overcoming the brutality of capitalist greed. But focusing on this set of concerns (rather than offering a more efficient or digitally sophisticated platform for exchanges), Riggio would agree, could be a good starting point, and might therefore encourage more detailed analyses of policies and regulation of unfettered capitalist practices.

I could shirk my responsibility here and plead for cover under the label of a philosopher who lacks the expertise of a good old-fashioned social scientist or policy wonk who can advise how best to implement my proposals. But I set myself up to engage political economy in all its manifold facets, and Riggio is correct when he points out that my “analysis of existing institutions and societies that foster communitarian moralities and ethics is detailed enough to show promise, but unfortunately so brief as to leave us without guidance or strategy to fulfill that promise.”

But, when critically engaging not only the latest gimmicks being proposed under the capitalist umbrella (e.g., the gig economy or shared economies) but also their claims about freedom and equal opportunity, I was concerned to debunk pretenses so as to be able to place my own ideas within an existing array of possibilities. In that sense, The Quest for Prosperity is, indeed, more critique than manual, an immanent critique that accounts for what is already being practiced so as to point out inevitable weaknesses. My proposal was offered in broad outlines in the hope of enlisting the likes of Riggio to contribute more details that, over time, would fulfill such promises in a process that can only be, in its enormity, collaborative.

The Strength of Values

Riggio closes his review by saying that I

offered communitarian approaches to morality and ethics as solutions to those challenges of injustice. I think his direction is very promising. But The Quest for Prosperity offers only a sign. If his next book is to fulfill the promise of this one, he must explore the possibilities opened up by the following questions. Can communitarian values overcome the allure of greed? What kind of social, political, and economic structures would we need to achieve that utopian goal?

To be clear, my approach is as much Communitarian as it is Institutionalist, Marxist and heterodox, Popperian and postmodern; I prefer the more traditional terms socialism and communism as alternatives to capitalism in general and to my previous, more sanguine appeal to the notion of “postcapitalism.”

Still, Riggio hones in on an important point: since I insist on theorizing in moral and social (rather than monetary) terms, and since my concern is with views of human nature and the conditions under which we can foster a community of people who exchange goods and services, it stands to reason that the book be assessed in an ethical framework as well, concerned to some degree with how best to foster personal integrity, mutual empathy, and care. The book is as much concerned with debunking the moral pretenses of capitalism (from individual freedom and equal opportunity to happiness and prosperity, understood here in its moral and not financial sense) as with the moral underpinnings (and the educational and social institutions that foster them) of political economy.

In this sense, my book strives to be in line with Adam Smith’s (or even Marx’s) moral philosophy as much as with his political economy. The ongoing slippage from the moral to the political and economic is unavoidable: in such a register the very heart of my argument contends that financial strategies have to consider human costs and that economic policies affect humans as moral agents. But, to remedy social injustice we must deal with political economy, and therefore my book moves from the moral to the economic, from the social to the political.

Questions of Desire

I will respond to Riggio’s two concluding questions directly. The first deals with overcoming the allure of greed: in my view, this allure, as real and pressing as it is, remains socially conditioned, though perhaps linked to unconscious desires in the Freudian sense. Within the capitalist context, there is something more psychologically and morally complex at work that should be exposed (Smith and Marx, in their different analyses, appreciate this dimension of market exchanges and the framing of human needs and wants; later critics, as diverse as Herbert Marcuse and Karl Polanyi, continue along this path).

Wanting more of something—Father’s approval? Mother’s nourishment?—is different from wanting more material possessions or money (even though, in a good capitalist modality, the one seeps into the other or the one is offered as a substitute for the other). I would venture to say that a child’s desire for candy, for example (candy being an object of desire that is dispensed or withheld by parents), can be quickly satiated when enough is available—hence my long discussion in the book about (the fictions of) scarcity and (the realities of) abundance; the candy can stand for love in general or for food that satisfies hunger, although it is, in fact, neither; and of course the candy can be substituted by other objects of desire that can or cannot be satisfied. (Candy, of course, doesn’t have the socially symbolic value that luxury items, such as the iPhone, do for those already socialized.)

Only within a capitalist framework might one accumulate candy not merely to satisfy a sweet tooth or wish for a treat but also as a means to leverage later exchanges with others. This, I suggest, is learned behavior, not “natural” in the classical capitalist sense of the term. The reason for this lengthy explanation is that Riggio is spot on to ask about the allure of greed (given his mention of demand-side markets), because for many defenders of the faith, capitalism is nothing but a large-scale apparatus that satisfies natural human appetites (even though some of them are manufactured).

My arguments in the book are meant not only to undermine such claims but to differentiate between human activities, such as exchange and division of labor (historically found in families and tribes), and competition, greed, accumulation, and concentration of wealth that are specific to capitalism (and the social contract within which it finds psychological and legal protection). One can see, then, why I believe the allure of greed can be overcome through social conditioning and the reframing of human exchanges that satisfy needs and question wants.

Riggio’s concern over abuse of power, regardless of all the corrective structures proposed in the book, deserves one more response. Indeed, laws without enforcement are toothless. But, as I argue throughout the book, policies that attempt to deal with important social issues must deal with the economic features of any structure. What makes the Institutionalist approach to political economy informative is not only the recognition that economic ideals take on different hues when implemented in different institutional contexts, but that economic activity and behavior are culturally conditioned.

Instead of worrying here about a sequel, I’d like to suggest that there is already excellent work being done in the areas of human and civil rights (e.g., Michelle Alexander’s The New Jim Crow (2010) and Matthew Desmond’s Evicted (2016) chronicle the problems of capitalism in different sectors of the economy) so that my own effort is an attempt to establish a set of (moral) values against which existing proposals can be assessed and upon which (economic) policy reform should be built. Highlighting the moral foundation of any economic system isn’t a substitute for paying close attention to the economic system that surrounds and perhaps undermines it; rather, economic realities test the limits of the applicability of and commitment to such foundation.

Contact details: rsassowe@uccs.edu

References

Riggio, Adam. “The True Shape of a Society of Friends.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 40-45.

Sassower, Raphael. The Quest for Prosperity. London, UK: Rowman & Littlefield, 2017.

[1] Special thanks to Dr. Denise Davis for her critical suggestions.

Author Information: Raimo Tuomela, University of Helsinki, raimo.tuomela@helsinki.fi

Tuomela, Raimo. “The Limits of Groups: An Author Replies.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 28-33.

The pdf of the article refers to specific page numbers. Shortlink: https://wp.me/p1Bfg0-3QM

Please refer to:

In their critique Corlett and Strobel (2017) discuss my 2013 book Social Ontology and comment on some of my views. In this reply I will respond to their three central criticisms that I here formulate as follows:[1]

(1) Group members are said in my account to be required to ask for the group’s, thus other members’, permission to leave the group, and this seems to go against the personal moral autonomy of the members.

(2) My account does not focus on morally central matters such as personal autonomy, although it should.

(3) My moral notions are based on a utilitarian view of morality.

In this note I will show that claims (1) – (3) are not (properly) justified on closer scrutiny.

Unity Is What’s Missing In Our Lives

Below I will mostly focus on we-mode groups, that is, groups based on we-thinking, we-reasoning, a shared “ethos”, and consequent action as a unified group.[2] Ideally, such we-mode groups are autonomous (externally uncoerced) and hence free to decide about the ethos (viz. the central goals, beliefs, norms, etc.) of their group and to select its position holders in the case of an organized group. Inside the group (one with freely entered members) each member is supposed to be “socially” committed to the others to perform her part of the joint enterprise. (Intentions in general involve commitment to carry out what is intended).

The members of a we-mode group should be able to count on each other not to be let down. The goal of the joint activity typically will not be reached without the other members’ successful part performances (often involving helping). When one enters a we-mode group it is one’s own choice, but if the others cannot be trusted the whole project may be impossible to carry out (think of people building a bridge in their village).

The authors claim that my moral views are based on utilitarianism and hence some kind of maximization of group welfare instead of emphasizing individual autonomy and the moral rights of individuals.[3] This is a complex matter and I will here say only that there is room in my theory both for group autonomy and individual autonomy. The we-mode account states what it takes for people to act in the we-mode (see Tuomela, 2013, ch. 2). According to my account, the members have given up part of their individual autonomy to the group. From this it follows that solidarity with the other members is important. The members of a paradigmatic we-mode group should not let the others down. This is seen as a moral matter.

The Moral Nature of the Act

As to the moral implications of the present approach, when a group is acting intentionally it is as a rule responsible for what it does. But what can be said about the responsibility of a member? Basically, each member is responsible as a group member and also privately morally responsible for the performance of his part. (He could have left the group or expressed his divergent opinion and reasons.) Here we are discussing the properly moral and not only the instrumental or quasi-moral implications of group action and the members.[4]

A member’s exiting a free (autonomous) group is in some cases a matter for the group to deal with. “What sanctions does a group need for quitting members if it endangers the whole endeavor?” Of course the members may exit the group but then they have to be prepared to suffer the (possibly) agreed-upon sanctions for quitting. Corlett and Strobel focus on the requirement of a permission to leave the group (see pp. 43-44 of Tuomela, 2013). It is up to the group to decide about suitable sanctions. E.g. the members may be expected to follow the majority here. (See ch. 5 of Tuomela, 2013).

Furthermore, those who join the group should of course be clear about what kind of group they are joining. If they later on wish to give up their membership they can leave upon taking on the sanctions, if any, that the group has decided upon. My critics rightfully wonder about the expression “permission to leave the group”. My formulations seem to have misleadingly suggested to them that the members are (possibly) trapped in the we-mode group. Note that on p. 44 of my 2013 book I speak of cases where leaving the group harms the other members and propose that sometimes merely informing the members might be appropriate.

How can “permission from the group” be best understood? Depending on the case at hand, it might involve asking the individual members if they allow the person in question to leave without sanctions. But this sounds rather silly especially in the case of large groups. Rather, the group may formulate procedures for leaving the group. This would involve institutionalizing the matter and the possible sanctioning system. In the case of paradigmatic autonomous we-mode groups the exit generally is free in the sense that the group itself rather than an external authority decides about procedures for exiting the group (see appendix 1 to chapter 2 of Tuomela, 2013). However, those leaving the group might have to face group-based sanctions if they by their leaving considerably harm the others.

In my account the members of a well-functioning we-mode group can be said somewhat figuratively to have given up part of their autonomy and self-determination to their we-mode group. Solidarity between the members is important: The members should not let the others down – or else the group’s project (viz. the members’ joint project) will not be successful. This is a non-utilitarian moral matter – the members are to keep together not to let each other down. Also for practical reasons it is desirable that the members stick together on penalty of not achieving their joint goal – e.g. building a bridge in their village.

People do retain their personal (moral) autonomy in cases of the above kind, where entering and exiting a we-mode group is free (especially free from external authorities) or where, in some cases, the members have to satisfy special conditions accepted by their group. I have suggested elsewhere that dissenting members should either leave the group or try to change its ethos. As said above, in specific ethos-related matters the members may use a voting method, e.g. majority voting, even if the minority may want to challenge the result.[5]

Questions of Freedom

According to Corlett and Strobel, freedom of expression is largely blocked and the notion of individual autonomy is dubious in my account (see p. 9 of their critical paper). As was pointed out above, the members may leave the group freely or via an agreed-upon procedure. Individual autonomy is thwarted only to the extent needed for performing one’s part, but such performance is the whole point of participation in the first place. Of course, the ethos may be discussed along the way, and changes may be introduced if the members agree, whether unanimously, by majority, or by some other “suitable” number of them. The members enter the group freely, by their own will and through the group’s entrance procedures, and may likewise leave the group through collectively agreed-upon procedures (if such exist).

As we know, autonomy is a concept much used in everyday life, outside moral philosophy. In my account it is used in speaking of “autonomous groups”, in the simple sense that the group can make its own decisions about its ethos, the division of tasks, and the conditions for entering and exiting the group without coercion by an external authority. Basically, only the autonomous we-mode group can, through its members’ decisions, make rules for how people are allowed to join or leave the group.[6]

Corlett and Strobel’s critique that, in my account, the members of autonomous we-mode groups have no autonomy (in the moral sense) cannot be directed at the paradigmatic case of groups with free entrance, where the group members decide among themselves what is to be done by whom and how to handle the situation of a member wanting to leave the group, perhaps in the middle of a critical situation. Of course, a member cannot always do as he chooses in situations of group action. A joint goal is at stake, and letting the others down when they have a good reason to count on one would be detrimental to everyone’s goal achievement. Letting the others down is, moreover, at least socially and morally condemnable.

When people have good reason to drop out, having changed their mind or finding the joint project morally dubious, they can exit according to the relevant rules (if such exist in the group). The feature criticized by the present authors, that “others’ permission is required”, is due to my unfortunate formulation. What is meant is that in some cases there should be some kind of procedure in the group for leaving. The group members are socially committed to each other to further the ethos, as well as committed to the ethos itself. The social commitment has, of course, the effect that each member looks to the others for cooperative actions and attitudes and has a good reason to do so.

My critics suggest that the members should seek support from the others; indeed, this seems to be what the assumed solidarity of we-mode groups can be taken to provide. However, what they mean could be a procedure for making the ethos more attractive to such members, leading to their renewed support of the ethos rather than pressuring them to stay in a group whose ethos no longer interests them. Of course, the ethos may be presented in new ways, but there may still be situations where members want to leave, and they have a right to leave following the agreed-upon procedures. Informing the group in due time, so that the group can find compensating measures, is the minimum that a member who quits can and should do. The authors discuss examples in which heads of state and of corporations want to resign. It is typically possible to resign according, e.g., to the group’s exit rules, if such exist.

Follow the Leader

On page 11 the authors criticize the we-mode account for the fact that non-operative members ought to accept what the operative leaders decide. They claim that, on the contrary, a state like the U.S. allows, and in some situations even asks, its citizens to protest. They are, of course, right in their claims concerning special cases. Naturally there will sometimes be situations where protest is called for. The dissidents may then win, and the government (or what have you) will change its course of action. Even the ethos of the group may sometimes have to be reformulated.

Gradual development also occurs in social groups and organizations; the ethos often evolves through dissident actions. When the authorized operatives act in what they deem to be a feasible way, they do what they were chosen to do. If non-operatives protest against immoral actions of the operatives, they do the morally right thing; but if the operatives act according to the ethos, they are doing their job, although they should have chosen a moral way to achieve the goal. The protest of the non-operatives may have an effect. On the other hand, note that even Mafia groups may act in the we-mode and do so in immoral ways, in accordance with their own agenda.

The authors discuss yet another kind of example of exiting the group, one where asking permission would seem out of place: a marriage. If a married couple is taken to be a we-mode group, the parties would have to agree upon exit conditions (if marriage were not an institutionalized and codified concept, which, nevertheless, it usually is). As an institution it is regulated in various ways depending on the culture. The critique summarized by the authors on page 12 has thus far been met. It seems that they have been fixated on the formulation that “members cannot leave the group without the permission from the other members.” To be sure, my view is that group members cannot just walk out on the others without taking any measures to ease the detrimental effects of their defection. Whether it is permission, compensation, or an excuse depends on the case. Protesting is a different story: dissidents often have good reasons to protest, and sometimes they just want to change the ethos instead of leaving.

It’s Your Prerogative

At the end of their critique the authors suggest that I should include in my account a moral prerogative for members to seek the support of other group members, as a courtesy to the other members and to the group. I have no objection to that. Once more, the expression “permission to leave the group” was an unfortunate choice of words. It would have been better, e.g., to speak of a member’s being required to inform the others that one has to quit and to be ready to suffer possible sanctions for letting the others down and perhaps causing the whole project to collapse.

However, dissidents should have the right to protest. Those who volunteer to join a group with a specific ethos cannot always foresee whether the ethos allows for immoral or otherwise unacceptable courses of action. Finally, my phrase “free entrance and exit” may have been misunderstood. As pointed out, the expression refers to the right of the members to enter and exit rather than being forced to join a group and remain in it. To emphasize once more, it is in this way that the members of we-mode groups are autonomous. Also, there is no dictator who steers the formation of the ethos and the choice of position holders. However, although the members may jointly arrange their group life freely, each member is not free to do whatever he chooses when he acts in the we-mode. We-mode acting involves solidary collective acting by the members according to the ethos of the group.

In this note I have responded to the main criticisms (1)-(3) by Corlett and Strobel (2017) and argued that they do not seriously damage my theory. I wish to thank my critics for their thoughtful critical points.

Contact details: raimo.tuomela@helsinki.fi

References

Corlett, A. and J. Strobel. “Raimo Tuomela’s Social Ontology.” Social Epistemology 31, no. 6 (2017): 1-15.

Schmid, H.-B. “On Not Doing One’s Part.” In Facets of Sociality, edited by N. Psarros and K. Schulte-Ostermann, 287-306. Frankfurt: Ontos Verlag, 2007.

Tuomela, R. The Importance of Us: A Philosophical Study of Basic Social Notions. Stanford: Stanford University Press, 1995.

Tuomela, R. The Philosophy of Sociality. Oxford: Oxford University Press, 2007.

Tuomela, R. Social Ontology. New York: Oxford University Press, 2013.

Tuomela, R. and P. Mäkelä. “Group Agents and Their Responsibility.” Journal of Ethics 20 (2016): 299-316.

Tuomela, R. and M. Tuomela. “Acting as a Group Member and Collective Commitment.” Protosociology 18 (2003): 7-65.

[1] Acknowledgement. I wish to thank my wife Dr. Maj Tuomela for important help in writing this paper.

[2] See Tuomela (2007) and (2013) for the above notions.

[3] I speak of utilities only in game-theoretic contexts. (My moral views are closer to pragmatism and functionalism than to utilitarianism.)

[4] See e.g. Tuomela and Mäkelä (2016) on the moral responsibility of a group and of its members. Also see pp. 37 and 41 of Tuomela (2013) and chapter 10 of Tuomela (2007).

[5] As to dissidents, I have discussed the notion briefly in my 1995 book and in a paper published with Maj Tuomela in 2003 (see the references). Furthermore, Hans Bernhard Schmid discusses dissidents in we-mode groups in his article “On Not Doing One’s Part” in Psarros and Schulte-Ostermann (eds.), Facets of Sociality, Ontos Verlag, 2007, pp. 287-306.

[6] Groups that are dependent on an external agent (e.g. a dictator, the owner of a company, or an officer commanding an army unit) may lack the freedom to decide about what they should be doing and which positions they should have, and the members may be forced to join a group that they cannot exit. My notion of “autonomous groups” refers to groups that are free to decide about their own matters, e.g. entrance and exit (possibly including sanctions). Personal moral autonomy in such groups is retained through the possibility of applying for entrance, of exiting upon taking on possible sanctions, of influencing the ethos, or of protesting. The upshot is that a person functioning in a paradigmatic we-mode group should obey the possible restrictions that the group has set for exiting the group and be willing to suffer the agreed-upon sanctions. Such a we-mode group is assumed to have coercion-free entrance and also free exit, as specified in Appendix 1 to Chapter 2 of my 2013 book. This means that no external authority coerces people to join and to remain in the group. A completely different matter is the case of a Mafia group or an army unit; the latter may be a unit that cannot be freely entered and exited. Even in these cases people may act in the we-mode. In some non-autonomous groups, such as a business company, the shareholders decide about all central matters and the workers get paid. Members may enter if they are chosen to join and may exit only according to specific rules.

Author Information: Line Edslev Andersen, Vrije Universiteit Brussel, line.edslev.andersen@gmail.com

Andersen, Line Edslev. “Community Beliefs and Scientific Change: Response to Gilbert.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 37-46.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3LJ

Please refer to: Gilbert, Margaret. “Scientists Are People Too: Comment on Andersen.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 45–49.

Image credit: NASA Goddard Space Flight Center, via Flickr

Margaret Gilbert (2017) has provided an engaging response to my paper on her account of joint commitment and scientific change (Andersen 2017). Based on Donald MacKenzie’s (1999) sociohistory of a famous mathematical proof, my paper offered an argument against her account of why a scientist’s outsider status can be effective in enabling scientific change (Gilbert 2000). On her account, scientists have collective beliefs in the sense of joint commitments to particular beliefs. The term ‘collective belief’ is used in this sense in the present paper.[1] When a group of scientists are jointly committed to some belief, they are obligated not to call it into question. According to Gilbert, this makes joint commitments work as a brake on scientific change and gives outsiders an important role in science. Since outsiders to a given scientific community are party to no or relatively few joint commitments of that community, they are less constrained by them. For this reason, outsiders play a central role in bringing about scientific change.

I argued that Gilbert’s account is inherently difficult to test because it requires data that are hard to interpret. At the same time, I pointed out that we have available a simpler explanation of why a scientist’s outsider status can be effective in enabling scientific change: During their education and training, scientists learn to see things in certain ways. If solving some problem requires one to look at things in a different way, scientists with a different educational background will have an advantage.[2] I have become aware that Melinda Fagan (2011, 255-256) also compares these two explanations.[3]

Gilbert’s response to my paper has two main parts. In the first part (Gilbert 2017, 46-48), she argues for the role of collective beliefs in science by considering the role of collective beliefs in everyday life and in the context of education. I discuss her argument in the next section. In the other part (48-49), she suggests that collective beliefs of scientific communities can help explain why some personal beliefs become deeply entrenched in the minds of scientists. I agree with this. Deborah Tollefsen has made the related suggestion that one could respond to my argument by claiming that the degree to which some personal beliefs are entrenched in the minds of scientists cannot be explained without collective beliefs of scientific communities. This is an interesting suggestion. Here, however, I take a simpler approach to the question of whether scientific communities have collective beliefs.[4]

As mentioned, the remainder of this paper begins with a discussion of Gilbert’s argument (section 1). I then address the question of whether (section 2) and to what extent (section 3) scientific communities in particular (as opposed to, for example, research teams) have collective beliefs. On this basis, I assess the potential of collective beliefs to work as a brake on scientific change (section 4).

Collective Beliefs in Science

On Gilbert’s account, a collective belief is a joint commitment to believe some proposition p (e.g., Gilbert 1987). A joint commitment to believe p is the commitment of a group as one body to believe p; i.e., it is the commitment of a group to emulate, by virtue of the actions of all, a single believer of p. Group members are thus “to speak and act as if they are of ‘one mind’ on the subject” (Gilbert 2017, 46). According to Gilbert, this implies that the joint commitment to believe p is persistent in the sense that it can only be rescinded with the concurrence of all the parties (Gilbert 2014, 118). Each of them has an obligation towards the others to act in accordance with the joint commitment to believe p and not, for example, express contrary beliefs. If someone violates the joint commitment, the others gain the standing to rebuke her and may even ostracize her (Gilbert 2000, 40). By virtue of these features, collective beliefs can act as powerful behavioral constraints and work as a brake on scientific change on Gilbert’s account. Speaking about scientists’ collective beliefs, she thus writes that they can have as far-reaching consequences as “inhibiting one from pursuing spontaneous doubt about the group view, inclining one to ignore evidence that suggests the falsity of that view, and so on” (Gilbert 2000, 44-45).

While collective beliefs can be very persistent, they are quite easily formed. A joint commitment to believe p can be formed without all or most or even any of the group members personally believing p. What matters is that they have expressed their personal willingness to let p stand as a belief of the group—if only tacitly—and this is common knowledge between them. On Gilbert’s (2017, 46-48) account, this happens all the time in everyday life and in the context of education. One of the examples she gives in her response to my paper is that of two people having an informal conversation. One of them says “What a lovely day!” and the other responds “Yes, indeed!” This establishes the belief that it is a lovely day as a collective belief the two have. Gilbert reasons that, if this is all it takes for a collective belief to form, they must play a role in science as well: “If collective beliefs are prevalent in human life generally, and if, in particular, they are the predictable outcome of conversations and discussions on whatever topic, we can expect many collective beliefs to be established among scientists in the various specialties as they talk about their work in small and large groups” (Gilbert 2017, 47).

I agree with Gilbert that, if collective beliefs play this type of role in everyday life, they must play a role in science. I am also convinced by her claim that joint commitments are generally ubiquitous. However, when Gilbert describes how easily collective beliefs are formed in everyday life and in the context of education, she gives examples of smaller groups forming collective beliefs, such as the group of people attending a meeting of a large literary society or a student and a teacher having an interchange. By contrast, when she, I, and others discuss the potential of collective beliefs to work as a brake on scientific change, we are referring to collective beliefs of whole communities. This is relevant, since there seems to be a difference between how easily collective beliefs of smaller groups in science (such as research teams) and scientific communities are established.[5] In fact, I argue below that it is at least rare for scientific communities to form collective beliefs. This is where I disagree with Gilbert.

Like the previous work on the potential of collective beliefs to work as a brake on scientific change, the present paper focuses on collective beliefs of whole communities. I thus leave open the possibility that collective beliefs of smaller groups in science can work as a brake on scientific change. However, Hanne Andersen and I have examined the instability of joint commitments of smaller groups and argue that they can rather easily be dissolved (Andersen and Andersen 2017).[6] This limits the potential of collective beliefs of smaller groups in science to work as brakes on scientific change.

The Existence of Community Beliefs

Having explained Gilbert’s account of collective belief, I will now examine the question of whether communities in science have collective beliefs. A consensus established at a consensus development conference is a good candidate for being a collective belief of a scientific community. Paul Thagard (1998a, b, 1999) attended the 1994 consensus conference on methods of diagnosing and treating ulcers as part of his work on the bacterial theory of ulcers.[7] In a later paper, Thagard (2010, 280) addresses Gilbert’s account of collective belief, stating that collective beliefs of scientific communities strike him as “rather rare,” but that he believes consensus conferences establish such collective beliefs. Consensus conferences are themselves rare and in most disciplines non-existent, but in medical research, Thagard explains, “the need for a consensus is much more acute, since hypotheses such as the bacterial theory of ulcers have direct consequences for the treatment of patients” (1998b, 335).

The consensus conference Thagard attended was conducted by the U.S. National Institutes of Health. He describes the purpose of its consensus conferences as being “to produce consensus statements on important and controversial statements in medicine” that are useful to the public and health professionals (1998b, 335).[8] A consensus statement is prepared by a panel of experts after deliberation. Most likely the members of the panel do not all personally agree with everything in the statement, given the controversial nature of the subject, but the statement expresses the view that they have agreed to let stand as the view of the panel.[9] In this paper, I assume that when members of a group agree to let a view stand as the view of the group, a joint commitment is involved, so the view of the panel involves a joint commitment. It is, in other words, a collective belief.

The question I am interested in here is whether a consensus statement sometimes expresses not only the collective belief of the consensus development panel, but the collective belief of a whole community of scientists. Let us consider the consensus conference Thagard attended. The consensus development panel was chosen to represent a community—an appropriately delineated medical research community—in the following sense. Its members were chosen by the planning committee whose chair is required to be an authority, “a knowledgeable and prestigious medical figure,” and to be neutral in the sense of ‘not identified with strong advocacy of the conference topic or with relevant research’ (Thagard 1998b, 336). The fourteen people on the consensus development panel were chosen for various kinds of expertise and for their neutrality in the stated sense. Finally, the statement of the panel was based on presentations at the public consensus conference by 22 researchers representing different points of view; contributions from conference attendees during open discussion periods; and closed deliberations within the panel. Sometimes, although apparently not in this case, a draft statement is published online for public comment (e.g., NN 2013, 1).

The first page of the statement tells us that it “provides a ‘snapshot in time’ of the state of knowledge on the conference topic,” implying that these are early times and work remains to be done (NN 1994). This proviso limits the potential of the collective belief expressed in the statement to work as a brake on scientific change, for it must limit the ability of the collective belief to incline scientists to ignore evidence that suggests the falsity of the belief. But the proviso does not lessen the potential of the collective belief for being the collective belief of a whole community. It seems to me plausible to say that the members of the community in question in 1994 expressed their willingness (most of them tacitly) to let a belief stand as the belief of the community at that point in time.[10] This is due to the relative neutrality of the panel, the diversity of the speakers, and the fact that members of the community are given the opportunity to have their voice heard.

A similar point can be made about certain group views that are established in a similar way, but are about something else. I have in mind certain codes for responsible conduct of research. For example, the European Mathematical Society (EMS) introduced a Code of Practice in 2012 (NN 2013, 12). This code may be said to express the view of the EMS in a way similar to how the 1994 consensus statement may be said to express the view of a community of medical researchers. The Code of Practice was prepared by the Ethics Committee of the EMS and approved by the EMS council, which in total consists of about 100 member-elected “delegates from all of the national societies which are members of the EMS” and “delegates representing the individual members of the Society” (www.euro-math-soc.eu/governance). The code will apparently be considered for revision every three years in light of comments received by the chair of the ethics committee from members of the EMS.

While I have addressed the question of whether communities of scientists have collective beliefs, Wray (2007) addresses the broader question of whether they have beliefs in a non-summative sense. When a group believes p in a summative sense, this just amounts to all or most of the group members personally believing p. Wray argues that scientific communities, as opposed to research teams, are not capable of having beliefs in a non-summative sense. He uses Émile Durkheim’s distinction between societies characterized by organic solidarity and societies characterized by mechanical solidarity (Wray 2007, 341-342). Wray writes that a group or community is characterized by organic solidarity when its members “depend upon the proper functioning of the other members” (Wray 2007, 342), as the parts of an organism depend on each other, and are organized so as to advance a goal. Groups that are not bound together by organic solidarity are bound together by similar thoughts and attitudes, that is, by mechanical solidarity. Wray argues that a group must be cohesive in the sense of being characterized by organic solidarity to be capable of having beliefs in a non-summative sense and that scientific specialty communities and the scientific community as a whole are not cohesive in this sense.

If consensus conferences produce group beliefs that can properly be described as beliefs of whole communities, they are strictly speaking inconsistent with Wray’s account. These beliefs would then be produced by community acts characterized by organic solidarity, but such acts seem to be exceptional and do not speak against the claim that communities in science are generally characterized by mechanical solidarity. On Gilbert’s account, the non-summative group beliefs Wray discusses imply joint commitments. Wray’s account is neutral on this question. His is an argument that communities in science do not form non-summative beliefs in general. It thus implies that they do not form joint commitments to beliefs.[11] In the next section, I give an additional argument for this particular conclusion.

The Frequency of Community Beliefs

In their everyday practice, scientists often express a view of a research team they are part of, for example in publications, in conference presentations, or in conversation with other scientists. They less frequently express a view of one of the communities they belong to, regardless of how we conceive ‘community view’ here. It seems to be rather rare that the typical scientist is prompted to say, “We as a scientific community believe…” That the motivation to express group views is relatively low at the community level may suggest that the motivation to actively establish group views at the community level is relatively low as well. However, the following argument does not depend on it.

The argument focuses on collective scientific beliefs of communities, since these are the ones that work as brakes on scientific change on Gilbert’s account. In the next section, I return to the topic of community beliefs about responsible conduct of research. So let us consider a Gilbertian scientific community belief p that has just been formed. The proposition p would have to be somehow broadly relevant in the community; the community members must, after all, be aware of a joint commitment to believe p for there to be such a commitment. Furthermore, in order to agree to let a belief stand as the belief of the group, the group members must have some motivation to do so. They must do so as a means to realizing a goal (see Wray 2001). Sometimes the members of a community will be motivated to let a belief stand as the belief of the community although they personally have very different beliefs on the matter. For example, in the above case of the consensus conference, there was a need to present a community belief to the public and health professionals. But this is rare.

If the proposition p is broadly relevant in the community and the community belief that p is “unforced” in the sense that it has not been formed quickly under external pressure from the public or others, experts will have discussed and tested whether p until there is broad agreement among them. If the experts broadly agree that p is well established by the evidence, the other community members are likely to believe p because the experts do. Hence, at the time of being established, the collective belief p will reflect what a large majority of the community members personally believe, except in rare cases similar to the consensus conference example considered above. This fits well with Gilbert’s (2000) account of collective beliefs and scientific change. The negative potential of collective beliefs, as she describes it, is not associated with their being established in spite of recalcitrant evidence, but with their being maintained in spite of recalcitrant evidence discovered later.

This raises the question of how members can be motivated to jointly commit to a belief they already broadly share. There is already a community belief that p (albeit in a summative sense) that can be presented as such to the public and others. But there may be a reason internal to the community for making the joint commitment. Kristina Rolin (2008) raises the general question of how the members of a community are motivated to jointly commit to beliefs. She argues that community members are motivated to jointly commit to background assumptions because individuals can then use these assumptions and remain epistemically responsible even when they do not have the expertise to defend them if they are challenged. They can do so because the joint commitments obligate the relevant experts in the community to defend the assumptions if they are appropriately challenged. As implied by the above, I am unconvinced by Rolin’s premise that all the members of a community would be prepared to jointly commit to the same mere assumption, especially given the obligations and constraints this implies on Gilbert’s account. It is unclear how they would determine which background assumptions to commit to.

That it is unclear if and how community members would be motivated to jointly commit to believe p is a serious challenge to the claim that scientific communities form unforced collective beliefs. I believe the challenge may well be insurmountable. But even if we assume that community members are motivated to (and do) form unforced collective beliefs, we have a problem if we want to establish that such community beliefs work as a brake on scientific change. Recall that p is broadly relevant in the community in addition to being widely believed by the community members to be well established by the evidence. Hence, p expresses the sort of view that would make its way into textbooks for students or young researchers entering the subdiscipline[12] or be used widely in further research. If the unforced collective beliefs of scientific communities have the characteristics of being broadly relevant and widely believed, at least for a while, it will be hard to test whether they work as a brake on scientific change. For much relies on such a belief, so it will be unpleasant for community members if recalcitrant evidence turns up regardless of whether they are jointly committed to the belief. Recalcitrant evidence is thus likely to be met with some skepticism or resistance whether a joint commitment is in place or not. Hence, if joint commitments make it harder than it would already be to abandon such views, it will be hard to detect.

Fagan (2011) defends a similar conclusion. She criticizes certain explanatory arguments for the claim that scientific groups have collective beliefs: it is not the case, she argues, that collective beliefs can explain certain phenomena in science (the inertia of science and the stability of groups in science) that cannot be explained just as well by other means.[13]

Community Beliefs: Brakes on Scientific Change?

If my argument is correct, joint commitments of communities in science are rare and without much potential for working as brakes on scientific change. Collective beliefs developed at consensus conferences appear to sometimes be collective beliefs of whole communities. But, as explained above, their potential for working as a brake on scientific change is limited by the fact that they are very rare and that at least some consensus statements come with the proviso that this is our view at this point in time. Codes for responsible conduct of research may also be good candidates for collective beliefs of communities. But if a community has a collective belief about what responsible research practices are, this limits the potential of any collective scientific belief of that community to work as a brake on scientific change. If evidence recalcitrant to the collective scientific belief turns up, the members of the community are forced to violate one of two joint commitments. By ignoring the recalcitrant evidence, they violate their collective belief about responsible research practices; by doing the opposite, they violate their collective scientific belief. In such cases, we have no argument for why collective beliefs work as a brake on scientific change by making scientists ignore recalcitrant evidence, unless we can argue that the collective scientific beliefs of a given community are somehow harder to violate than the other joint commitments of the community. Hanne Andersen and I (2017) argue that there are cases in which participants can, due to changes in circumstances, violate a joint commitment without risking rebuke and that this is a major source of instability of collective beliefs and joint commitments in general. We would expect tension between collective beliefs due to changes in circumstances to be another major source of instability of collective beliefs.

If we instead assume that collective beliefs of communities are as ubiquitous as Gilbert claims, norms in general are also collective beliefs (Gilbert 1999). But then norms in the scientific community and scientific subcommunities, such as the norm of sharing counterevidence with one’s colleagues, are also collective beliefs. If members of a scientific community discover evidence that goes against one of their collective scientific beliefs, they are thus forced to violate either the collective scientific belief or a collective belief about responsible conduct of research. The relevant collective belief(s) about responsible conduct of research may be held by the community in question, the scientific community as a whole, or both. Hence, unless it can be argued that the collective scientific beliefs of a given community are harder to violate than other collective beliefs the community members are party to, it is not clear that collective scientific beliefs, even if ubiquitous, will work as a brake on scientific change.

I would like to end by thanking Gilbert for her inspiring work, which I continue to explore.

Acknowledgements: The author thanks K. Brad Wray for helpful feedback on an earlier draft.

References

Andersen, Hanne. “Joint Acceptance and Scientific Change: A Case Study.” Episteme 7, no. 3 (2010): 248–265.

Andersen, Line Edslev. “Outsiders Enabling Scientific Change: Learning from the Sociohistory of a Mathematical Proof.” Social Epistemology 31, no. 2 (2017): 184–91.

Andersen, Line Edslev, and Hanne Andersen. “The Stability and Instability of Joint Commitment.” Submitted, 2017.

Bird, Alexander. “Social Knowing: The Social Sense of ‘Scientific Knowledge’.” Philosophical Perspectives 24, no. 1 (2010): 23–56.

Bouvier, Alban. “Individual Belief and Collective Beliefs in Science and Philosophy: The Plural Subject and the Polyphonic Subject Accounts.” Philosophy of the Social Sciences 34, no. 3 (2004): 382–407.

Cheon, Hyundeuk. “In What Sense is Scientific Knowledge Collective Knowledge?” Philosophy of the Social Sciences 44, no. 4 (2014): 407–423.

de Ridder, Jeroen. “Epistemic Dependence and Collective Scientific Knowledge.” Synthese 191, no. 1 (2014): 37–53.

Dragos, Chris. “Which Groups Have Scientific Knowledge? Wray vs. Rolin.” Social Epistemology 30, no. 5–6 (2016a): 611–623.

Dragos, Chris. “Justified Group Belief in Science.” Social Epistemology Review and Reply Collective 5, no. 9 (2016b): 6–12.

Fagan, Melinda Bonnie. “Is There Collective Scientific Knowledge? Arguments from Explanation.” The Philosophical Quarterly 61, no. 243 (2011): 247–269.

Gilbert, Margaret. “Modelling Collective Belief.” Synthese 73, no. 1 (1987): 185–204.

Gilbert, Margaret. “Social Rules: Some Problems for Hart’s Account, and an Alternative Proposal.” Law and Philosophy 18, no. 2 (1999): 141–171.

Gilbert, Margaret. “Collective Belief and Scientific Change.” In Sociality and Responsibility, edited by Margaret Gilbert, 37–49. Lanham: Rowman & Littlefield, 2000.

Gilbert, Margaret. Joint Commitment: How We Make the Social World. Oxford: Oxford University Press, 2014.

Gilbert, Margaret. “Scientists Are People Too: Comment on Andersen.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 45–49.

MacKenzie, Donald. “Slaying the Kraken: The Sociohistory of a Mathematical Proof.” Social Studies of Science 29, no. 1 (1999): 7–60.

NN. “Helicobacter pylori in Peptic Ulcer Disease.” NIH Consensus Statement 12, no. 1 (Feb. 7–9, 1994): 1–22. https://consensus.nih.gov/1994/1994HelicobacterPyloriUlcer094PDF.pdf.

NN, “Diagnosing Gestational Diabetes Mellitus.” NIH Consensus Statement 29, no. 1 (March 4–6, 2013): 1–30. https://consensus.nih.gov/2013/docs/Gestational_Diabetes_Mellitus508.pdf.

NN. “Code of Practice.” Newsletter of the European Mathematical Society 87 (March 2013): 12–15.

Rolin, Kristina. “Science as Collective Knowledge.” Cognitive Systems Research 9, no. 1–2 (2008): 115–124.

Staley, Kent W. “Evidential Collaborations: Epistemic and Pragmatic Considerations in ‘Group Belief’.” Social Epistemology 21, no. 3 (2007): 321–35.

Thagard, Paul. “Ulcers and Bacteria I: Discovery and Acceptance.” Studies in History and Philosophy of Biological and Biomedical Sciences 29, no. 1 (1998a): 107–136.

Thagard, Paul. “Ulcers and Bacteria II: Instruments, Experiments, and Social Interactions.” Studies in History and Philosophy of Biological and Biomedical Sciences 29, no. 2 (1998b): 317–342.

Thagard, Paul. How Scientists Explain Disease. Princeton: Princeton University Press, 1999.

Thagard, Paul. “Explaining Economic Crises: Are There Collective Representations?” Episteme 7, no. 3 (2010): 266–283.

Tollefsen, Deborah, and Rick Dale. “Naturalizing Joint Action: A Process-Based Approach.” Philosophical Psychology 25, no. 3 (2012): 385-407.

Tossut, Silvia. “Which Groups Have Scientific Knowledge? A Reply to Chris Dragos.” Social Epistemology Review and Reply Collective 5, no. 7 (2016): 18–21.

Weatherall, James Owen, and Margaret Gilbert. “Collective Belief, Kuhn, and the String Theory Community.” In The Epistemic Life of Groups: Essays in the Epistemology of Collectives, edited by Michael S. Brady and Miranda Fricker, 191–217. Oxford: Oxford University Press, 2016.

Wray, K. Brad. “Collective Belief and Acceptance.” Synthese 129, no. 3 (2001): 319–333.

Wray, K. Brad. “Who Has Scientific Knowledge?” Social Epistemology 21, no. 3 (2007): 335–345.

Wray, K. Brad. “Collective Knowledge and Collective Justification.” Social Epistemology Review and Reply Collective 5, no. 8 (2016): 24–27.

Wray, K. Brad. “The Impact of Collaboration on the Epistemic Cultures of Science.” In Scientific Collaboration and Collective Knowledge, edited by Thomas Boyer-Kassem, Conor Mayo-Wilson, and Michael Weisberg. Forthcoming from Oxford University Press, 2017.

[1] This is not necessarily a fully appropriate term. There has been some debate on whether groups can be jointly committed to beliefs or whether they can merely be jointly committed to accept claims, a debate started by K. Brad Wray (2001). This paper is neutral on this question.

[2] For convenience, I speak of this as the Kuhnian explanation in the paper, but the paper is not intended as a comparison of Gilbert’s account with Kuhn’s account. I do not mean to argue that we should choose Kuhn’s whole theory of scientific change over Gilbert’s theory of scientific change. This is not clearly stated in the paper. I thank Deborah Tollefsen for pointing this out to me. I do think the question, addressed in Weatherall and Gilbert 2016, of how the work of Gilbert relates to that of Kuhn is an important one.

[3] In her paper, they are compared as alternative explanations of the effects of the dogma in reproductive biology that there is no cell renewal in the ovary.

[4] This question was also examined by Rolin 2008. Wray (2007) started a discussion of the general ability of scientific communities to hold views (Rolin 2008; Cheon 2014; Dragos 2016a, b; Tossut 2016; Wray 2016).

[5] For case studies that support the view that smaller groups in science form collective beliefs, see Bouvier 2004; Staley 2007; and Andersen 2010. For a promising approach to test whether joint commitments exist, see Tollefsen and Dale 2012, which gives an account of how empirical research in cognitive science is important to understanding the nature of shared intention.

[6] Gilbert herself has been focusing on the persistence of joint commitments and written very little about the sense in which they lack persistence, but acknowledges that this is an important topic (Gilbert 2014, 32).

[7] Gilbert’s (2000, 47) first paper on the role of collective beliefs in science was prompted by this work.

[8] This work has recently been taken over by others (www.consensus.nih.gov).

[9] One of the phenomena Gilbert tries to make sense of with her account of joint commitment is the case in which people have inconsistent beliefs on some matter and nonetheless let a view stand as the view of the group. Kent Staley (2007) shows that the members of a research team can do (and do) this in epistemically rational ways (see also Wray 2017, 118–119). His argument applies equally well to other groups of scientists.

[10] By contrast, Bird 2010, 10, and de Ridder 2014, 41, state that there is no mechanism of Gilbertian community view formation in science.

[11] Hence, I disagree with Hyundeuk Cheon (2014) who argues that Gilbert and Wray speak about two different types of collective belief. On my interpretation, Gilbert and Wray are examining different questions about non-summative belief rather than different types of collective belief: Wray examines what kinds of groups have beliefs in a non-summative sense, while Gilbert examines how groups have beliefs in a non-summative sense (and argues that they do so by virtue of joint commitments).

[12] I mentioned above that scientists rather infrequently express community views, in whatever sense, but they do so in textbooks.

[13] Fagan argues that the existence of collective beliefs of communities is thus hard to test from their consequences. In the previous section, I made a case for the existence of collective beliefs of scientific communities by looking at how consensus is established at consensus conferences. But this is also hard in the case of “unforced” collective beliefs of communities. It is not clear where we have to look to observe the process by which they are established. They are likely not established at a single event. It is also harder to see whether a group belief is a collective belief when, judging from the personal beliefs of the group members at the time of its establishment, it could just as well be a mere summative belief.