
Author Information: Valerie Joly Chock & Jonathan Matheson, University of North Florida, n01051115@ospreys.unf.edu & j.matheson@unf.edu.

Matheson, Jonathan, and Valerie Joly Chock. “Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 1-9.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44H

Image by sekihan via Flickr / Creative Commons

 

Epistemic injustice occurs when someone is wronged in their capacity as a knower.[1] More and more attention is being paid to the epistemic injustices that exist in our scientific practices. In a recent paper, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. In what follows we briefly explain his argument before raising several challenges to it.

Overview

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. First, let’s get clear on the target. According to Medvecky, science communication is in the business of distributing knowledge – scientific knowledge.

As Medvecky uses the term, ‘science communication’ is an “umbrella term for the research into and the practice of increasing public understanding of and public engagement with science.” (1394) Science communication is thus both a field and a practice, and consists of:

institutionalized science communication; institutionalized in government policies on the public understanding of and public engagement with the sciences; in the growing numbers of academic journals and departments committed to further the enterprise through research and teaching; in requirements set by funding bodies; and in the growing numbers of associations clustering under the umbrella of science communication across the globe. (1395)

Science communication involves the distribution of scientific knowledge from experts to non-experts, so science communication is in the distribution game. As such, Medvecky claims that issues of fair and just distribution arise. According to Medvecky, these issues concern both what knowledge is dispersed, as well as who it is dispersed to.

In examining the fairness of science communication, Medvecky connects his discussion to the literature on epistemic injustice (Anderson, Fricker, Medina). While exploring epistemic injustices in science is not novel, Medvecky’s focus on science communication is. To argue that science communication is epistemically unjust, Medvecky relies on Medina’s (2011) claim that credibility excesses can result in epistemic injustice. Here is José Medina:

[b]y assigning a level of credibility that is not proportionate to the epistemic credentials shown by the speaker, the excessive attribution does a disservice to everybody involved: to the speaker by letting him get away with things; and to everybody else by leaving out of the interaction a crucial aspect of the process of knowledge acquisition: namely, opposing critical resistance and not giving credibility or epistemic authority that has not been earned. (18-19)

Since credibility is comparative, credibility excesses given to members of some group can create epistemic injustice, testimonial injustice in particular, toward members of other groups. Medvecky makes the connection to science communication as follows:

While there are many well-argued reasons for communicating, popularizing, and engaging with science, these are not necessarily reasons for communicating, popularizing, and engaging only with science. Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialized treatment.

This uniqueness creates a credibility excess for science as a field. And since science communication creates credibility excess by implying that concerted efforts to communicate non-science disciplines as fields of reliable knowledge is not needed, then science communication, as a practice and as a discipline, is epistemically unjust. (1400)

While the principal target here is the field of science communication, any credibility excesses enjoyed by the field will trickle down to the practitioners within it. If science is being given a credibility excess, then those engaged in scientific practice and communication are also receiving such a comparative advantage over non-scientists.

So, according to Medvecky, science communication is epistemically unjust to knowers – knowers in non-scientific fields. Since these non-scientific knowers are given a comparative credibility deficit (in contrast to scientific knowers), they are wronged in their capacity as knowers.

The Argument

Medvecky’s argument can be formally put as follows:

  1. Science is not a unique and privileged field.
  2. If (1), then science communication creates a credibility excess for science.
  3. Science communication creates a credibility excess for science.
  4. If (3), then science communication is epistemically unjust.
  5. Science communication is epistemically unjust.

Premise (1) is motivated by the claim that there are fields other than science that are equally important to communicate, to popularize, and to have non-specialists engage with. Medvecky claims that not only does non-scientific knowledge exist, but such knowledge can be just as reliable as scientific knowledge, just as important to our lives, and just as in need of translation into layman’s terms. So, while scientific knowledge is surely important, it is not alone in this respect.

Premise (2) is motivated by claiming that science communication falsely represents science as a unique and privileged field since the concerns of science communication lie solely within the domain of science. By only communicating scientific knowledge, and failing to note that there are other worthy domains of knowledge, science communication falsely presents itself as a privileged field.

As Medvecky puts it, “Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialised treatment.” (1400) So, science communication falsely represents science as special. Falsely representing a field as special in contrast to other fields creates a comparative credibility excess for that field and the members of it.

So, science communication implies that other fields are not as worthy of such engagement by falsely treating science as a unique and privileged field. This gives science and scientists a comparative credibility excess to these other disciplines and their practitioners.

(3) follows validly from (1) and (2). If (1) and (2) are true, science communication creates a credibility excess for science.

Premise (4) is motivated by Medina’s (2011) work on epistemic injustice. Epistemic injustice occurs when someone is harmed in their capacity as a knower. While Fricker limited epistemic injustice (and testimonial injustice in particular) to cases where someone was given a credibility deficit, Medina has forcefully argued that credibility excesses are equally problematic since credibility assessments are often comparative.

Given the comparative nature of credibility assessments, parties can be epistemically harmed even if they are not given a credibility deficit. If other parties are given credibility excesses, a similar epistemic harm can be brought about due to comparative assessments of credibility. So, if science communication gives science a credibility excess, science communication will be epistemically unjust.

(5) follows validly from (3) and (4). If (3) and (4) are true, science communication is epistemically unjust.

The Problems

While Medvecky’s argument is provocative, we believe that it is also problematic. In what follows we motivate a series of objections to his argument. Our focus here will be on the premises that most directly relate to epistemic injustice. So, for our purposes, we are willing to grant premise (1). Even granting (1), there are significant problems with both (2) and (4). Highlighting these issues will be our focus.

We begin with our principal concerns regarding (2). These concerns are best seen by first granting that (1) is true – granting that science is not a unique and privileged field. Even so, science communication would not thereby create a credibility excess. First, it is important to try to locate the source of the alleged credibility excess. Science communicators do deserve a higher degree of credibility in distributing scientific knowledge than non-scientists. When it comes to scientific matters, we should trust the scientists more. So, the claim cannot be that non-scientists should be afforded the same amount of credibility on scientific matters as scientists.

The problem might be thought to be that scientists enjoy a credibility excess in virtue of their scientific credibility somehow carrying over to non-scientific fields where they are less credible. While Medvecky does briefly consider such an issue, this too is not his primary concern in this paper.[2] Medvecky’s fundamental concern is that science communication represents scientific questions and knowledge as more valuable than questions and knowledge in other domains. According to Medvecky, science communication does this by only distributing scientific knowledge when this is not unique and privileged (premise (1)).

But do you represent a domain as more important or valuable just because you don’t talk about other domains? Perhaps an individual who only discussed science in every context would imply that scientific information is the only information worth communicating, but such a situation is quite different from the one we are considering.

For one thing, science communication occurs within a given context, not across all contexts. Further, since that context is expressly about communicating science, it is hard to see how one could reasonably infer that knowledge in other domains is less valuable. Let’s consider an analogy.

Philosophy professors tend to only talk about philosophy during class (or at least let’s suppose). Should students in a philosophy class conclude that other domains of knowledge are less valuable since the philosophy professor hasn’t talked about developments in economics, history, biology, and so forth during class? Given that the professor is only talking about philosophy in one given context, and this context is expressly about communicating philosophy, such inferences would be unreasonable.

A Problem of Overreach

We can further see that there is an issue with (2) because it both overgeneralizes and is overly demanding. Let’s consider these in turn. If (2) is true, then the problem of creating credibility excesses is not unique to science communication. When it comes to knowledge distribution, science communication is far from the only practice/field to have a narrow and limited focus regarding which knowledge it distributes.

So, if there are multiple fields worthy of such engagement (granting (1)), any practice/field that is not concerned with distributing all such knowledge will be guilty of generating a similar credibility excess (or at least trying to). For instance, the American Philosophical Association (APA) is concerned with distributing philosophical knowledge and knowledge related to the discipline of philosophy. It exclusively funds endeavors related to philosophy and public initiatives with a philosophical focus. If doing so is sufficient for creating a credibility excess, given that other fields are equally worthy of such attention, then the APA is creating a credibility excess for the discipline of philosophy. This doesn’t seem right.

Alternatively, consider a local newspaper. This paper is focused on distributing knowledge about local issues. Suppose that it also is involved in the community, both sponsoring local events and initiatives that make the local news more engaging. Supposing that there is nothing unique or privileged about this town, Medvecky’s argument for (2) would have us believe that the paper is creating a credibility excess for the issues of this town. This too is the wrong result.

This overgeneralization problem can also be seen by considering a practical analogy. Suppose that a bakery only sells and distributes baked goods. If there is nothing unique and privileged about baked goods – if there are other equally important goods out there (the parallel of premise (1)) – then Medvecky’s reasoning would have it that the bakery is guilty of a kind of injustice by virtue of not being in the business of distributing those other (equally valuable) goods.

The problem is that omissions in distribution don’t have the implications that Medvecky supposes. The fact that an individual or group is not in the business of distributing some kind of good does not imply that those goods are less valuable.

There are numerous legitimate reasons why one may employ limitations regarding which goods one chooses to distribute, and these limitations do not imply that the other goods are somehow less valuable. Returning to the good of knowledge, focusing on distributing some knowledge (while not distributing other knowledge) does not imply that the other knowledge is less valuable.

This overgeneralization problem leads to an overdemanding problem with (2). The overdemanding problem concerns what all would be required of distributors (whether of knowledge or more tangible goods) in order to avoid committing injustice. If omissions in distribution had the implications that Medvecky supposes, then distributors, in order to avoid injustice, would have to refrain from limiting the goods they distribute.

If (2) is true, then science communication must fairly and equally distribute all knowledge in order to avoid injustice. And, as the problem of creating credibility excesses is not unique to science communication, this would apply to all other fields that involve knowledge distribution as well. The problem here is that avoiding injustice requires far too much of distributors.

An Analogy to Understand Avoiding Injustice

Let’s consider the practical analogy again to see how avoiding injustice is overdemanding. To avoid injustice, the bakery must sell and distribute much more than just baked goods. It must sell and distribute all the other goods that are just as important as the baked ones it offers. The bakery would, then, have to become a supermarket or perhaps even a superstore in order to avoid injustice.

Requiring the bakery to offer much more than baked goods is not only overly demanding but also unfair. The bakery does not stock the other goods it would be required to offer in order to avoid injustice. It may not even have the means needed to acquire these goods, which may itself be part of its reason for limiting the goods it offers.

As it is overdemanding and unfair to require the bakery to sell and distribute all goods in order to avoid injustice, it is overdemanding and unfair to require knowledge distributors to distribute all knowledge. Just as the bakery does not have non-baked goods to offer, those involved in science communication likely do not have the relevant knowledge in the other fields.

Thus, if they are required to distribute that knowledge also, they are required to do a lot of homework. They would have to learn about everything in order to justly distribute all knowledge. This is an unreasonable expectation. Even if they were able to do so, they would not be able to distribute all knowledge in a timely manner. Requiring this much of distributors would slow down the distribution of knowledge.

Furthermore, just as the bakery may not have the means needed to distribute all the other goods, distributors may not have the time or other means to distribute all the knowledge that they are required to distribute in order to avoid injustice. It is reasonable to utilize an epistemic division of labor (including in knowledge distribution), much like there are divisions of labor more generally.

Credibility Excess

A final issue with Medvecky’s argument concerns premise (4). Premise (4) claims that the credibility excess in question results in epistemic injustice. While it is true that a credibility excess can result in epistemic injustice, it need not. So, we need reasons to believe that this particular kind of credibility excess results in epistemic injustice. One reason to think that it does not has to do with the meaning of the term ‘epistemic injustice’ itself.

As it was introduced to the literature by Fricker, and as it has been used since, ‘epistemic injustice’ does not simply refer to any harms to a knower but rather to a particular kind of harm that involves identity prejudice—i.e. prejudice related to one’s social identity. Fricker claims that, “the speaker sustains a testimonial injustice if and only if she receives a credibility deficit owing to identity prejudice in the hearer.” (28)

At the core of both Fricker’s and Medina’s accounts of epistemic injustice is the relation between unfair credibility assessments and prejudices that distort the hearer’s perception of the speaker’s credibility. Prejudices about particular groups are what unfairly affect (positively or negatively) the epistemic authority and credibility that hearers grant to members of such groups.

Mere epistemic errors in credibility assessments, however, do not create epistemic injustice. While a credibility excess may result in an epistemic harm, whether this is a case of epistemic injustice depends upon the reason why that credibility excess is given. Fricker and Medina both argue that in order for an epistemic harm to be an instance of epistemic injustice, it must be systematic. That is, the epistemic harm must be connected to an identity prejudice that renders the subject at the receiving end of the harm susceptible to other types of injustices besides testimonial.

Fricker argues that epistemic injustice is the product of prejudices that “track” the subject through different dimensions of social activity (e.g. economic, professional, political, religious, etc.). She calls these “tracker prejudices” (27). When tracker prejudices lead to epistemic injustice, this injustice is systematic because it is systematically connected to other kinds of injustice.

Thus, a prejudice is systematic when it persistently affects the subject’s credibility in various social directions. Medina accepts this and argues that credibility excess results in epistemic injustice when it is caused by a pattern of wrongful differential treatment that stems in part from mismatches between reality and the social imaginary, which he defines as the collectively shared pool of information that provides the social perceptions against which people assess each other’s credibility (Medina 2011).

He claims that a prejudiced social imaginary is what establishes and sustains epistemic injustices. As such, prejudices are crucial in determining whether credibility excesses result in epistemic injustice. If the credibility excess stems from a systematically prejudiced social imaginary, then it results in epistemic injustice. If systematic prejudices are absent, then, even if there is credibility excess, there is no epistemic injustice.

Systemic Prejudice

For there to be epistemic injustice, then, the credibility excess must carry over across contexts and must be produced and sustained by systematic identity prejudices. This does not happen in Medvecky’s account given that the kind of credibility excess that he is concerned with is limited to the context in which science communication occurs.

Thus, even if there were credibility excess, and this credibility excess led to epistemic harms, such harms would not amount to epistemic injustice given that the credibility excess does not extend across contexts. Further, the kind of credibility excess that Medvecky is concerned with is not linked to systematic identity prejudices.

In his argument, Medvecky does not consider prejudices. Rather than credibility excesses being granted due to a prejudiced social imaginary, Medvecky argues that the credibility excess attributed to science communicators stems from omission. According to him, science communication as a practice and as a discipline is epistemically unjust because it creates credibility excess by implying (through omission) that science is the only reliable field worthy of engagement.

On Medvecky’s account, the reason for the attribution of credibility excess is not prejudice but rather the limited focus of science communication. Thus, he argues that merely by not distributing knowledge from fields other than science, science communication creates a credibility excess for science that is worthy of the label of ‘epistemic injustice’. Medvecky acknowledges that Fricker would not agree that this credibility assessment results in injustice given that it is based on credibility excess rather than credibility deficits, which is itself why he bases his argument on Medina’s account of epistemic injustice.

However, given that Medvecky ignores the kind of systematic prejudice that is necessary for epistemic injustice under Medina’s account, it seems that Medina would not agree, either, that these cases are of the kind that result in epistemic injustice.[3] Even if omissions in the distribution of knowledge had the implications that Medvecky supposes, and it were the case that science communication indeed created a credibility excess for science in this way, this kind of credibility excess would still not be sufficient for epistemic injustice as it is understood in the literature.

Thus, it is not the case that science communication is, as Medvecky argues, fundamentally epistemically unjust because the reasons why the credibility excess is attributed have nothing to do with prejudice and do not occur across contexts. While it is true that there may be epistemic harms that have nothing to do with prejudice, such harms would not amount to epistemic injustice, at least as it is traditionally understood.

Conclusion

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that epistemic injustice lies at the very foundation of science communication. While we agree that there are numerous ways that scientific practices are epistemically unjust, the fact that science communication involves only communicating science does not have the consequences that Medvecky maintains.

We have seen several reasons to deny that failing to distribute other kinds of knowledge implies that they are less valuable than the knowledge one does distribute, as well as reasons to believe that the term ‘epistemic injustice’ wouldn’t apply to such harms even if they did occur. So, while thought provoking and bold, Medvecky’s argument should be resisted.

Contact details: j.matheson@unf.edu, n01051115@ospreys.unf.edu

References

Dotson, K. (2011). Tracking epistemic violence, tracking patterns of silencing. Hypatia, 26(2), 236–257.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Medina, J. (2011). The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology, 25(1), 15–35.

Medvecky, F. (2018). Fairness in knowing: Science communication and epistemic justice. Science and Engineering Ethics, 24, 1393–1408.

[1] This is Fricker’s description; see Fricker (2007, p. 1).

[2] Medvecky considers Richard Dawkins being given more credibility than he deserves on matters of religion due to his credibility as a scientist.

[3] A potential response to this point could be to consider scientism as a kind of prejudice akin to sexism or racism. Perhaps an argument can be made where an individual has the identity of ‘science communicator’ and receives credibility excess in virtue of an identity prejudice that favors science communicators. Even still, to be an epistemic injustice this excess must track the individual across contexts, as the identities related to sexism and racism do. For it to be one, a successful argument must be given for there being a ‘pro science communicator’ prejudice that is similar in effect to ‘pro male’ and ‘pro white’ prejudices. If this is what Medvecky has in mind, then we need to hear much more about why we should buy the analogy here.

Author Information: William Davis, California Northstate University, William.Davis@csnu.edu.

Davis, William. “Crisis. Reform. Repeat.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 37-44.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-422

Yale University, in the skyline of New Haven, Connecticut.
Image by Ali Eminov via Flickr / Creative Commons

 

If you have been involved in higher education in recent decades, you have noticed shifts in how courses are conceived and delivered, and what students, teachers, and administrators expect of each other. Also, water feels wet. The latter statement offers as much insight as the first. When authors argue the need for new academic models, indeed that a kind of crisis in United States higher education is occurring, faculty and administrators in higher education may be forgiven for giving a yawning reply: not much insight there.

Another Crisis

Those with far more experience in academia than I will, likely, shake their heads and scoff: demands for shifts in educational models and practices seemingly occur every few years. Not long ago, I was part of the SERRC Collective Judgment Forum (2013) debating the notion that Massive Open Online Courses (MOOCs) are the future of higher education. The possibilities and challenges portended by online education would disrupt (“disruptive technologies” often represent the goals not the fears of the California culture where I live and work) the landscape of colleges and universities in the United States and the rest of the world.

Higher education would have to adapt to meet the needs of burgeoning numbers of people (at what point does one become a ‘student’?) seeking knowledge. The system of higher education faced a crisis; the thousands of people enrolling in MOOCs indicated that hordes of students might abandon traditional universities and embrace new styles of learning that matched the demands of twenty-first century life.

Can you count the number of professional crises you have lived through? If the humanities and/or social sciences are your home, then you likely remember quite a few (Kalin, 2017; Mandler, 2015; Tworek, 2013). That number, of course, represents calamity on a local level: crises that affect you, that loom over your future employment. For many academics, MOOCs felt like just such a threat.

Historian of technology Thomas Hughes (1994)[i] describes patterns in the development, change, and emergence of technologies as “technological momentum.” Technological momentum bridges two expansive and nuanced theories of technological development: determinism—the claim that technologies are the crucial drivers of culture—and constructivism—the idea that cultures drive technological change. MOOCs might motivate change in higher education, but the demands of relevant social groups (Pinch and Bijker 1984) would alter MOOCs, too.

Professors ought not fear their jobs would disappear or consolidate so precipitously that the profession itself would be transformed in a few years or decade: the mammoth system of higher education in the U.S. has its own inertia. Change would happen over time; teachers, students, and universities would adapt and exert counter-influences. Water feels wet.

MOOCs have not revolutionized models of higher education in the United States. Behind the eagerness for models of learning that will satisfy increasing numbers of people seeking higher education, of which MOOCs are one example, lies a growing concern about how higher education is organized, practiced, and evaluated. To understand the changes that higher education seems to require, we ought first to understand what it currently offers. Cathy Davidson (2017), as well as Michael Crow and William Dabars (2015), offer such histories of college and university systems in the United States. Their works demonstrate that a crisis in higher education does not approach; it has arrived.

Education in an Age of Flux

I teach at a new college in a university that opened its doors only a decade ago. One might expect that a new college offers boundless opportunity to address a crisis: create a program of study and methods of evaluating that program (including the students and faculty) that will meet the needs of the twenty-first century world. Situated as we are in northern California, and with faculty trained at Research 1 (R1) institutions, our college could draw from various models of traditional higher education like the University of California system or even private institutions (as we are) like Stanford.

These institutions set lofty standards, but do they represent the kinds of institutions that we ought to emulate? The research of Davidson (2017) and of Crow and Dabars (2015) recommends that we not follow the well-worn paths that established universities (those in existence for at least a few decades) in the United States have trodden. The authors seem to adopt the perspective that higher education functions like a system of technology (Hughes 1994); the momentum exerted by such systems has determining effects, but the possibility of directing the course of the systems exists nevertheless.

Michael Crow and William Dabars (2015) propose a design for reshaping U.S. universities that does not require the total abandonment of current models. The impetus for the needed transformation, they claim, is that the foundations of higher education in the U.S. have decayed; universities cannot meet the demands of the era.

The priorities that once drove research institutions have been so assiduously copied, like so much assessment based on memorization and regurgitation that teachers of undergraduates might recognize, that their legibility and efficacy have faded. Crow and Dabars target elite, private institutions like Dartmouth and Harvard as exemplars of higher education that cannot, under their current alignment, meet the needs of twenty-first century students. Concerned as they are with egalitarianism, the authors note that public institutions of higher education born from the Morrill Acts of 1862 and 1890 fare no better at providing for the needs of the nation’s people (National Research Council 1995).

Crow and Dabars’s New American University model (2015, pp. 6-8) emphasizes access, discovery, inclusiveness, and functionality. Education ought to be available to all (access and inclusiveness) that seek knowledge and understanding of the world (discovery) in order to operate within, change, and/or improve it (functionality). The Morrill Acts, on a charitable reading, represent the United States of America’s assertion that the country and its people would mutually benefit from public education available to large swaths of the population.

Crow and Dabars, as well as Davidson (2017), base their interventions on an ostensibly similar claim: more people need better access to resources that will foster intellectual development and permit them to lead more productive lives. The nation benefits when individuals have stimulating engagement with ideas through competent instruction.  Individuals benefit because they may pursue their own goals that, in turn, will ideally benefit the nation.

Arizona State University epitomizes the New American University model. ASU enrolls over 70,000 students—many in online programs—and prides itself on the numbers of students it accepts rather than rejects (compare such a stance with Ivy League schools in the U.S.A.). Crow, President of ASU since 2002, has fostered an interdisciplinary approach to higher education at the university. Numerous institutes and centers (well over 50) have been created to focus student learning on issues/topics of present and future concern. For instance, the Decision Center for a Desert City asks students to imagine a future Phoenix, Arizona, with no, or incredibly limited, access to fresh water.

To engage with a topic that impacts manifold aspects of cities and citizens, solutions will require perspectives from work in disciplines ranging from engineering and the physical sciences to the social sciences and the humanities. The traditional colleges of, e.g., Engineering, Law, Arts and Sciences, etc., still exist at ASU. However, the institutes and centers appear as semi-autonomous empires with faculty from multiple disciplines, and often with interdisciplinary training themselves, leading students to investigate causes of and solutions to existing and emerging problems.

ASU aims to educate broad sections of the population, not just those with imposing standardized test scores and impressive high school GPAs, to tackle obstacles facing our country and our world. Science and Technology Studies, an interdisciplinary program with scholars that Crow and Dabars frequently cite in their text, attracted my interest because its practitioners embrace ‘messy’ problems that require input from, just to name a few, historians, philosophers, political scientists, and sociologists. While a graduate student in STS, I struggled to explain my program of study to others without referencing existing disciplines like philosophy, history, etc. Though I studied in an interdisciplinary program, I still conceptualized education in disciplinary silos.

As ASU graduates more students, and attracts more interdisciplinary scholars as teachers, we ought to observe how their experiment in education impacts the issues and problems their centers and institutes investigate as well as the students themselves. If students learn from interdisciplinary educators, alongside other students that have not been trained exclusively in the theories and practices of, say, the physical sciences or humanities and social sciences, then they might not see difficult challenges like mental illness in the homeless population of major U.S. cities as concerns to be addressed mainly by psychology, pharmacology, and/or sociology.

Cathy Davidson’s The New Education offers specific illustrations of pedagogical practices that mesh well with Crow and Dabars’s message. Both texts urge universities to include larger numbers of students in research and design, particularly students that do not envision themselves in fields like engineering and the physical sciences. Elite, small universities like Duke, where Davidson previously taught, will struggle to scale up to educate the masses of students that seek higher education, even if they desired to do so.

Further, the kinds of students these institutions attract do not represent the majority of people seeking to further their education beyond the high school level. All colleges and universities need not admit every applicant to align with the models presented by Davidson, Crow and Dabars, but they must commit to interdisciplinary approaches. As a scholar with degrees in Science and Technology Studies, I am an eager acolyte: I buy into the interdisciplinary model of education, and I am part of a college that seeks to implement some version of that model.

Questioning the Wisdom of Tradition

We assume that our institutions have been optimally structured and inherently calibrated not only to facilitate the production and diffusion of knowledge but also to seek knowledge with purpose and link useful knowledge with action for the common good. (Crow and Dabars 2015, 179)

The institutions that Crow, Dabars, and Davidson critique as emblematic of traditional models of higher education have histories that range from decades to centuries. As faculty at a college of health sciences established the same year Crow and Dabars published their work, I am both excited by their proposals and frustrated by the attempts to implement them.

My college currently focuses on preparing students for careers in the health sciences, particularly medicine and pharmacy. Most of our faculty are early-career professionals; we come to the college with memories of how departments were organized at our previous institutions.

Because of my background in an interdisciplinary graduate program at Virginia Tech, and my interest in the program’s history (originally organized as the Center for the Study of Science in Society), I had the chance to interview professors that worked to develop the structures that would “facilitate the production and diffusion of knowledge” (Crow and Dabars 2015, 179). Like those early professors at Virginia Tech, our current faculty at California Northstate University College of Health Sciences come from distinct disciplines and have limited experience with the challenges of designing and implementing interdisciplinary coursework. We endeavor to foster collaboration across disciplines, but we learn as we go.

Crow and Dabars’s chapter “Designing Knowledge Enterprises” reminds one of what a new institution lacks: momentum. At meetings spread out over nearly a year, our faculty discussed and debated the nuances of a promotion and retention policy that acknowledges the contributions of all faculty while satisfying administrative demands that faculty titles, like assistant, associate, and full professor, reflect the practices of other institutions. What markers indicate that a scholar has achieved the level of, say, associate professor?

Originally trained in disciplines like biology, chemistry, physics, or English (coming from the interdisciplinary program of Science and Technology Studies, I am a bit of an outlier), our faculty have been disciplined to think in terms of our own areas of study. We have been trained to advance knowledge in increasingly particular specialties. The criteria to determine a faculty member’s level largely match what other institutions have developed. Although the faculty endeavored to create a holistic rubric for faculty evaluation, we confronted an administration more familiar with analytic rubrics. How can a university committee compare the work done by professors of genetics and composition?[ii]

Without institutional memory to guide us, the policies and directives at my college of health sciences develop through collective deliberation on the needs of our students, staff, faculty, college, and community. We do not invent policy. We examine publicly available policies created at and for other institutions of higher learning to help guide our own decisions and proposals. Though we can glean much from elite private institutions, as described by Crow and Dabars, and from celebrated public institutions like the University of California or California State University systems that Davidson draws upon at times in her text, my colleagues know that we are not like those other institutions and systems of higher education.

Our college’s diminutive size (faculty, staff, and students) lends itself to agility: when a policy is flawed, we can quickly recognize a problem and adjust it (not to say we rectify it, but we move in the direction of doing so, e.g., a promotion policy with criteria appropriate for faculty, and administrators, from any department). If we identify student, staff, faculty, or administrator needs that have gone unaddressed, we modify or add policies.

The size of our college certainly limits what we can do: we lack the faculty and student numbers to engage in as many projects as we like. We do not have access to the financial reservoirs of large or long-standing institutions to purchase all the equipment one finds at a University of California campus, so we must be creative and make use of what materials we do possess or can purchase.

What our college lacks, somewhat counterintuitively, sets us up to carry forth with what Davidson (2017) describes in her chapter “The Future of Learning:”

The lecture is broken, so we must think of better ways to incorporate active learning into the classroom . . . . The traditional professional and apprentice models don’t teach students how to be experts, and so we must look to peer learning and peer mentoring, rich cocurricular experiences, and research to put the student, not the professor or the institution, at the center. (248-9)

Davidson does not contend that lecture has no place in a classroom. She champions flipped classrooms (Armbruster, Patel, Johnson, and Weiss 2009) and learning spaces that emphasize active student engagement (Elby 2001; Johnson and Johnson 1999) with ideas and concepts—e.g., forming and critiquing arguments (Kuhn 2010).

Claiming that universities “must prepare our students for their epic journey . . . . should give them agency . . . to push back [against the world] and not merely adapt to it” (Davidson 2017, 13) sounds simultaneously like fodder for a press-release and a call to action. It will likely strike educators, a particular audience of Davidson’s text, as obvious, but that should not detract from its intentions. Yes, students need to learn to adapt and be flexible—their chosen professions will almost certainly transform in the coming decades. College students ought to consider the kinds of lives they want to live and the people they want to be, not just the kinds of professions they wish to pursue.

Ought we demonstrate for students that the university symbolizes a locale to cultivate a perspective of “sympathy, empathy, sensitivity, and responsiveness” (Held 2011, p. 479)? Do we see ourselves in a symbiotic world (Margulis and Sagan) or an adversarial world of competition? Davidson, Crow, and Dabars propose a narrative of connectivity, not just of academic disciplines, but of everyday problems and concerns. Professors ought to continue advancing knowledge, even in particular disciplines, but we must not imagine that we do it alone (individually, in teams, in disciplines, or even in institutions).

After Sifting: What to Keep

Crow and Dabars emphasize the interplay between form and function as integral to developing a model for the New American University. We at California Northstate also scrutinize the structure of our colleges. Though our college of health sciences has a life and physical science department, and a department of humanities and social sciences, our full-time faculty number less than twenty. We are on college and university committees together; we are, daily, visible to each other.

With varying levels of success so far, we have developed integrated course-based undergraduate research experiences for our students. In the coming year, we aim to integrate projects in humanities and social sciences courses with those from the physical sciences. Most of our students want to be health practitioners, and we endeavor to demonstrate to them the usefulness of chemistry along with service learning. As we integrate our courses, research, and outreach projects, we aim to provide students with an understanding that the pieces (courses) that make up their education unify through our work and their own.

Team teaching a research methods course with professors of genetics and chemistry in the fall of 2017, I witnessed the rigor and the creativity required for life and physical science research. Students were often confused: the teachers approached the same topics from seemingly disparate perspectives. As my PhD advisor, James Collier, often recounted to me regarding his graduate education in Science and Technology Studies (STS), graduate students were often expected to be the sites of synthesis. Professors came from traditional departments like history, philosophy, and sociology; students in STS needed to absorb the styles and techniques of various disciplines to emerge as interdisciplinarians.

Our students in the research methods class that fall saw a biologist, a chemist, and an STS scholar and likely thought: I want to be none of those things. Why should I learn how to be a health practitioner from professors that do not identify as health practitioners themselves?

When faculty adapt to meet the needs of students pursuing higher education, we often develop the kinds of creole languages elaborated by Peter Galison (1997) to help our students see the connections between traditionally distinct areas of study. Our students, then, should be educated to speak in multiple registers depending on their audience, and we must model that for them. Hailing from disparate disciplines and attempting to teach in ways distinct from how we were taught (e.g., flipped classrooms) and from perspectives still maturing (interdisciplinarity), university faculty have much to learn.

Our institutions, too, need to adapt: traditional distinctions of teaching, scholarship, and service (the hallmarks of many university promotion policies) will demand adjustment if they are to serve as accurate markers of the work we perform. Students, as stakeholders in their own education, should observe faculty as we struggle to become what we wish to see from them. Davidson, Crow, and Dabars argue that current and future crises will not be resolved effectively by approaches that imagine problems as solely technical, social, economic, cultural, or political. For institutions of higher education to serve the needs of their people, nations, and environments (just some of the pieces that must be served), they must acclimate to a world of increasing connectivity. I know: water feels wet.

Contact details: William.Davis@csnu.edu

References

Armbruster, Peter, Maya Patel, Erika Johnson, and Martha Weiss. 2009. “Active Learning and Student-centered Pedagogy Improve Student Attitudes and Performance in Introductory Biology.” Cell Biology Education—Life Sciences Education 8: 203-13.

Bijker, Wiebe. 1993. “Do Not Despair: There Is Life after Constructivism.” Science, Technology and Human Values 18: 113-38.

Crow, Michael, and William Dabars. 2015. Designing the New American University. Baltimore, MD: Johns Hopkins University Press.

Davidson, Cathy. 2017. The New Education: How to Revolutionize the University to Prepare Students for a World in Flux. New York: Basic Books.

Davis, William, Martin Evenden, Gregory Sandstrom, and Aliaksandr Puptsau. 2013. “Are MOOCs the Future of Higher Education? A Collective Judgment Forum.” Social Epistemology Review and Reply Collective 2 (7): 23-27.

Elby, Andrew. 2001. “Helping Physics Students Learn How to Learn.” American Journal of Physics (Physics Education Research Supplement) 69 (S1): S54-S64.

Galison, Peter. 1997. Image and Logic: A Material Culture of Microphysics. Chicago, IL: The University of Chicago Press.

Hughes, Thomas. 1994. “Technological Momentum.” In Does Technology Drive History? The Dilemma of Technological Determinism, edited by Merritt Roe Smith and Leo Marx. Cambridge, MA: MIT Press.

Johnson, David, and Roger T. Johnson. 1999. “Making Cooperative Learning Work.” Theory into Practice 38 (2): 67-73.

Kalin, Mike. 2017. “The Crisis in the Humanities: A Self-Inflicted Wound?” Independent School, Winter 2017. https://www.nais.org/magazine/independent-school/winter-2017/the-crisis-in-the-humanities-a-self-inflicted-wou/

Kuhn, Deanna. 2010. “Teaching and Learning Science as Argument.” Science Education 94 (5): 810-24.

Mandler, Peter. 2015. “Rise of the Humanities.” Aeon Magazine, December 17, 2015. https://aeon.co/essays/the-humanities-are-booming-only-the-professors-can-t-see-it

National Research Council. 1995. Colleges of Agriculture at the Land Grant Universities: A Profile. Washington, D.C.: National Academy Press.

Pinch, Trevor, and Wiebe Bijker. 1984. “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other.” Social Studies of Science 14: 399-441.

Smith, Merritt Roe, and Leo Marx, eds. 1994. Does Technology Drive History? The Dilemma of Technological Determinism. Cambridge, MA: MIT Press.

Tworek, Heidi. 2013. “The Real Reason the Humanities Are ‘in Crisis.’” The Atlantic, December 18, 2013. https://www.theatlantic.com/education/archive/2013/12/the-real-reason-the-humanities-are-in-crisis/282441/

[i] My descriptions here of technological determinism and social constructivism lack nuance. For specifics regarding determinism, see the 1994 anthology from Leo Marx and Merritt Roe Smith, Does Technology Drive History? For richer explanations of constructivism, see Bijker (1993), “Do Not Despair: There Is Life after Constructivism,” and Pinch and Bijker (1984), “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other.”

[ii] Hardly rhetorical, that last question is live on my campus. If you have suggestions, please write me.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsasswe@uccs.edu.

Sassower, Raphael. “Post-Truths and Inconvenient Facts.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 47-60.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40g

Can one truly refuse to believe facts?
Image by Oxfam International via Flickr / Creative Commons

 

If nothing else, Steve Fuller has his finger on the pulse of popular culture and of the academics who engage in its twists and turns. Starting with Brexit and continuing into the Trump-era abyss, “post-truth” was dubbed by the OED as its word of the year in 2016. Fuller has mustered his collected publications to recast the debate over post-truth and frame it within STS in general and his own contributions to social epistemology in particular.

This could have been a public mea culpa of sorts: we, the community of sociologists (and some straggling philosophers and anthropologists and perhaps some poststructuralists), may seem to someone who isn’t reading our critiques carefully to be partially responsible for legitimating the dismissal of empirical data, evidence-based statements, and the means by which scientific claims can be deemed not only credible but true. Instead, we are dazzled by a range of topics (historically anchored) that explain how we got to Brexit and Trump—yet Fuller’s analyses of them don’t ring alarm bells. There is almost a hidden glee that indeed the privileged scientific establishment, insular scientific discourse, and some of its experts who pontificate authoritative consensus claims are all bound to be undone by the rebellion of mavericks and iconoclasts that include intelligent design promoters and neoliberal freedom fighters.

In what follows, I do not intend to summarize the book, as it is short and entertaining enough for anyone to read on their own. Instead, I wish to outline three interrelated points that one might argue need not be argued but, apparently, do: 1) certain critiques of science have contributed to the Trumpist mindset; 2) the politics of Trumpism is too dangerous to be sanguine about; 3) the post-truth condition is troublesome and insidious. Though Fuller deals with some of these issues, I hope to add some constructive clarification to them.

Part One: Critiques of Science

As Theodor Adorno reminds us, critique is essential not only for philosophy, but also for democracy. He is aware that the “critic becomes a divisive influence, with a totalitarian phrase, a subversive” (1998/1963, 283) insofar as the status quo is being challenged and sacred political institutions might have to change. The price of critique, then, can be high, and therefore critique should be managed carefully and only cautiously deployed. Should we refrain from critique, then? Not at all, continues Adorno.

But if you think that a broad, useful distinction can be offered among different critiques, think again: “[In] the division between responsible critique, namely, that practiced by those who bear public responsibility, and irresponsible critique, namely, that practiced by those who cannot be held accountable for the consequences, critique is already neutralized.” (Ibid. 285) Adorno’s worry is not only that one forgets that “the truth content of critique alone should be that authority [that decides if it’s responsible],” but that when such a criterion is “unilaterally invoked,” critique itself can lose its power and be at the service “of those who oppose the critical spirit of a democratic society.” (Ibid)

In a political setting, the charge of irresponsible critique shuts the conversation down and ensures political hegemony without disruptions. Modifying Adorno’s distinction between (politically) responsible and irresponsible critiques, responsible scientific critiques are constructive insofar as they attempt to improve methods of inquiry, data collection and analysis, and contribute to the accumulated knowledge of a community; irresponsible scientific critiques are those whose goal is to undermine the very quest for objective knowledge and the means by which such knowledge can be ascertained. Questions about the legitimacy of scientific authority are related to but not of exclusive importance for these critiques.

Have those of us committed to the critique of science missed the mark of the distinction between responsible and irresponsible critiques? Have we become so subversive and perhaps self-righteous that science itself has been threatened? Though Fuller is primarily concerned with the hegemony of the sociology of science studies and the movement he has championed under the banner of “social epistemology” since the 1980s, he does acknowledge the Popperians and their critique of scientific progress and even admires the Popperian contribution to the scientific enterprise.

But he is reluctant to recognize the contributions of Marxists, poststructuralists, and postmodernists who have been critically engaging the power of science since the 19th century. Among them, we find Jean-François Lyotard who, in The Postmodern Condition (1984/1979), follows Marxists and neo-Marxists who have regularly lumped science and scientific discourse with capitalism and power. This critical trajectory has been well rehearsed, so suffice it here to say, SSK, SE, and the Edinburgh “Strong Programme” are part of a long and rich critical tradition (whose origins are Marxist). Adorno’s Frankfurt School is part of this tradition, and as we think about science, which had come to dominate Western culture by the 20th century (in the place of religion, whose power had by then waned as the arbiter of truth), it was its privileged power and interlocking financial benefits that drew the ire of critics.

Were these critics “responsible” in Adorno’s political sense? Can they be held accountable for offering (scientific and not political) critiques that improve the scientific process of adjudication between criteria of empirical validity and logical consistency? Not always. Did they realize that their success could throw the baby out with the bathwater? Not always. While Fuller grants Karl Popper the upper hand (as compared to Thomas Kuhn) when indirectly addressing such questions, we must keep an eye on Fuller’s “baby.” It’s easy to overlook the slippage from the political to the scientific and vice versa: Popper’s claim that we never know the Truth doesn’t mean that his (and our) quest for discovering the Truth as such is given up; it’s only made more difficult, as whatever is scientifically apprehended as truth remains putative.

Limits to Skepticism

What is precious about the baby—science in general, and scientific discourse and its community in more particular ways—is that it offered safeguards against frivolous skepticism. Robert Merton (1973/1942) famously outlined the four features of the scientific ethos, principles that characterized the ideal workings of the scientific community: universalism, communism (communalism, as per the Cold War terror), disinterestedness, and organized skepticism. It is the last principle that is relevant here, since it unequivocally demands an institutionalized mindset of putative acceptance of any hypothesis or theory that is articulated by any community member.

One detects the slippery slope that would move one from being on guard when engaged with any proposal to being so skeptical as to never accept any proposal no matter how well documented or empirically supported. Al Gore, in his An Inconvenient Truth (2006), sounded the alarm about climate change. A dozen years later we are still plagued by climate-change deniers who refuse to look at the evidence, suggesting instead that the standards of science themselves—from the collection of data in the North Pole to computer simulations—have not been sufficiently fulfilled (“questions remain”) to accept human responsibility for the increase of the earth’s temperature. Incidentally, here is Fuller’s explanation of his own apparent doubt about climate change:

Consider someone like myself who was born in the midst of the Cold War. In my lifetime, scientific predictions surrounding global climate change has [sic.] veered from a deep frozen to an overheated version of the apocalypse, based on a combination of improved data, models and, not least, a geopolitical paradigm shift that has come to downplay the likelihood of a total nuclear war. Why, then, should I not expect a significant, if not comparable, alteration of collective scientific judgement in the rest of my lifetime? (86)

Expecting changes in the model does not entail a) that no improved model can be offered; b) that methodological changes in themselves are a bad thing (they might be, rather, improvements); or c) that one should not take action at all based on the current model because in the future the model might change.

The Royal Society of London (1660) set the benchmark of scientific credibility low when it accepted as scientific evidence any report by two independent witnesses. As the years went by, testability (“confirmation” for the Vienna Circle, “falsification” for Popper) and repeatability were added as requirements for a report to be considered scientific, and by now, various other conditions have been proposed. Skepticism, organized or personal, remains at the very heart of the scientific march towards certainty (or at least high probability), but when used perniciously, it has derailed reasonable attempts to use science as a means by which to protect, for example, public health.

Both Michael Bowker (2003) and Robert Proctor (1995) chronicle cases where asbestos and cigarette lobbyists and lawyers alike were able to sow enough doubt, in the name of allegedly insufficient scientific data collection, to ward off regulators, legislators, and the courts for decades. Instead of finding sufficient empirical evidence to attribute the failing health (and deaths) of workers and consumers to asbestos and nicotine, “organized skepticism” was weaponized to fight the sick and protect the interests of large corporations and their insurers.

Instead of buttressing scientific claims (that have passed the tests—in refereed professional conferences and publications, for example—of most institutional scientific skeptics), organized skepticism has been manipulated to ensure that no claim is ever scientific enough to earn the legitimacy of the scientific community. In other words, what should have remained the reasonable cautionary tale of a disinterested and communal activity (that could then be deemed universally credible) has turned into a circus of fire-blowing clowns ready to burn down the tent. The public remains confused, not realizing that just because the stakes have risen over the decades does not mean there are no standards that can ever be met. Despite lobbyists’ and lawyers’ best efforts at derailment, courts have eventually found cigarette companies and asbestos manufacturers guilty of exposing workers and consumers to deadly hazards.

Limits to Belief

If we add to this logic of doubt, which has been responsible for discrediting science and the conditions for proposing credible claims, a bit of U.S. cultural history, we may enjoy a more comprehensive picture of the unintended consequences of certain critiques of science. Citing Kurt Andersen (2017), Robert Darnton suggests that the Enlightenment’s “rational individualism interacted with the older Puritan faith in the individual’s inner knowledge of the ways of Providence, and the result was a peculiarly American conviction about everyone’s unmediated access to reality, whether in the natural world or the spiritual world. If we believe it, it must be true.” (2018, 68)

This way of thinking—unmediated experiences and beliefs, unconfirmed observations, and disregard of others’ experiences and beliefs—continues what Richard Hofstadter (1962) dubbed “anti-intellectualism.” For Americans, this predates the republic and is characterized by a hostility towards the life of the mind (admittedly, at the time, religious texts), critical thinking (self-reflection and the rules of logic), and even literacy. The heart (our emotions) can more honestly lead us to the Promised Land, whether it is heaven on earth in the Americas or the Christian afterlife; any textual interference or reflective pondering is necessarily an impediment, one to be suspicious of and avoided.

This lethal combination of the life of the heart and righteous individualism brings about general ignorance and what psychologists call “confirmation bias” (the tendency to endorse what we already believe to be true regardless of countervailing evidence). The critique of science, along this trajectory, can be but one of many so-called critiques of anything said or proven by anyone whose ideology we do not endorse. But is this even critique?

Adorno would find this a charade, a pretense that poses as a critique but in reality is a simple dismissal without intellectual engagement, a dogmatic refusal to listen and observe. He definitely would be horrified by Stephen Colbert’s oft-quoted quip on “truthiness” as “the conviction that what you feel to be true must be true.” Even those who resurrect Daniel Patrick Moynihan’s phrase, “You are entitled to your own opinion, but not to your own facts,” quietly admit that his admonishment is ignored by media more popular than informed.

On Responsible Critique

But surely there is merit to responsible critiques of science. Weren’t many of these critiques meant to dethrone the unparalleled authority claimed in the name of science, as Fuller admits all along? Wasn’t Lyotard (and Marx before him), for example, correct in pointing out the conflation of power and money in the scientific vortex that could legitimate whatever profit-maximizers desire? In other words, should scientific discourse be put on a par with other discourses? Whose credibility ought to be challenged, and whose truth claims deserve scrutiny? Can we privilege or distinguish science if it is true, as Monya Baker has reported, that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments” (2016, 1)?

Fuller remains silent about these important and responsible questions about the problematics (methodologically and financially) of reproducing scientific experiments. Baker’s report cites Nature‘s survey of 1,576 researchers and reveals “sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Ibid.) So, if science relies on reproducibility as a cornerstone of its legitimacy (and superiority over other discourses), and if the results are so dismal, should it not be discredited?

One answer, given by Hans E. Plesser, suggests that there is a confusion between the notions of repeatability (“same team, same experimental setup”), replicability (“different team, same experimental setup”), and reproducibility (“different team, different experimental setup”). If understood in these terms, it stands to reason that one may not get the same results all the time and that this fact alone does not discredit the scientific enterprise as a whole. Nuanced distinctions take us down a scientific rabbit-hole most post-truth advocates refuse to follow. These nuances are lost on a public that demands to know the “bottom line” in brief sound bites: Is science scientific enough, or is it bunk? When can we trust it?

Trump excels at this kind of rhetorical device: repeat a falsehood often enough and people will believe it; and because individual critical faculties are not a prerequisite for citizenship, post-truth means no truth, or whatever the president says is true. Adorno’s distinction between responsible and irresponsible political critics comes into play here; but he innocently failed to anticipate the Trumpian move to conflate the political and the scientific and pretend that there is no distinction—methodologically and institutionally—between political and scientific discourses.

With this cultural backdrop, many critiques of science have undermined its authority and thereby lent credence to any dismissal of science (legitimately by insiders and perhaps illegitimately at times by outsiders). Sociologists and postmodernists alike forgot to put warning signs on their academic and intellectual texts: Beware of hasty generalizations! Watch out for wolves in sheep’s clothing! Don’t throw the baby out with the bathwater!

One would think such advisories unnecessary. Yet without such safeguards, internal disputes and critical investigations appear to have unintentionally discredited the entire scientific enterprise in the eyes of post-truth promoters, the Trumpists whose neoliberal spectacles filter in dollar signs and filter out pollution on the horizon. The discrediting of science has become a welcome distraction that opens the way to radical free-market mentality, spanning from the exploitation of free speech to resource extraction to the debasement of political institutions, from courts of law to unfettered globalization. In this sense, internal (responsible) critiques of the scientific community and its internal politics, for example, unfortunately license external (irresponsible) critiques of science, the kind that obscure the original intent of responsible critiques. Post-truth claims at the behest of corporate interests sanction a free-for-all where the concentrated power of the few silences the concerns of the many.

Indigenous-allied protestors block the entrance to an oil facility related to the Kinder Morgan oil pipeline in Alberta.
Image by Peg Hunter via Flickr / Creative Commons


Part Two: The Politics of Post-Truth

Fuller begins his book about the post-truth condition that permeates the British and American landscapes with a look at our ancient Greek predecessors. According to him, “Philosophers claim to be seekers of the truth but the matter is not quite so straightforward. Another way to see philosophers is as the ultimate experts in a post-truth world” (19). This means that those historically entrusted to be the guardians of truth in fact “see ‘truth’ for what it is: the name of a brand ever in need of a product which everyone is compelled to buy. This helps to explain why philosophers are most confident appealing to ‘The Truth’ when they are trying to persuade non-philosophers, be they in courtrooms or classrooms.” (Ibid.)

Instead of being the seekers of the truth, thinkers who care not about what but how we think, philosophers are ridiculed by Fuller (himself a philosopher turned sociologist turned popularizer and public relations expert) as marketing hacks in a public relations company that promotes brands. Their serious dedication to finding the criteria by which truth is ascertained is used against them: “[I]t is not simply that philosophers disagree on which propositions are ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’.” (Ibid.)

Some would argue that the criteria by which propositions are judged to be true or false are worthy of debate, rather than the cavalier dismissal of Trumpists. With criteria in place (even if only by convention), at least we know what we are arguing about, as these criteria (even if contested) offer a starting point for critical scrutiny. And this, I maintain, is a task worth performing, especially in the age of pluralism when multiple perspectives constitute our public stage.

In addition to debasing philosophers, it seems that Fuller reserves a special place in purgatory for Socrates (and Plato) for negatively labeling the rhetorical expertise of the sophists—“the local post-truth merchants in fourth century BC Athens.” (21) It becomes obvious that Fuller is “on their side” and that the presumed debate over truth and its practices is in fact nothing but a debate over “whether its access should be free or restricted.” (Ibid.) In this neoliberal reading, it is all about money: are sophists evil because they charge for their expertise? Is Socrates a martyr and saint because he refused payment for his teaching?

Fuller admits, “Indeed, I would have us see both Plato and the Sophists as post-truth merchants, concerned more with the mix of chance and skill in the construction of truth than with the truth as such.” (Ibid.) One wonders not only if Plato receives fair treatment (reminiscent of Popper’s denigration of Plato as supporting totalitarian regimes, while sparing Socrates as a promoter of democracy), but whether calling all parties to a dispute “post-truth merchants” obliterates relevant differences. In other words, have we indeed lost the desire to find the truth, even if it can never be the whole truth and nothing but the truth?

Political Indifference to Truth

One wonders how far this goes: political discourse without any claim to truth conditions would become nothing but a marketing campaign where money and power dictate the acceptance of the message. Perhaps the intended message here is that contemporary cynicism towards political discourse has its roots in ancient Greece. Regardless, one should worry that such cynicism indirectly sanctions fascism.

Can the poor and marginalized in our society afford this kind of cynicism? For them, unlike their privileged counterparts in the political arena, claims about discrimination and exploitation, about unfair treatment and barriers to voting are true and evidence based; they are not rhetorical flourishes by clever interlocutors.

Yet Fuller would have none of this. For him, political disputes are games:

[B]oth the Sophists and Plato saw politics as a game, which is to say, a field of play involving some measure of both chance and skill. However, the Sophists saw politics primarily as a game of chance whereas Plato saw it as a game of skill. Thus, the sophistically trained client deploys skill in [the] aid of maximizing chance occurrences, which may then be converted into opportunities, while the philosopher-king uses much the same skills to minimize or counteract the workings of chance. (23)

Fuller could be channeling here twentieth-century game theory and its application in the political arena, or the notion offered by Lyotard when describing the minimal contribution we can make to scientific knowledge (where we cannot change the rules of the game but perhaps find a novel “move” to make). Indeed, if politics is deemed a game of chance, then anything goes, and it really should not matter if an incompetent candidate like Trump ends up winning the American presidency.

But is it really a question of skill and chance? Or, as some political philosophers would argue, is it not a question of the best means by which to bring to fruition the best results for the general wellbeing of a community? The point of suggesting the figure of a philosopher-king, to be sure, was not his rhetorical skill in this connection, but rather his deep commitment to rule justly, to think critically about policies, and to treat constituents with respect and fairness. Plato’s Republic, however criticized, was supposed to be about justice, not about expediency; it is an exploration of the rule of law and wisdom, not a manual about manipulation. If the recent presidential election in the US taught us anything, it’s that we should be wary of political gamesmanship and focus on experience and knowledge, vision and wisdom.

Out-Gaming Expertise Itself

Fuller would have none of this, either. It seems that there is virtue in being a “post-truther,” someone who can easily switch between knowledge games, unlike the “truther” whose aim is to “strengthen the distinction by making it harder to switch between knowledge games.” (34) In the post-truth realm, then, knowledge claims are lumped into games that can be played at will, that can be substituted when convenient, without a hint of the danger such capricious game-switching might engender.

It’s one thing to challenge a scientific hypothesis about astronomy because the evidence is still unclear (as Stephen Hawking did in regard to black holes) and quite another to compare it to astrology (and give horoscope and Tarot card readers an equal hearing with physicists). Though we are far from the Demarcation Problem (between science and pseudo-science) of the last century, this does not mean that there is no difference at all between different discourses and their empirical bases (or that the problem itself isn’t worthy of reconsideration in the age of Fuller and Trump).

On the contrary, it’s because we assume difference between discourses (gray as they may be) that we can move on to figure out on what basis our claims can and should rest. The danger, as we see in the political logic of the Trump administration, is that friends become foes (European Union) and foes are admired (North Korea and Russia). Game-switching in this context can lead to a nuclear war.

In Fuller’s hands, though, something else is at work. Speaking of contemporary political circumstances in the UK and the US, he says: “After all, the people who tend to be demonized as ‘post-truth’ – from Brexiteers to Trumpists – have largely managed to outflank the experts at their own game, even if they have yet to succeed in dominating the entire field of play.” (39) Fuller’s celebratory tone here may signal either a slight warning, in the use of “yet” before the success “in dominating the entire field of play,” or a prediction that this is indeed what is about to happen soon enough.

The neoliberal bottom line surfaces in this assessment: he who wins must be right, the rich must be smart, and, more perniciously, the appeal to truth is beside the point. More specifically, Fuller continues:

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins. They are talking about worlds that could have been and still could be—the stuff of modal power. (Ibid.)

By now one should be terrified. This is a strong endorsement of lying as a matter of course, as a way to distract from the details (and empirical bases) of one “knowledge game”—because it may not be to one’s ideological liking—in favor of another that might be deemed more suitable (for financial or other purposes).

The political stakes here are too high to ignore, especially because there are good reasons why “certain parties are advantaged over others” (say, climate scientists “relative to” climate deniers who have no scientific background or expertise). One wonders what it means to talk about “alternative facts” and “alternative science” in this context: is it a means of obfuscation? Is it yet another license granted by the “social constructivist position” not to acknowledge the legal liability of cigarette companies for the addictive power of nicotine? Or the pollution of water sources in Flint, Michigan?

What Is the Mark of an Open Society?

If we corral the broader political logic at hand to the governance of the scientific community, as Fuller wishes us to do, then we hear the following:

In the past, under the inspiration of Karl Popper, I have argued that fundamental to the governance of science as an ‘open society’ is the right to be wrong (Fuller 2000a: chap. 1). This is an extension of the classical republican ideal that one is truly free to speak their mind only if they can speak with impunity. In the Athenian and the Roman republics, this was made possible by the speakers–that is, the citizens–possessing independent means which allowed them to continue with their private lives even if they are voted down in a public meeting. The underlying intuition of this social arrangement, which is the epistemological basis of Mill’s On Liberty, is that people who are free to speak their minds as individuals are most likely to reach the truth collectively. The entangled histories of politics, economics and knowledge reveal the difficulties in trying to implement this ideal. Nevertheless, in a post-truth world, this general line of thought is not merely endorsed but intensified. (109)

To be clear, Fuller not only asks for the “right to be wrong,” but also for the legitimacy of the claim that “people who are free to speak their minds as individuals are most likely to reach the truth collectively.” The first plea is reasonable enough, as humans are fallible (yes, Popper here), and the history of ideas has proven that killing heretics is counterproductive (and immoral). If the Brexit/Trump post-truth age would only usher in a greater encouragement for speculation or conjectures (Popper again), then Fuller’s book would be well-placed in the pantheon of intellectual pluralism; but if this endorsement obliterates the difference between the silly and the informed conjecture, then we are in trouble, and the ensuing cacophony will turn us all deaf.

The second claim is at best supported by the likes of James Surowiecki (2004) who has argued that no matter how uninformed a crowd of people is, collectively it can guess the correct weight of a cow on stage (his TED talk). As folk wisdom, this is charming; as public policy, this is dangerous. Would you like a random group of people deciding how to store nuclear waste, and where? Would you subject yourself to the judgment of just any collection of people to decide on taking out your appendix or performing triple-bypass surgery?
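
The statistical mechanism behind the cow anecdote deserves a word here, because it also marks the limits of crowd wisdom. The following is a minimal simulation sketch (my own illustration, not Surowiecki's code; the cow's weight and the error figures are arbitrary assumptions): averaging many independent, unbiased guesses homes in on the true value, while a shared bias never washes out.

```python
import random

random.seed(42)
TRUE_WEIGHT = 543.0  # hypothetical weight of the cow in kg (arbitrary assumption)

def crowd_estimate(n_guessers, herd_bias=0.0):
    """Average n guesses; each guess = truth + shared herd bias + individual noise."""
    guesses = [TRUE_WEIGHT + herd_bias + random.gauss(0, 100)
               for _ in range(n_guessers)]
    return sum(guesses) / n_guessers

# Independent errors cancel out: a large crowd lands near the true weight.
print(round(crowd_estimate(10_000), 1))                  # close to 543
# A shared bias does not cancel, however large the crowd grows.
print(round(crowd_estimate(10_000, herd_bias=80.0), 1))  # close to 623
```

The averaging works only so long as errors are independent and unbiased; once the crowd shares a systematic bias, enlarging it merely entrenches the error, which is precisely the worry about herd mentality raised below.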

When we turn to Trump, his supporters certainly like that he speaks his mind, just as Fuller says individuals should be granted the right to speak their minds (even if in error). But speaking one’s mind can also be a proxy for saying whatever, without filters, without critical thinking, or without thinking at all (let alone consulting experts whose very existence seems to upset Fuller). Since when did “speaking your mind” turn into scientific discourse? It’s one thing to encourage dissent and offer reasoned doubt and explore second opinions (as health care professionals and insurers expect), but it’s quite another to share your feelings and demand that they count as scientific authority.

Finally, even if we endorse the view that we “collectively” reach the truth, should we not ask: by what criteria? according to what procedure? under what guidelines? Herd mentality, as Nietzsche already warned us, is problematic at best and immoral at worst. Trump rallies harken back to the fascist ones we recall from Europe prior to and during WWII. Few today would entrust the collective judgment of those enthusiasts of the Thirties to carry the day.

Unlike Fuller’s sanguine posture, I shudder at the possibility that “in a post-truth world, this general line of thought is not merely endorsed but intensified.” This is neither because I worship experts and scorn folk knowledge nor because I have low regard for individuals and their (potentially informative) opinions. Just as we warn our students that simply having an opinion is not enough, that they need to substantiate it, offer data or logical evidence for it, and even know its origins and who promoted it before they made it their own, so I worry about uninformed (even if well-meaning) individuals (and presidents) whose gut will dictate public policy.

This way of unreasonably empowering individuals is dangerous for their own well-being (no paternalism here, just common sense) as well as for the community at large (too many untrained cooks will definitely spoil the broth). For those who doubt my concern, Trump offers ample evidence: trade wars with allies and foes that cost domestic jobs (when promising to bring jobs home), nuclear-war threats that resemble a game of chicken (as if no president before him ever faced such an option), and completely throwing public policy procedures into disarray, from immigration regulations to the relaxation of emission controls (ignoring the history of these policies and their failures).

Drought and suffering in Arbajahan, Kenya in 2006.
Photo by Brendan Cox and Oxfam International via Flickr / Creative Commons


Part Three: Post-Truth Revisited

There is something appealing, even seductive, in the provocation to doubt the truth as rendered by the (scientific) establishment, even as we worry about sowing the seeds of falsehood in the political domain. The history of science is the story of authoritative theories debunked, cherished ideas proven wrong, and claims of certainty falsified. Why not, then, jump on the “post-truth” wagon? Would we not unleash the collective imagination to improve our knowledge and the future of humanity?

One of the lessons of postmodernism (at least as told by Lyotard) is that “post-“ does not mean “after,” but rather, “concurrently,” as another way of thinking all along: just because something is labeled “post-“, as in the case of postsecularism, it doesn’t mean that one way of thinking or practicing has replaced another; it has only displaced it, and both alternatives are still there in broad daylight. Under the rubric of postsecularism, for example, we find religious practices thriving (80% of Americans believe in God, according to a 2018 Pew Research survey), while the number of unaffiliated, atheists, and agnostics is on the rise. Religionists and secularists live side by side, as they always have, more or less agonistically.

In the case of “post-truth,” it seems that one must choose between one orientation or another, or at least for Fuller, who claims to prefer the “post-truth world” to the allegedly hierarchical and submissive world of “truth,” where the dominant establishment shoves its truths down the throats of ignorant and repressed individuals. If post-truth meant, like postsecularism, the realization that truth and provisional or putative truth coexist and are continuously being re-examined, then no conflict would be at play. If Trump’s claims were juxtaposed to those of experts in their respective domains, we would have a lively, and hopefully intelligent, debate. False claims would be debunked, reasonable doubts could be raised, and legitimate concerns might be addressed. But Trump doesn’t consult anyone except his (post-truth) gut, and that is troublesome.

A Problematic Science and Technology Studies

Fuller admits that “STS can be fairly credited with having both routinized in its own research practice and set loose on the general public—if not outright invented—at least four common post-truth tropes”:

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties. (43)

In that sense, then, Fuller agrees that the positive lessons STS wished to impart to the practice of the scientific community may have inadvertently found their way into a post-truth world that may abuse or exploit them in unintended ways. That is, something like “consensus” is challenged by STS because of how the scientific community pretends to get there, knowing as it does that no such thing can ever fully be reached, and that when it is reached it may have been reached for the wrong reasons (leadership pressure, pharmaceutical funding of conferences and journals). But this can also go too far.

Just because consensus is difficult to reach (it doesn’t mean unanimity) and is susceptible to corruption or bias doesn’t mean that anything goes. Some experimental results are more acceptable than others, some data are more informative than others, and the struggle for agreement may take its political toll on the scientific community; but this need not result in silly ideas that cigarettes are good for our health or that obesity should be encouraged from early childhood.

It seems important to focus on Fuller’s conclusion because it encapsulates my concern with his version of post-truth, a condition he endorses not only as a description of the epistemological plight of humanity but as an elixir with which to cure humanity’s ills:

While some have decried recent post-truth campaigns that resulted in victory for Brexit and Trump as ‘anti-intellectual’ populism, they are better seen as the growth pains of a maturing democratic intelligence, to which the experts will need to adjust over time. Emphasis in this book has been given to the prospect that the lines of intellectual descent that have characterised disciplinary knowledge formation in the academy might come to be seen as the last stand of a political economy based on rent-seeking. (130)

Here, we are not only afforded a moralizing sermon about (and, it must be said, from) the privileged academic position, from whose heights all other positions are dismissed as anti-intellectual populism, but we are also entreated to consider the rantings of the know-nothings of the post-truth world as the “growth pains of a maturing democratic intelligence.” Only an apologist would characterize the Trump administration as mature, democratic, or intelligent. Where’s the evidence? What would possibly warrant such generosity?

It’s one thing to challenge “disciplinary knowledge formation” within the academy, and there are no doubt cases deserving reconsideration as to the conditions under which experts should be paid and by whom (“rent-seeking”); but how can these questions about higher education and the troubled relations between the university system and the state (and with the military-industrial complex) give cover to the Trump administration? Here is Fuller’s justification:

One need not pronounce on the specific fates of, say, Brexit or Trump to see that the post-truth condition is here to stay. The post-truth disrespect for established authority is ultimately offset by its conceptual openness to previously ignored people and their ideas. They are encouraged to come to the fore and prove themselves on this expanded field of play. (Ibid.)

This, too, is a logical stretch: is disrespect for the authority of the establishment the same as, or does it logically lead to, the “conceptual” openness to previously “ignored people and their ideas”? This is not a claim on behalf of the disenfranchised. Perhaps their ideas were simply bad or outright racist or misogynist (as we see with Trump). Perhaps they were ignored because there was hope that they would change for the better, become more enlightened, not act on their white supremacist prejudices. Should we have “encouraged” explicit anti-Semitism while we were at it?

Limits to Tolerance

We tolerate ignorance because we believe in education and hope to overcome some of it; we tolerate falsehood in the name of eventual correction. But we should never tolerate offensive ideas and beliefs that are harmful to others. Once again, it is one thing to argue about black holes, and quite another to argue about whether black lives matter. It seems reasonable, as Fuller concludes, to say that “In a post-truth utopia, both truth and error are democratised.” It is also reasonable to say that “You will neither be allowed to rest on your laurels nor rest in peace. You will always be forced to have another chance.”

But the conclusion that “Perhaps this is why some people still prefer to play the game of truth, no matter who sets the rules” (130) does not follow. Those who “play the game of truth” are always vigilant about falsehoods and post-truth claims, and to say that they are simply dupes of those in power is both incorrect and dismissive. On the contrary: Socrates was searching for the truth and fought with the sophists, as Popper fought with the logical positivists and the Kuhnians, and as scientists today are searching for the truth and continue to fight superstitions and debunked pseudoscience about vaccination causing autism in young kids.

If post-truth is like postsecularism, scientific and political discourses can inform each other. When power-plays by ignoramus leaders like Trump are obvious, they could shed light on less obvious cases of big pharma leaders or those in charge of the EPA today. In these contexts, inconvenient facts and truths should prevail and the gamesmanship of post-truthers should be exposed for what motivates it.

Contact details: rsassowe@uccs.edu

* Special thanks to Dr. Denise Davis of Brown University, whose contribution to my critical thinking about this topic has been profound.

References

Theodor W. Adorno (1998/1963), Critical Models: Interventions and Catchwords. Translated by Henry W. Pickford. New York: Columbia University Press.

Kurt Andersen (2017), Fantasyland: How America Went Haywire: A 500-Year History. New York: Random House.

Monya Baker, “1,500 scientists lift the lid on reproducibility,” Nature Vol. 533, Issue 7604, 5/26/16 (corrected 7/28/16).

Michael Bowker (2003), Fatal Deception: The Untold Story of Asbestos. New York: Rodale.

Robert Darnton, “The Greatest Show on Earth,” New York Review of Books Vol. LXV, No. 11, 6/28/18, pp. 68-72.

Al Gore (2006), An Inconvenient Truth: The Planetary Emergency of Global Warming and What Can Be Done About It. New York: Rodale.

Richard Hofstadter (1962), Anti-Intellectualism in American Life. New York: Vintage Books.

Jean-François Lyotard (1984/1979), The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press.

Robert K. Merton (1973/1942), “The Normative Structure of Science,” The Sociology of Science: Theoretical and Empirical Investigations. Chicago and London: The University of Chicago Press, pp. 267-278.

Hans E. Plesser, “Reproducibility vs. Replicability: A Brief History of Confused Terminology,” Frontiers in Neuroinformatics, 2017; 11: 76; online: 1/18/18.

Robert N. Proctor (1995), Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer. New York: Basic Books.

James Surowiecki (2004), The Wisdom of Crowds. New York: Anchor Books.

Author Information: Claus-Christian Carbon, University of Bamberg, ccc@experimental-psychology.com

Carbon, Claus-Christian. “A Conspiracy Theory is Not a Theory About a Conspiracy.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 22-25.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yb

See also:

  • Dentith, Matthew R. X. “Expertise and Conspiracy Theories.” Social Epistemology 32, no. 3 (2018), 196-208.

The power, creation, imagery, and proliferation of conspiracy theories are fascinating avenues to explore in the construction of public knowledge and the manipulation of the public for nefarious purposes. Their role in constituting our pop cultural imaginary and their place as central images in political propaganda are fertile ground for research.
Image by Neil Moralee via Flickr / Creative Commons


The simplest and most natural definition of a conspiracy theory is a theory about a conspiracy. Although this definition seems appealing due to its simplicity and straightforwardness, the problem is that most narratives about conspiracies do not fulfill the necessary requirements of being a theory. In everyday speech, mere descriptions, explanations, or even beliefs are often termed “theories”—such repeated usage of this technical term is not useful in the context of scientific activities.

Here, a theory does not aim to explain one specific event in time, e.g. the moon landing of 1969 or the assassination of President Kennedy in 1963, but aims at explaining a phenomenon on a very general level; e.g. that things with mass as such gravitate toward one another—independently of the specific natures of such entities. Such an epistemological status is rarely achieved by conspiracy theories, especially the ones about specific events in time. Even the more general claim that so-called chemtrails (i.e. long-lasting condensation trails) are initiated by omnipotent organizations across the planet, across time zones and altitudes, is at most a hypothesis – a rather narrow one – that specifically addresses one phenomenon but lacks the capability to make predictions about other phenomena.

Narratives that Shape Our Minds

So-called conspiracy theories have had a great impact on human history, on the social interaction between groups, the attitude towards minorities, and the trust in state institutions. There is very good reason to include “conspiracy theories” in the canon of influential narratives, and so it is only logical to direct substantial scientific effort into explaining and understanding how they operate, how people come to believe in them, and how humans pile up knowledge on the basis of these narratives.

A brief look at the publications registered by Clarivate Analytics’ Web of Science documents 605 records with “conspiracy theories” as the topic (effective date 7 May 2018). These contributions come mostly from psychology (n=91) and political science (n=70), with a steep increase in recent years, from about 2013 on, probably due to a special issue (“Research Topic”) in the journal Frontiers in Psychology organized in 2012 and 2013 by Viren Swami and Christopher Charles French.

As we have repeatedly argued (e.g., Raab, Carbon, & Muth, 2017), conspiracy theories are a very common phenomenon. Most people believe in at least some of them (Goertzel, 1994), which already indicates that believers in them do not belong to a minority group, but that it is more or less the conditio humana to include such narratives in the everyday belief system.

So first of all, we can state that most of such beliefs are neither pathological nor rare (see Raab, Ortlieb, Guthmann, Auer, & Carbon, 2013), but are largely caused by “good”[1] narratives triggered by context factors (Sapountzis & Condor, 2013) such as widespread distrust in society. The wide acceptance of many conspiracy theories can further be explained by adaptation effects that bias the standard beliefs (Raab, Auer, Ortlieb, & Carbon, 2013). This view is not undisputed, as many authors identify specific pathological personality traits such as paranoia (Grzesiak-Feldman & Ejsmont, 2008; Pipes, 1997) which cause, enable, or at least amplify the belief in conspiracy theories.

In fact, in science we mostly encounter the pathological and pejorative view of conspiracy theories and their believers. This negative connotation, and hence the prejudice toward conspiracy theories, makes it hard to solidly test the stated facts, ideas, or relationships proposed by such explanatory structures (Rankin, 2017). Especially for conspiracy theories of the so-called “type I”—where authorities (“the system”) are blamed for conspiracies (Wagner-Egger & Bangerter, 2007)—such a prejudice can potentially jeopardize the democratic system (Bale, 2007).

Some of the conspiracies described in conspiracy theories—those alleged to take place at top state levels—could indeed threaten people’s freedom, democracy, and even people’s lives, especially if they turned out to be “true” (e.g. the case of the whistleblower and previously alleged conspiracist Edward Snowden, see Van Puyvelde, Coulthart, & Hossain, 2017).

Understanding What a Theory Genuinely Is

In the present paper, I will focus on another, yet highly important, point which is hardly addressed at all: Is the term “conspiracy theories” an adequate term at all? In fact, the suggestion of a conspiracy theory being a “theory about a conspiracy” (Dentith, 2014, p.30) is indeed the simplest and seemingly most straightforward definition of “conspiracy theory”. Although appealing and allegedly logical, the term conspiracy theory as such is ill-defined. Actually, a “conspiracy theory” refers to a narrative which attributes an event to a group of conspirators. As such, it is clear that it is justified to associate such a narrative with the term “conspiracy”, but does a conspiracy theory have the epistemological status of a theory?

The simplest definition of a “theory” is that it represents a bundle of hypotheses which can explain a wide range of phenomena. Theories have to integrate the contained hypotheses in a concise, coherent, and systematic way. They have to go beyond the mere piling up of several statements or unlinked hypotheses. The application of theories allows events or entities which are not explicitly described in the sum of the hypotheses to be generalized over and hence predicted.

For instance, one of the most influential physical theories, the theory of special relativity (original German title “Zur Elektrodynamik bewegter Körper”), contains two hypotheses (Einstein, 1905) on whose basis, in combination with already existing theories, we can predict important phenomena which are not explicitly stated in the theory itself. Most are well aware that mass and energy are equivalent. Whether we are analyzing the energy of a tossed ball or a static car, we can use the very same theory. Whether the ball is red or whether it is a blue ball thrown by Napoleon Bonaparte does not matter—we just need to refer to the mass of the ball; in fact we are only interested in the mass as such; the ball does not play a role anymore. Other theories show similar predictive power: for instance, they can predict (more or less precisely) events in the future, the location of various types of material in a magnetic field, or the trajectory of objects of different speed due to gravitational power.
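
To see this indifference to particulars in a worked instance, mass–energy equivalence can be spelled out as follows (a standard textbook illustration of my own choosing, not drawn from Carbon's text; the 1 kg ball is a hypothetical):

```latex
% Only the mass m enters the relation; the ball's color and its thrower are irrelevant.
\[ E = mc^{2}, \qquad c \approx 3 \times 10^{8}\ \mathrm{m/s} \]

% Worked instance for a hypothetical ball of m = 1 kg:
\[ E = 1\ \mathrm{kg} \times \left(3 \times 10^{8}\ \mathrm{m/s}\right)^{2}
     = 9 \times 10^{16}\ \mathrm{J} \]
```

Whatever the object, the same relation applies; it is this indifference to particulars that gives a theory its predictive reach, a reach that singular explanations of specific events cannot claim.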

Most conspiracy theories, however, refer to one single historical event. Looking through the “most enduring conspiracy theories” compiled in 2009 by TIME magazine on the 40th anniversary of the moon landing, it is instantly clear that they have explanatory power for just the specific events on which they are based, e.g. the “JFK assassination” in 1963, the “9/11 cover-up” in 2001, the “moon landings were faked” idea from 1969 or the “Paul is dead” storyline about Paul McCartney’s alleged secret death in 1966. In fact, such theories are just singular explanations, mostly ignoring counter-facts, alternative explanations and already given replies (Votsis, 2004).

But what, then, is the epistemological status of such narratives? Clearly, they aim to explain – and sometimes the explanations are indeed compelling, even coherent. What they mostly cannot demonstrate, though, is the ability to predict other events in other contexts. If these narratives belong to this class of explanatory stories, we should be less liberal in calling them “theories”. Unfortunately, it was Karl Popper himself who coined the term “conspiracy theory” in the 1940s (Popper, 1949)—the same Popper who advocated very strict criteria for scientific theories and in doing so became one of the most influential philosophers of science (Suppe, 1977). This imprecise terminology diluted the genuine meaning of (scientific) theories.

Stay Rigorous

From a language pragmatics perspective, it seems odd to abandon the term conspiracy theory, as it is a widely introduced and frequently used term in everyday language around the globe. Substitutions like conspiracy narratives, conspiracy stories or conspiracy explanations would fit much better, but acceptance of such terms might be quite low. Nevertheless, we should at least bear in mind that most narratives of this kind cannot qualify as theories and so cannot lead to a wider research program, although their contents and implications are often far-reaching, potentially important for society, and hence, in some cases, also worth checking.

Contact details: ccc@experimental-psychology.com

References

Bale, J. M. (2007). Political paranoia v. political realism: on distinguishing between bogus conspiracy theories and genuine conspiratorial politics. Patterns of Prejudice, 41(1), 45-60. doi:10.1080/00313220601118751

Dentith, M. R. X. (2014). The philosophy of conspiracy theories. New York: Palgrave.

Einstein, A. (1905). Zur Elektrodynamik bewegter Körper [On the electrodynamics of moving bodies]. Annalen der Physik und Chemie, 17, 891-921.

Goertzel, T. (1994). Belief in conspiracy theories. Political Psychology, 15(4), 731-742.

Grzesiak-Feldman, M., & Ejsmont, A. (2008). Paranoia and conspiracy thinking of Jews, Arabs, Germans and Russians in a Polish sample. Psychological Reports, 102(3), 884.

Pipes, D. (1997). Conspiracy: How the paranoid style flourishes and where it comes from. New York: Simon & Schuster.

Popper, K. R. (1949). Prediction and prophecy and their significance for social theory. Paper presented at the Proceedings of the Tenth International Congress of Philosophy, Amsterdam.

Raab, M. H., Auer, N., Ortlieb, S. A., & Carbon, C. C. (2013). The Sarrazin effect: The presence of absurd statements in conspiracy theories makes canonical information less plausible. Frontiers in Personality Science and Individual Differences, 4(453), 1-8.

Raab, M. H., Carbon, C. C., & Muth, C. (2017). Am Anfang war die Verschwörungstheorie [In the beginning, there was the conspiracy theory]. Berlin: Springer.

Raab, M. H., Ortlieb, S. A., Guthmann, K., Auer, N., & Carbon, C. C. (2013). Thirty shades of truth: conspiracy theories as stories of individuation, not of pathological delusion. Frontiers in Personality Science and Individual Differences, 4(406).

Rankin, J. E. (2017). The conspiracy theory meme as a tool of cultural hegemony: A critical discourse analysis. (PhD), Fielding Graduate University, Santa Barbara, CA.

Sapountzis, A., & Condor, S. (2013). Conspiracy accounts as intergroup theories: Challenging dominant understandings of social power and political legitimacy. Political Psychology. doi:10.1111/pops.12015

Suppe, F. (Ed.) (1977). The structure of scientific theories (2nd ed.). Urbana: University of Illinois Press.

Van Puyvelde, D., Coulthart, S., & Hossain, M. S. (2017). Beyond the buzzword: Big data and national security decision-making. International Affairs, 93(6), 1397-1416. doi:10.1093/ia/iix184

Votsis, I. (2004). The epistemological status of scientific theories: An investigation of the structural realist account. (PhD), London School of Economics and Political Science, London.

Wagner-Egger, P., & Bangerter, A. (2007). The truth lies elsewhere: Correlates of belief in conspiracy theories. Revue Internationale De Psychologie Sociale-International Review of Social Psychology, 20(4), 31-61.

[1] It is important to stress that a “good narrative” in this context means “an appealing story” in which people are interested; by no means does the author intend “good” to suggest “positive”, “proper”, “adequate” or “true”.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu

Sassower, Raphael. “Heidegger and the Sociologists: A Forced Marriage?” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 30-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3X8

The town of Messkirch, the hometown of Martin Heidegger.
Image by Renaud Camus via Flickr / Creative Commons


Jeff Kochan is upfront about not being able “to make everyone happy” in order to write “a successful book.” For him, choices had to be made, such as promoting “Martin Heidegger’s existential conception of science . . . the sociology of scientific knowledge . . . [and the view that] the accounts of science presented by SSK [sociology of scientific knowledge] and Heidegger are, in fact, largely compatible, even mutually reinforcing.” (1) This means combining the existentialist approach of Heidegger with the sociological view of science as a social endeavour.

Such a marriage is bound to be successful, according to the author, because together they can exercise greater vitality than either would on its own.  If each party were to incorporate the other’s approach and insights, they would realize how much they needed each other all along. This is not an arranged or forced marriage, according to Kochan the matchmaker, but an ideal one he has envisioned from the moment he laid his eyes on each of them independently.

The Importance of Practice

Enumerating the critics of each party, Kochan hastens to suggest that “both SSK and Heidegger have much more to offer a practice-based approach to science than has been allowed by their critics.” (6) The Heideggerian deconstruction of science, in this view, is historically informed and embodies a “form of human existence.” (7) Focusing on the early works of Heidegger, Kochan presents an ideal groom who can offer his SSK bride the theoretical insights of overcoming the Cartesian-Kantian false binary of subject-object (11) while benefitting from her rendering his “theoretical position” more “concrete, interesting, and useful through combination with empirical studies and theoretical insights already extant in the SSK literature.” (8)

In this context, there seems to be a greater urgency to make Heidegger relevant to contemporary sociological studies of scientific practices than an expressed need by SSK to be grounded existentially in the Heideggerian philosophy (or for that matter, in any particular philosophical tradition). One can perceive this postmodern juxtaposition (drawing on seemingly unrelated sources in order to discover something novel and more interesting when combined) as an attempt to fill intellectual vacuums.

This marriage is advisable, even prudent, to ward off criticism levelled at either party independently: Heidegger for his abstract existential subjectivism and SSK for unwarranted objectivity. For example, we are promised, with Heidegger’s “phenomenology of the subject as ‘being-in-the-world’ . . . SSK practitioners will no longer be vulnerable to the threat of external-world scepticism.” (9-10) Together, so the argument proceeds, they will not simply adopt each other’s insights and practices but will transform themselves each into the other, shedding their misguided singularity and historical positions for the sake of this idealized research program of the future.

Without flogging this marriage metaphor to death, one may ask if the two parties are indeed as keen to absorb the insights of their counterpart. In other words, do SSK practitioners need the Heideggerian vocabulary to make their work more integrated? Their adherents and successors have proven time and again that they can find ways to adjust their studies to remain relevant. By contrast, the Heideggerians remain fairly insulated from the studies of science, reviving “The Question Concerning Technology” (1954) whenever asked about technoscience. Is Kochan too optimistic to think that citing Heidegger’s earliest works will make him more rather than less relevant in the 21st century?

But What Can We Learn?

Kochan seems to think that reviving the Heideggerian project is worthwhile: what if we took the best from one tradition and combined it with the best of another? What if we transcended the subject-object binary and fully appreciated that “knowledge of the object [science] necessarily implicates the knowing subject [practitioner]”? (351) Under such conditions (as philosophers of science have understood for a century), the observer is an active participant in the observation, so much so (as some interpreters of quantum physics admit) that the very act of observing impacts the objects being perceived.

Add to this the social dimension of the community of observers-participants and the social dynamics to which they are institutionally subjected, and you have the contemporary landscape that has transformed the study of Science into the study of the Scientific Community and eventually into the study of the Scientific Enterprise.

But there is another objection to be made here: Even if we agree with Kochan that “the subject is no longer seen as a social substance gaining access to an external world, but an entity whose basic modes of existence include being-in-the-world and being-with-others,” (351) what about the dynamics of market capitalism and democratic political formations? What about the industrial-academic-military complex? To hope for the “subject” to be more “in-the-world” and “with-others” is already quite common among sociologists of science and social epistemologists, but does this recognition alone suffice to understand that neoliberalism has a definite view of what the scientific enterprise is supposed to accomplish?

Though Kochan nods at “conservative” and “liberal” critics, he fails to concede that theirs remain theoretical critiques divorced from the neoliberal realities that permeate every sociological study of science and that dictate the institutional conditions under which the very conception of technoscience is set.

Kochan’s appreciation of the Heideggerian oeuvre is laudable, even admirable in its Quixotic enthusiasm for Heidegger’s four-layered approach (“being-in-the-world,” “being-with-others,” “understanding,” and “affectivity”, 356), but does this amount to more than “things affect us, therefore they exist”? (357) Just like the Cartesian “I think, therefore I am,” this formulation brings the world back to us as a defining factor in how we perceive ourselves instead of integrating us into the world.

Perhaps a Spinozist approach would bridge the binary Kochan (with Heidegger’s help) wishes to overcome. Kochan wants us to agree with him that “we are compelled by the system [of science and of society?] only insofar as we, collectively, compel one another.” (374) Here, then, we are shifting ground towards SSK practices and focusing on the sociality of human existence and the ways the world and our activities within it ought to be understood. There is something quite appealing in bringing German and Scottish thinkers together, but it seems that merging them is both unrealistic and perhaps too contrived. For those, like Kochan, who dream of a Hegelian Aufhebung of sorts, this is an outstanding book.

For the Marxist and sociological skeptics who worry about neoliberal trappings, this book will remain an erudite and scholarly attempt to force a merger. As we look at this as yet another arranged marriage, we should ask ourselves: would the couple ever have consented to this on their own? And if the answer is no, who are we to force this on them?

Contact details: rsassowe@uccs.edu

References

Kochan, Jeff. Science as Social Existence: Heidegger and the Sociology of Scientific Knowledge. Cambridge, UK: Open Book Publishers, 2017.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

Please refer to:

  • John, Stephen. “Epistemic Trust and the Ethics of Science Communication: Against Transparency, Openness, Sincerity and Honesty.” Social Epistemology 32, no. 2 (2018): 75-87.

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons


In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart of mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing over complications, claiming more precision than one knows to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds) involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than monitoring by results alone, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than in the way the agent regards as most effective.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002, 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather ‘denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency’ (Epstein 1996, 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT UP functioned for many as a trust proxy of this sort: it had the skills and resources to do this sort of monitoring, developing relevant competence while keeping its interests closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to more deeply engage with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is that, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, rooted in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: recordings seem to show you what is going on but are in fact likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal set off a process of review in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review 111, no. 2 (2017): 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., R. C. Richards, and K. R. Knobloch. “Vicarious Deliberation: How the Oregon Citizens’ Initiative Review Influenced Deliberation in Mass Elections.” International Journal of Communication 8 (2014): 62-89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014): 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms.” In Jon Elster (ed.), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press, 2015: 131-151.

MacKenzie, Michael, and Mark E. Warren. “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.), Deliberative Systems. Cambridge: Cambridge University Press, 2012: 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961): 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author Information: Eric Kerr, National University of Singapore, erictkerr@gmail.com

Kerr, Eric. “A Hermeneutic of Non-Western Philosophy.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 1-6.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3VV

Image by Güldem Üstün via Flickr / Creative Commons

Professional philosophy, not for the first time, finds itself in crisis. When public intellectuals like Stephen Hawking, Lawrence Krauss, Sam Harris, Bill Nye, and Neil deGrasse Tyson (to list some Anglophonic examples) proclaim their support for science, it is through a disavowal of philosophy. When politicians reach for an example within the academy worthy of derision, they often find it in the footnotes to Plato. Bryan Van Norden centres one chapter of Taking Back Philosophy around the anti-intellectual and ungrammatical comment by US politician Marco Rubio that “We need more welders and less philosophers.” Although Rubio later repented, commenting approvingly on Stoicism, the school of thought that has recently been appropriated by Silicon Valley entrepreneurs, the message stuck.[1]

Two Contexts

As the Stoics would say, we’ve been here before. Richard Feynman, perhaps apocryphally, repurposed Barnett Newman’s quip that “aesthetics is to artists what ornithology is to birds,” proclaiming that “philosophy of science is about as useful to scientists as ornithology is to birds.” A surly philosopher might respond that the views on philosophy of a scientist with no philosophical training are about as useful to philosophy as a bird’s views on ornithology. Or, more charitably, she might point out that ornithology is actually quite useful, even if the birds themselves are not interested in it, and that birds sometimes do benefit from our better understanding of their condition.

However, according to some accounts, philosophers within this ivory aviary frequently make themselves unemployed. “Philosophy” historically has referred simply to any body of knowledge or the whole of it.[2] As our understanding of a particular domain grows, and we develop empirical means to study it further, it gets lopped off the philosophical tree and becomes, say, psychology or computer science. While we may quibble with the accuracy of this potted history, it does capture the perspective of many that the discipline of philosophy is especially endangered and perhaps particularly deserving of conservation.

Despite this, perhaps those most guilty of charging philosophy with lacking utility have been philosophers themselves, whether through the pragmatic admonishings of Karl Marx (1888) or Richard Rorty (Kerr and Carter 2016), or through the internecine narcissism of small differences between rival philosophers and schools of thought.

This is, in part, the context out of which Jay Garfield and Bryan Van Norden wrote an op-ed piece in the New York Times’ Stone column, promoting the inclusion of non-Western philosophy in US departments.[3] Today, the university is under threat on multiple fronts (Crow and Dabars 2015; Heller 2016) and while humanities faculties often take the brunt of the attack, philosophers can feel themselves particularly isolated when departments are threatened with closure or shrunk.[4]

Garfield and Van Norden’s central contention was that philosophy departments in the US should include more non-Western philosophy, both on the faculty and in the curriculum, and that if they cannot do this, then they should be renamed departments of Anglo-European philosophy and perhaps be relocated within area studies. The huge interest and discussion around that article prompted Van Norden to write this manifesto.

The thought that philosophy departments should be renamed departments of European or Western philosophy is not a new one. Today, many universities in China and elsewhere in Asia have departments or research groups devoted to “Western philosophy,” while Chinese philosophy and its subdisciplines dominate. In his influential text, Asia as Method, Kuan-Hsing Chen argued that, if area studies is to mean anything, the label should apply as much to scholars in Europe and the Americas studying their own living spaces as to scholars in Asia producing “Asian studies”:

If “we” have been doing Asian studies, Europeans, North Americans, Latin Americans, and Africans have also been doing studies in relation to their own living spaces. That is, Martin Heidegger was actually doing European studies, as were Michel Foucault, Pierre Bourdieu, and Jürgen Habermas. European experiences were their system of reference. Once we recognize how extremely limited the current conditions of knowledge are, we learn to be humble about our knowledge claims. The universalist assertions of theory are premature, for theory too must be deimperialized. (Chen, p. 3)

Taking Back Philosophy is peppered with historical examples showing that Chinese philosophy, Van Norden’s area of expertise, meets whatever standards one may set for “real philosophy”. Having these examples compiled and clearly stated is of great use to anyone putting forth a similar case, and for this alone the book is a worthy addition to one’s library. These examples, piled up liberally one on top of the other, are hard to disagree with, and the contrary opinion has a sordid history.

The litany of philosophers disparaging non-Western philosophy does not need to be detailed here – we all have stories, and Van Norden includes his own in the book. The baldest statement of this type is due to Immanuel Kant, who claimed that “[p]hilosophy is not to be found in the whole Orient,” but one can find equally strong claims made among colonial administrators, early anthropologists, historians, educators, missionaries, and civil servants.[5] Without wishing to recount that history, the most egregious example that resonates in my mind was spoken by Sir Anthony Rumbold, British Ambassador to Thailand from 1965 to 1967:

[Thailand has] no literature, no painting and hideous interior decoration. Nobody can deny that gambling and golf are the chief pleasures of the rich, and that licentiousness is the main pleasure of them all.

Taking Back Social Epistemology

Van Norden’s book wrestles with, and finds its resonant anger in, these two histories: one in which professional philosophy is isolated, and isolates itself, from the rest of the academy and the wider “marketplace of ideas,” and one in which subaltern and non-Western histories and perspectives are marginalized within philosophy. Since this is a journal of social epistemology, I’d like to return to a similar debate from the late 1990s and early 2000s, spearheaded by James Maffie under the banner of ethno-epistemology.

Maffie’s bêtes noires were not primarily institutional so much as conceptual – he thought that epistemological inquiry was hampered by an ignorance of the gamut of epistemological thinking that has taken place outside of the Western world (2001, 2009). Maffie’s concern was primarily with Aztec (Mexicana) philosophy and with indigenous philosophies of the Americas (see also Burkhart 2003) although similar comparative epistemologies have been done by others (e.g. Dasti 2012; Hallen and Sodipo 1986; Hamminga 2005).

Broadly, the charge was that epistemology is and has been ethnocentric. It has hidden its own cultural biases within supposedly general claims. Given that knowledge is social, the claim that it is universal across cultures would be in need of weighty justification (Stich 1990). That Dharmottara and Roderick Chisholm derived seemingly similar conclusions from seemingly similar thought experiments is not quite enough (Kerr 2015, forthcoming). Translation is the elephant in the room being described by several different people.[6] Language changes, of course, as do its meanings.

In ancient China, Tao had only the non-metaphorical sense of a road or pathway. It took up the first of its many abstract meanings in the Analects of Confucius. Similarly, in ancient Greece, logos had many non-metaphorical meanings before Heraclitus gave it a philosophical one (Guthrie 1961-1982: 1:124-126, 420-434). For epistemology, just take the word ‘know’ as an example. Contemporary philosophy departments in the English-speaking world, or at least the epistemologists therein, focus on the English word ‘know’ and draw conclusions from that source. To think that such conclusions would generalize beyond the English-speaking world sounds parochial.

Reading Taking Back Philosophy alongside Maffie’s work is instructive. The borders of philosophy are as subject to history, and to boundary work by other scholars, as those of any other discipline, and we should also be aware of the implications of Taking Back Philosophy’s conclusions beyond “professional” philosophy, which may extend the proper body of knowledge to so-called “folk epistemologies”. The term “professional philosophy” restricts the object of our attention to a very recent portion of history and to a particular class and identity (Taking Back Philosophy also argues forcefully for the diversification of philosophers as well as philosophies). How do we make sure that the dissident voices, so crucial to the history of philosophy throughout the world, are accorded a proper hearing in this call for pluralism?

Mending Wall

At times, Taking Back Philosophy is strikingly polemical. Van Norden compares philosophers who “click their tongues” about “real philosophy” to Donald Trump and Ronald Reagan. All, he says, are in the business of building walls, in constructing tribalism and us-versus-them mentalities. Indeed, the title itself is reminiscent of Brexit’s mantra, “Taking Back Control.” It’s unlikely that Van Norden and the Brexit proponents would have much in common politically, so it may be a coincidence of powerful sloganeering. Van Norden is a thoroughgoing pluralist: he wants to “walk side by side with Aristotle through the sacred grounds of the Lyceum … [and to] … ‘follow the path of questioning and learning’ with Zhu Xi.” (p. 159)

Where choices do have to be made for financial reasons, such choices would have to be made anyway, since no department has space for every subdiscipline of philosophy; analogously, we might say that no mind has space for every text that should be read.[7] Social epistemology has itself been the target of this kind of boundary work. Alvin Goldman, for example, dismisses much of it as not “real epistemology” (Goldman 2010).

As can probably be gleaned from the descriptions above, Taking Back Philosophy is also heavily invested in American politics and generally follows a US-centric slant. Within its short frame, Taking Back Philosophy draws in political debates that are live in today’s United States on diversity, identity, graduate pay, and the politicization and neoliberalization of the American model of the university. Many of these issues, no doubt, are functions of globalization, but another book, one that took back philosophy from outside of the US, would be a useful complement.

The final chapter contains an uplifting case for broad-mindedness in academic philosophy. Van Norden describes philosophy as being one of the few humanities disciplines that employ a “hermeneutic of faith”, meaning that old texts are read in the hope that one might discover something true, as opposed to a “hermeneutic of suspicion”, oft-followed in other humanities and social science disciplines, which emphasizes the “motives for the composition of a text that are unrelated to its truth or plausibility.” (p. 139) “[Philosophy is] open to the possibility that other people, including people in very different times and cultures, might know more about these things than we do, or at least they might have views that can enrich our own in some way.” (p. 139) The problem, he contends, is that the people “in very different times and cultures” are narrowly drawn in today’s departments.

Although Taking Back Philosophy ends with the injunction – Let’s discuss it… – one suspects that after the ellipsis should be a tired “again”, since Van Norden, and others, have been arguing the case for some time. Philosophers in Europe were, at different times, more or less fascinated with their non-Western contemporaries, often tracking geopolitical shifts. What is going to make the difference this time? Perhaps the discussion could begin again by taking up his hermeneutic distinction and asking: can we preserve faith while being duly suspicious?

Contact details: erictkerr@gmail.com

References

Alatas, S.H. 2010. The Myth of the Lazy Native: A Study of the Image of the Malays, Filipinos and Javanese from the 16th to the 20th Century and its Function in the Ideology of Colonial Capitalism. Routledge.

Burkhart, B.Y. 2003. What Coyote and Thales can Teach Us: An Outline of American Indian Epistemology. In A. Waters (Ed.) American Indian Thought. Wiley-Blackwell: 15-26.

Chen, Kuan-Hsing. 2010. Asia as Method. Duke University Press.

Collins, R. 2000. The Sociology of Philosophies: A Global Theory of Intellectual Change. Harvard University Press.

Crow, M.M. and W.B. Dabars. 2015. Designing the New American University. Johns Hopkins University Press.

Dasti, M.R. 2012. Parasitism and Disjunctivism in Nyaya Epistemology. Philosophy East and West 62(1): 1-15.

Fanon, F. 1952 [2008]. Black Skin, White Masks, trans. R. Philcox. New York: Grove Press.

Goldman, A. 2010. Why Social Epistemology is Real Epistemology. In A. Haddock, A. Millar and D. Pritchard (Eds.), Social Epistemology. Oxford University Press: 1-29.

Goldstein, E.B. 2010. Encyclopedia of Perception. SAGE.

Guthrie, W.K.C. 1961 [1982]. A History of Greek Philosophy. Cambridge: Cambridge University Press.

Hallen, B. and J.O. Sodipo. 1986. Knowledge, Belief, and Witchcraft. London: Ethnographica.

Hamminga, B. 2005. Epistemology from the African Point of View. Poznan Studies in the Philosophy of the Sciences and the Humanities 88(1): 57-84.

Heller, H. 2016. The Capitalist University: The Transformations of Higher Education in the United States, 1945-2016. Pluto Press.

Kerr, E. 2015. Epistemological Experiments in Cross-Cultural Contexts. Asia Research Institute Working Paper Series 223: 1-27.

Kerr, E. forthcoming. Cross-Cultural Epistemology. In P. Graham, M. Fricker, D. Henderson, and N. Pedersen (Eds.) Routledge Handbook of Social Epistemology.

Kerr, E. and J.A. Carter. 2016. Richard Rorty and Epistemic Normativity. Social Epistemology 30(1): 3-24.

Maffie, J. 2001. Alternative Epistemologies and the Value of Truth. Social Epistemology 14: 247-257.

Maffie, J. 2009. “‘In the End, We have the Gatling Gun, And they have not:’ Future Prospects for Indigenous Knowledges,” Futures: The Journal of Policy, Planning, and Futures Studies, 41: 53-65.

Marx, K. 1888. Theses on Feuerbach. Appendix to Ludwig Feuerbach and the End of Classical German Philosophy. Retrieved from https://www.marxists.org/archive/marx/works/1845/theses/theses.htm

Said, E. 1979. Orientalism. New York: Vintage.

[1] Goldhill, O. “Marco Rubio Admits he was Wrong… About Philosophy.” Quartz, 30 March 2018. Retrieved from https://qz.com/1241203/marco-rubio-admits-he-was-wrong-about-philosophy/amp/.

[2] Philosophy. Online Etymology Dictionary. Retrieved from https://www.etymonline.com/word/philosophy.

[3] Garfield, J.L. and B.W. Van Norden. “If Philosophy Won’t Diversify, Let’s Call it What it Really Is.” New York Times, 11 May 2016. Retrieved from https://www.nytimes.com/2016/05/11/opinion/if-philosophy-wont-diversify-lets-call-it-what-it-really-is.html.

[4] See, e.g., N. Power. “A Blow to Philosophy, and Minorities.” The Guardian, 29 April 2010. Retrieved from https://www.theguardian.com/commentisfree/2010/apr/29/philosophy-minorities-middlesex-university-logic. Weinberg, J. “Serious Cuts and Stark Choices at Aberdeen.” Daily Nous, 27 March 2015. Retrieved from http://dailynous.com/2015/03/27/serious-cuts-and-stark-choices-at-aberdeen/.

[5] See e.g., Edward Said’s Orientalism (1979), Frantz Fanon’s Black Skin, White Masks (1952) and, more recently, Syed Hussein Alatas’ The Myth of the Lazy Native (2010).

[6] The reader will recall the parable wherein three blind men describe an elephant through their partial experience (the coarseness and hairiness of the tail or the snakelike trunk), none of whom describes it accurately (e.g. in Goldstein 2010, p. 492).

[7] Several people have had the honour of being called the last to have read everything, including Giovanni Pico della Mirandola, who ironically wrote the first printed book to be universally banned by the Catholic Church, and Desiderius Erasmus, after whom a European student exchange programme facilitating cross-cultural learning is named. Curiously, Thomas Babington Macaulay is said to have been the best-read man of his time, and he appears in Jay Garfield’s foreword to Taking Back Philosophy to voice a particularly distasteful and ignorant remark (p. xiv). We can conclude that the privilege of having read widely, or having a wide syllabus, is not enough in itself for greater understanding.

In this Special Issue, our multinational contributors share their perspectives on epistemic claims and the moral implications of how one should present them via mass media. Though the individual responses vary, they fall under two headings: 1) New Media and Social Justice, and 2) Mass Media, Popular Science, and Bad Reporting.

The PDFs of each article give specific page numbers. Shortlink: http://wp.me/p1Bfg0-1Kj

Please refer to: Special Issue 1: “Normative Functionalism and the Pittsburgh School” and Special Issue 2: “On the Future Direction of Social Epistemology.”

I. New Media and Social Justice

Considering Online News Comments: Are We Really So Irrational and Hate Filled?
Maureen Linker, University of Michigan-Dearborn, USA

Hashtag Feminism and Twitter Activism in India
Elizabeth Losh, University of California, San Diego, USA

II. Mass Media, Popular Science, and Bad Reporting

Science and Scientism in Popular Science Writing
Jeroen de Ridder, VU University Amsterdam, NL

From Science in the Papers to Science in the News
Carlos Elías Pérez, Universidad Carlos III de Madrid, ES and Jesús Zamora Bonilla, Universidad Nacional de Educación a Distancia, ES

Free Will as an Illusion: Ethical and Epistemological Consequences of an Alleged Revolutionary Truth
Mario De Caro, Università Roma Tre and Tufts University and Andrea Lavazza, Centro Universitario Internazionale, Arezzo, Italy

Author Information: Jeroen de Ridder, VU University Amsterdam, g.j.de.ridder@vu.nl

de Ridder, Jeroen. “Science and Scientism in Popular Science Writing.” Social Epistemology Review and Reply Collective 3, no. 12 (2014): 23-39.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1KE

Image credit: Denise, via Flickr

Abstract

If one is to believe recent popular scientific accounts of developments in physics, biology, neuroscience, and cognitive science, most of the perennial philosophical questions have been wrested from the hands of philosophers by now, only to be resolved (or sometimes dissolved) by contemporary science. To mention but a few examples of issues that science has now allegedly dealt with: the origin and destiny of the universe, the origin of human life, the soul, free will, morality, and religion. My aim in this paper is threefold: (1) to show that these claims stem from the pervasive influence of a scientistic epistemology in popular science writing, (2) to argue that this influence is undesirable because it ultimately undermines not only the important role of popular science reporting in society but also the public’s trust in science, and (3) to offer suggestions on how popular science writing can be improved.
