Conspiracy Theories and Relevant Epistemic Authorities: A Response to Räikkä on Pejorative Definitions, Part III, Kurtis Hagen

In this essay, I argue that, with regard to controversial conspiracy theories:

(1) Determining what the evidence indicates by appealing to expert consensus is problematic;

(2) Identifying the relevant epistemic authorities is fraught with challenges; and

(3) The degree to which relevant epistemic authorities are reasonably regarded as reliable may vary and should not be assumed to be high.

General considerations supporting these conclusions include the following: For some conspiracy theories, some people with relevant expertise do not find those theories particularly unlikely given the facts they find most telling. Some prominent (ostensible) epistemic authorities may not have as much relevant expertise as it may seem.[1] And the positions taken by relevant experts may be of dubious reliability if those experts are embedded in biased epistemic environments, which is often the case for prominent experts who take positions on conspiracy theories. Such considerations problematize the use of a perceived expert consensus as a proxy for truth, warrant, or even plausibility on such matters.


Article Citation:

Hagen, Kurtis. 2023. “Conspiracy Theories and Relevant Epistemic Authorities: A Response to Räikkä on Pejorative Definitions, Part III.” Social Epistemology Review and Reply Collective 12 (11): 42–53. https://wp.me/p1Bfg0-8gL.

🔹 The PDF of the article gives specific page numbers.

Please refer to Part I and Part II of this three-part series.

This Article Replies to:

❧ Räikkä, Juha. 2023. “Why a Pejorative Definition of ‘Conspiracy Theory’ Need Not Be Unfair.” Social Epistemology Review and Reply Collective 12 (5): 63–71.

Highlighted Resources:

❦ Hagen, Kurtis. 2023. “Implausible Conspiracy Theories: A Response to Räikkä on Pejorative Definitions, Part II.” Social Epistemology Review and Reply Collective 12 (11): 30–41. https://wp.me/p1Bfg0-8gc.

❦ Hagen, Kurtis. 2023. “Three Ways to Define Conspiracy Theories: A Response to Räikkä on Pejorative Definitions, Part I.” Social Epistemology Review and Reply Collective 12 (11): 19–29.

❦ Hagen, Kurtis. 2022b. “Are ‘Conspiracy Theories’ So Unlikely to Be True? A Critique of Quassim Cassam’s Concept of ‘Conspiracy Theories’.” Social Epistemology 36 (3): 329–343.

❦ Dentith, M R. X. 2018. “Expertise and Conspiracy Theories.” Social Epistemology 32 (3): 196–208.

This essay is the final part of a three-part series responding to Juha Räikkä’s defense of a pejorative definition of the term “conspiracy theory.” The first essay argued that non-pejorative descriptive definitions can avoid Räikkä’s critique of the minimalist definition, though the minimalist definition is the most readily operationalizable, while the pejorative definition is the worst in this respect (Hagen 2023a). The second essay responded to Räikkä’s analogy involving implausible scientific theories, arguing that the analogy does not hold if such theories are not also regarded pejoratively, and that, if they are regarded pejoratively, they face unfair obstacles (Hagen 2023b). In the present essay, I challenge Räikkä’s assumptions regarding the significance of an apparent consensus among the relevant epistemic authorities, assumptions upon which his arguments rely. In particular, he overestimates the ease with which the plausibility of a conspiracy theory can be uncontroversially ascertained, since his assessment rests on the presumed reliability of an apparent consensus of seemingly relevant ostensible epistemic authorities. These considerations all militate against defining “conspiracy theory” pejoratively.

Epistemic Authorities

In a 2007 article, Neil Levy argued that it is never rational to accept a conspiracy theory that conflicts with the position taken by properly constituted epistemic authorities.[2] On the one hand, if “epistemic authorities” means the actual epistemic authorities, then the statement is close to a tautology, as David Coady pointed out in response to Levy (Coady 2007, 198). On the other hand, if “epistemic authorities” refers instead to the ostensible epistemic authorities—such as those prominently placed or appointed to run an official investigation—then it will often be clearer (although not always beyond dispute) who these “experts” are.

But it will be less clear that they are actual epistemic authorities, or that they are the only relevant epistemic authorities. And when there is disagreement among epistemic authorities, it is not clear that one should simply rely on the majority, or the purported consensus, or the most prominently situated. This is especially true when there is an imbalance in the relevant incentive structures. Plausibly, experts who publicly espouse views for which they are predictably subject to criticism, ridicule, and material harm may be thought more credible, in some significant sense, than those who are, in effect, rewarded for their views. And the fact that dissenters are predictably subject to such unpleasant consequences may tend to unfairly reduce their numbers, as discussed below.

Räikkä takes a position similar to Levy’s. He suggests that the contrast between conspiracy theories and the (supposedly) nearly unanimous opinion of relevant epistemic authorities against those theories renders them implausible. Räikkä writes, “Experts can disagree and they can be wrong, but this does not imply that we should give up the presumption of trustworthiness when a consensus is clear enough” (68). However, even a genuine consensus among experts may be mistaken, as has often been the case in the history of science and medicine. The consensus may be wrong, though warranted by the evidence available. Or, alternatively, the consensus may have been forged more by social forces than by evidence. Further, an apparent consensus may not be real, or may be weaker than it appears.

This much of Räikkä’s point I can accept: A contrast with the consensus of experts would constitute a reason to consider a conspiracy theory unlikely. However, one’s final judgment regarding likelihood should be based on more than this one consideration. For example, the context (e.g., the corruption level of the country) also matters, as does pertinent evidence of all kinds. Further, degrees of various kinds matter. For example, the strength of this consideration will depend on the strength of the agreement among epistemic authorities. And this is not as easy to judge as Räikkä seems to assume. For many theories dismissed as “conspiracy theories,” the strength of this epistemic consensus is questionable, or so I will argue. In addition, a theory can be unlikely in a way and degree that does not, and should not, invite stigmatization (as discussed in Part II of this series, Hagen 2023b). In the following sub-sections I argue that Räikkä misjudges the ease with which the degree of unlikeliness of a theory can be determined by appeal to epistemic authorities.

Is it Easy to Discern What the Best Evidence Indicates?

Räikkä bases his claim about the implausibility of conspiracy theories on the notion that they “conflict with the best available knowledge” and “are poorly supported by the evidence” (Räikkä 2023, 65).[3] But this is precisely what believers in individual conspiracy theories deny about those theories; how well theories labeled “conspiracy theories” are supported by evidence is contestable. While such contestation will be more plausible in some cases than others, even regarding paradigmatic conspiracy theories, such as JFK and 9/11 conspiracy theories, competent conspiracy theorists can marshal many complex lines of evidence, which are often not easy to judge.

Yet Räikkä seems to believe that it is relatively easy to discern what the best evidence indicates, namely, by appealing to a (supposed) nearly unanimous opinion among the relevant experts. Specifically, regarding conspiratorial claims that are regarded as implausible, he writes, “typically relevant experts more or less unanimously conclude that the claim is unlikely in light of the facts that are presently known” (65).[4] However, if we take “unlikely” in a weak sense, this is neither particularly pejorative nor a quality that distinguishes conspiracy theories from other theories that are not regarded as problematic. But here Räikkä implies a somewhat stronger sense of “unlikely.” He writes, “relevant experts (e.g., investigative journalists, various state authorities and agencies, the scientific community, and professional historians) consider the merits of the theories poor” (65).

Räikkä seems to conflate the subjective views of the presumed relevant experts with the objective merits of the theory. He slides from what they (seem to) consider the merits to be to the claim that the theories “conflict with the best available knowledge” and “are poorly supported by the evidence” (65). There are several steps where things can go wrong: Expert statements on the record may not be a sincere and complete reflection of what those experts think. What is thought by these experts may not represent the best interpretation of the available evidence. What evidence is available is, in part, a product of social forces.[5] And the set of presumed relevant experts might not be correctly determined. In addition, experts may be biased or operate within a biased incentive structure, or they may even sometimes be dishonest (including by omission). Further, at least in many significant cases, there are people with relevant expertise who support the conspiracy theories.

While there may seem to be few dissenting experts, in circumstances in which dissenting experts are routinely maligned, one can’t expect many to make themselves known. And, given their views, they can’t be expected to be prominently situated. They are unlikely to be selected for particularly influential roles. And if they did hold a respectable position, they may well lose it. For example, medical doctors who took heterodox positions on covid treatments soon found themselves with less prestigious affiliations than they had before voicing those positions.[6]

Let’s look a little closer at the dynamics in play here. If an expert is forced out of a prestigious institution on account of their dissent, their opinion is thereby degraded. And reputations can be degraded even without the loss of position. Mitchell Liester provides recent examples of both types: They include “doctors in Maryland and Arkansas who are being investigated for prescribing Ivermectin, which is not illegal and falls within FDA guidelines, and the Pfizer whistleblower who was fired after reporting problems with Pfizer’s COVID-19 vaccine trials” (2022, 63). Brian Martin labels such direct results of suppression of dissent a “primary effect” (2014, 147). A “secondary effect” occurs when other experts notice the primary effect and refrain from expressing similar dissent in order to avoid a similar fate. In such cases, one cannot assume that the great majority, most of whom are silent, are fully on board with the orthodoxy.

Rico Hauswald suggests a further, “tertiary,” effect. Experts who believe the orthodox position, noticing that there is little dissent, but not recognizing this to be a result of the secondary effect, assume that there is little reason to question the orthodoxy. And this may result in fewer of these experts coming to doubt the orthodox position than there otherwise would have been.[7] Pejorative labeling can produce the same types of effects: A few experts are stigmatized directly. Others self-censor to avoid such stigmatization. Still others, seeing dissent coming only from a few “cranks,” don’t see much need for critical inquiry. The appearance of a considered consensus may thus be largely illusory.

Regarding Räikkä’s analysis, there is a feedback loop that he ignores. A pejorative label helps maintain an epistemic imbalance; it plays a role in maintaining an informal (dis)incentive structure that favors some views over others. At the same time, statements by experts operating in that structure serve as the basis, in Räikkä’s analysis, for the pejorative label. In other words, a pejorative label influences dynamics that support a perceived consensus of “experts,” and this perceived consensus justifies the pejorative label.

Is it Easy to Discern Who the Relevant Epistemic Authorities Are?

Räikkä states that, usually, “it is relatively easy to identify relevant epistemic authorities” (68). At one point he suggests that relevant experts are “investigative journalists, various state authorities and agencies, the scientific community, and professional historians” (65). But these are broad categories that don’t clearly establish relevant expertise and don’t include many people who do have relevant expertise. As Tim Hayward points out, “The collective production of reliable knowledge is by no means the sole preserve of accredited institutions of epistemic authority” (Hayward 2021, 2). Further, some investigative journalists are dismissed as “conspiracy theorists,” and other properly credentialed “experts” haven’t studied the details of the cases in question. In “Expertise and Conspiracy Theories,” M Dentith (2018) has argued that, when it comes to evaluating conspiracy theories, it is not clear who the relevant experts are (see also Hagen 2022b, 335–336; 2022a, 54–57; Hayward 2021, 8–9, and 2023, 6–8).[8] As Hayward notes, “[D]etermining the relevant epistemic authorities in any given case … with respect to controversial questions, may itself be a matter of controversy” (Hayward 2021, 8).

To be an expert on a particular conspiracy theory, one must, among other things, have expert-level knowledge of that theory and of the relevant facts involved. There may be only a relatively small number of people with relevant technical expertise who have also studied a particular case. When an official echo chamber magnifies the influence of experts who support an official account, and there are informal but serious disincentives for qualified experts to demur,[9] the consensus may appear more impressive than it actually is. And the existence of a small group of experts who do dissent from the official view may be more epistemically significant than Räikkä’s depiction suggests (see Hayward 2023). Further, when dissenting experts face disincentives, their dissent should arguably be regarded as more trustworthy than the assent of their positively incentivized counterparts. This is roughly analogous to the legal principle of the “statement against interest,” according to which such statements are regarded as more credible than self-serving ones.

As an example of how supporting conspiracy theories can be detrimental to one’s career, New Hampshire Governor John Lynch called for a professor at the University of New Hampshire to be fired “for his rather bizarre views on the 9/11 attacks” (see Hagen 2022a, 129, for more on this and similar examples). And, more recently, many medical doctors who disagreed with the standard of care for covid patients ended up losing their jobs.

How significant is this “statement against interest” consideration when a greater number of epistemic authorities seem to favor official stories? There is no general answer; one simply must consider the particulars of each case. So, simply appealing to such ostensible epistemic authorities, as a general rule, is not fully convincing. When dissenting experts exist, I maintain, it is reasonable for a layperson to consider the arguments and make judgments regarding which view makes sense, just as voters judge political candidates based on their own assessments, and juries render verdicts based on often conflicting expert testimony on subjects regarding which the jurors themselves are not experts.[10] Laypersons may be disadvantaged by their lack of domain expertise, but they are not completely without resources for making judgments. It is sometimes also reasonable for laypersons to downgrade their assessment of a particular epistemic authority based on non-domain-specific considerations, as discussed below.

Is it Reasonable to Trust Experts Regarding Conspiracy Theories?

Räikkä maintains that “it is reasonable to trust [relevant epistemic authorities] when they (more or less unanimously)” assert that certain conspiracy theories are false (68). Räikkä here refers to several theories that he seems to assume his readers will agree are false, and for which identifying the relevant experts may be thought unproblematic.[11] But this doesn’t seem generalizable. For some cases it seems questionable whether it is reasonable to trust the (ostensible) relevant epistemic authorities.

When it comes to the justification of war, is it reasonable to regard the political officials and the generals who are promoted in the media as the relevant experts, and to unquestioningly trust them, as we are often encouraged to do? Is it obvious that people closely tied to the pharmaceutical industry are the experts we should trust regarding pharmaceutical products? Does it help that leaders and spokespersons for regulatory agencies also support industry-friendly positions, when those people so often end up richly rewarded by pharmaceutical companies? And what should we think of the strength of the consensus when doctors and other qualified experts who might dispute the orthodox position are at risk of losing their jobs and licenses for doing so?[12] In general, can we trust conflicted experts? And can we judge the degree of consensus when incentive structures influence important dynamics?

Further, it is not only selfish motives and self-serving biases that can skew expert testimony. Experts might also be less than perfectly honest because they think something of great importance is at stake. An expert might publicly support a view they recognize as likely to be false, or express more confidence than is warranted in one they believe or hope is true, on account of a public-spirited concern for consequences. For example, Earl Warren seems to have been influenced by the concern that, if the commission he led found a Cuban or Soviet conspiracy behind the JFK assassination, it would lead to nuclear war. And so, he seems to have been persuaded, the commission must find that Oswald was a lone gunman.[13] As Patrick Brooks expresses this dynamic, “[A]n epistemic authority asserts that P even though P is false because he or she has good reason to believe that asserting Q (even though Q is true) will lead to bad outcomes” (Brooks 2023, 17, n17). One might worry, for example, that expert support for the safety of vaccines is influenced by a well-motivated desire to prevent vaccine hesitancy (see Hagen 2022c), and that experts might exaggerate evidence for a climate emergency because they believe that doing so is necessary to provoke action.[14]

Agenda-driven experts can also influence public perceptions regarding whether a scientific consensus exists by declarations to that effect, by blocking heterodox research at the peer-review stage, or by producing misleading research.[15] Even experts who hold dissenting views may be motivated to maintain an illusion of consensus, as suggested by John Beatty, partly to preserve their status as experts.[16] Consequentialist concerns can influence not only expert pronouncements on the state of the evidence that currently exists but also on what kind of research will or will not be conducted.[17]

In addition, not all experts are treated the same. And such differences may not be merely based on the relevance of expertise or the level of expertise. Some experts are discredited (rightly or wrongly) and others find themselves in positions of great influence. They may be chosen for an investigatory committee or hold a high post in the government (like Anthony Fauci) or head a professional organization (like the AMA). As Tim Hayward points out, “appeal to an expert consensus can be epistemically risky if the composition of the group of designated experts appealed to is selective on a challengeable basis” (Hayward 2023, 7). One may legitimately worry that the privileged experts “could have been strategically selected on the basis that promoters of the official story have confidence that these experts’ findings will support a desired conclusion” (Hayward 2023, 7).[18]

In addition, laypersons may sometimes legitimately discount or downgrade their assessment of (ostensible) experts. Hauswald argues that laypersons very often have non-domain-specific reasons relevant to the judgment of whether an ostensible epistemic authority is a genuine epistemic authority, and that these “control reasons” may be stronger than their reasons to believe a proposition based on the endorsement of an (ostensible) epistemic authority (Hauswald 2021).

Relatedly, Brooks raises a concern about epistemic authorities behaving dismissively, which could be thought of as serving as a kind of control reason. Brooks argues that, when epistemic authorities respond with ridicule and casual dismissal to alternative hypotheses, or to questioning of apparent anomalies in official accounts, rather than with appropriate answers, they are failing to act in accordance with norms applicable to such authorities. There is a “behavioral tension” between how they are acting and how an epistemic authority should act. This tension seems to call for an explanation.

Explanations that come rather naturally to people include the authorities being biased or captured. In response to this, their authority may be downgraded and/or conspiracy theories may arise (Brooks 2023). This suggests that, when epistemic authorities use the phrase “conspiracy theory” pejoratively, doing so may serve the function of discrediting the theory in the minds of some people, but it also undermines those authorities, at least in the minds of those sympathetic to the theories in question. And it is not at all clear that this is unreasonable.

One might respond, “Despite all these complications, the apparent preponderance of experts favoring one side of an issue deserves some weight. It is a relevant consideration.” Yes, but that is all it is. And how much weight it deserves in a particular case will depend on the particulars.

A Brief Reality Check

The Earth is not the center of the universe, and humans did evolve from other creatures, despite the ostensibly authoritative assurances otherwise that once prevailed. But we need not look too far back in history to find cases that may inform our assessment of whether an apparent consensus of epistemic authorities can be safely taken for granted. Here are a few recent situations that are worth thinking about in this regard:

• A large number of intelligence experts intimated that the Hunter Biden laptop situation was Russian disinformation, presenting this as the expert consensus view.[19] This seems to have been not only untrue, but intentionally deceptive.

• A group of prominent experts appear to have colluded to create the impression that there was a strong scientific consensus supporting the zoonotic origin of SARS-CoV-2. They publicly suggested that the lab leak hypothesis was just a conspiracy theory[20] and that their “analyses clearly show that SARS-CoV-2 is not a laboratory construct or a purposefully manipulated virus,” while privately making such comments as these: “The lab escape version of this is so friggin’ likely because they were already doing this work.” “I really can’t think of a plausible natural scenario.” “It’s not crackpot to suggest this could have happened given the gain of function research we know is happening.” And, “60-40 lab” (National Review 2023).

• There seemed to be an expert consensus against the use of ivermectin for covid-19. Seemingly on this basis, it was not only not deemed part of the standard of care, but some physicians were actively blocked from, or punished for, using it. And some pharmacies refused to dispense it, despite valid prescriptions. Further, ostensibly representing authoritative consensus, the FDA misleadingly implied that it is dangerous and that it was essentially meant for horses,[21] while mainstream news sources seemed to corroborate this view, unfairly stigmatizing it as “horse dewormer.”[22] Whether or not ivermectin is truly effective against covid, which remains controversial, its depiction as dangerous and primarily an animal medicine was dishonest. Although there may never have been anything like a consensus of genuine epistemic authorities on these two issues, there does seem to have been a partially successful attempt to make it seem as though there was such a consensus.[23]

• Critics of lockdowns were treated as censorship-worthy spreaders of misinformation. Despite their credentials, they were not counted as relevant epistemic authorities. Jay Bhattacharya of Stanford, Martin Kulldorff of Harvard, and Sunetra Gupta of Oxford, for example, were dismissively referred to as “fringe epidemiologists” by Francis Collins in an email to Anthony Fauci; the two seem to have conspired to coordinate a “takedown” of their position (Gillespie 2022). And a prominent German virologist, Christian Drosten, referred to the three as “pseudo-experts” (FOS-SA 2021). They were framed as either not relevant or not genuine epistemic authorities. And yet, it seems, they should have been given a fairer hearing.

Much more could be said. Indeed, books have been written about such matters. But I’ll end the reality check here. For if one delves too far into details in support of the heterodox side of such issues, one runs the risk of being dismissed as a “conspiracy theorist.”

Conclusion

Räikkä suggests that (the appearance of) expert consensus, which is supposedly nearly unanimous, in opposition to conspiracy theories justifies the belief that such theories are implausible, and that this implausibility justifies defining the term pejoratively. But there are many problems with this. For one thing, there are many steps between the appearance of an expert consensus and actual implausibility of the degree required to justify criticism of those who investigate the supposedly implausible claims. The full spectrum of relevant experts must be properly identified, and this is not as easy as it might at first seem. Further, we must accurately identify all the relevant conflicts and incentive structures and make the proper credibility adjustments. But that is unmanageable.

Alternatively, we must assume that these relevant experts are honest and not strongly biased in a particular direction. But the degree to which those are safe assumptions will depend on the specific context, which must be considered for each case. Reasonable people may disagree in their assessment of this, but even in supposedly open societies with relatively low levels of obvious corruption, there is often much at stake in these matters. So, even in such societies, conflicts of interest and incentive structures ought to be considered. And the significance of such considerations may vary depending on the nature of the conspiracy theory. This can’t simply be judged at a general level. And thus, the case for defining “conspiracy theory” pejoratively based on the assumed general reliability of relevant ostensible epistemic authorities is unconvincing.

This concludes a three-part response to Räikkä’s article defending a pejorative definition of “conspiracy theory.” I focused on three general problems with Räikkä’s arguments:

(1) His critique of the minimalist definition doesn’t apply to non-minimalist (descriptive) non-pejorative definitions, and also misses a significant disadvantage of using pejorative definitions for scientific research.

(2) His argument trades on ambiguities in words like “implausible.” When meanings are held constant, Räikkä’s analogy with implausible scientific theories does not imply that a pejorative definition of “conspiracy theory” would be unproblematic.

(3) Räikkä makes unsafe assumptions about the reliability of what may seem to be a consensus of the purportedly relevant ostensible epistemic authorities.

Taken together, these considerations suggest that “conspiracy theory” should not be defined pejoratively. Specifically, the case in favor of such a definition is unconvincing because of its problematic reliance on ostensible epistemic authorities and on various equivocations, such as on the meaning of “implausible.” And the case against the viability of a descriptive non-pejorative definition is, at best, underdeveloped. In addition, the common objection to a pejorative definition based on its pernicious effects on inquiry remains persuasive. And further, a pejorative definition would problematize attempts to do objective scientific research on the category “conspiracy theories,” and on people who believe such theories, since which cases should qualify will depend on unavoidably controversial assessments.

Acknowledgements 

I thank Brian Martin, Rico Hauswald, and Patrick Brooks for their helpful comments on earlier versions of this essay.

References

Andersen, Kristian G., Andrew Rambaut, W. I. Lipkin, et al. 2020. “The Proximal Origin of SARS-CoV-2.” Nature Medicine 26: 450–452. https://doi.org/10.1038/s41591-020-0820-9.

Attkisson, Sharyl. 2008. “Leading Dr.: Vaccines-Autism Worth Study.” CBSNews.com May 12. https://www.cbsnews.com/news/leading-dr-vaccines-autism-worth-study/.

Beatty, John. 2006. “Masking Disagreement among Experts.” Episteme 3 (1-2): 52–67. doi: 10.1353/epi.0.0001.

Bertrand, Natasha. 2020. “Hunter Biden Story is Russian Disinfo, Dozens of Former Intel Officials Say.” Politico October 19. https://www.politico.com/news/2020/10/19/hunter-biden-story-russian-disinfo-430276.

Brooks, Patrick. 2023. “On the Origin of Conspiracy Theories.” Philosophical Studies. https://doi.org/10.1007/s11098-023-02040-3.

Coady, David. 2007. “Are Conspiracy Theorists Irrational?” Episteme 4 (2): 193–204. https://www.muse.jhu.edu/article/228146.

Cook, John, Dana Nuccitelli, Sarah A. Green, Mark Richardson, Bärbel Winkler, Rob Painting, Robert Way, Peter Jacobs, and Andrew Skuce. 2013. “Quantifying the Consensus on Anthropogenic Global Warming in the Scientific Literature.” Environmental Research Letters 8 (2): 1–7. doi: 10.1088/1748-9326/8/2/024024.

Dentith, M R. X. 2018. “Expertise and Conspiracy Theories.” Social Epistemology 32 (3): 196–208. doi: 10.1080/02691728.2018.1440021.

FOS-SA. 2021. “‘Pseudo-Experts’: Drosten Defamed Well-known Colleagues from Harvard, Oxford and Stanford.” Freedom Of Speech April 8. https://fos-sa.org/2021/04/08/pseudo-experts-drosten-defamed-well-known-colleagues-from-harvard-oxford-and-stanford/.

Gillespie, Nick. 2022. “Dr. Jay Bhattacharya: How to Avoid ‘Absolutely Catastrophic’ COVID Mistakes.” Reason April 20. https://reason.com/podcast/2022/04/20/dr-jay-bhattacharya-how-to-avoid-absolutely-catastrophic-covid-mistakes/.

Goldman, Alvin I. 2021. “How Can You Spot the Experts? An Essay in Social Epistemology.” Royal Institute of Philosophy Supplements 89: 85–98. doi: 10.1017/S1358246121000060.

Goldman, Alvin I. 2018. “Expertise.” Topoi 37: 3–10. doi: 10.1007/s11245-016-9410-3.

Goldman, Alvin I. 2001. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63 (1): 85–109. https://doi.org/10.1111/j.1933-1592.2001.tb00093.x.

Hagen, Kurtis. 2022a. Conspiracy Theories and the Failure of Intellectual Critique. Ann Arbor: University of Michigan Press.

Hagen, Kurtis. 2022b. “Are ‘Conspiracy Theories’ So Unlikely to Be True? A Critique of Quassim Cassam’s Concept of ‘Conspiracy Theories’.” Social Epistemology 36 (3): 329–343. https://doi.org/10.1080/02691728.2021.2009930.

Hagen, Kurtis. 2022c. “Vaccination and Intellectual Honesty: Reflections on a Theme in Recent SERRC Articles.” Social Epistemology Review and Reply Collective 11 (5): 71–77. https://wp.me/p1Bfg0-6QF.

Hagen, Kurtis. 2022d. “Misinterpretation of Statistical Nonsignificance as a Sign of Potential Bias: Hydroxychloroquine as a Case Study.” Accountability in Research 1–20. doi: 10.1080/08989621.2022.2155517.

Hauswald, Rico. 2021. “The Weakness of Weak Preemptionism.” The Philosophical Quarterly 71 (1): 37–55. https://doi.org/10.1093/pq/pqaa024.

Hayward, Tim. 2023. “The Applied Epistemology of Official Stories.” Social Epistemology 1–21. doi: 10.1080/02691728.2023.2227950.

Hayward, Tim. 2021. “‘Conspiracy Theory’: The Case for Being Critically Receptive.” Journal of Social Philosophy 53 (2): 148–167. doi: 10.1111/josp.12432.

Levy, Neil. 2007. “Radically Socialized Knowledge and Conspiracy Theories.” Episteme 4 (2): 181–192. doi: 10.3366/epi.2007.4.2.181.

Liester, Mitchell B. 2022. “The Suppression of Dissent During the COVID-19 Pandemic.” Social Epistemology Review and Reply Collective 11 (4): 53–76. https://wp.me/p1Bfg0-6Jw.

Martin, Brian. 2014. “Dissent in Science.” In Science and Politics: An A-to-Z Guide to Issues and Controversies edited by Brent S. Steel, 145–149. Los Angeles: Sage.

Myers, Steven Lee. 2023. “A Federal Court Blocks California’s New Medical Misinformation Law.” The New York Times January 26. https://www.nytimes.com/2023/01/26/technology/federal-court-blocks-california-medical-misinformation-law.html.

National Review. 2023. “The Covid Cover-up.” August 1. https://www.nationalreview.com/2023/08/the-covid-cover-up/.

Olmsted, Kathryn S. 2009. Real Enemies: Conspiracy Theories and American Democracy, World War I to 9/11. New York: Oxford University Press.

Ordoña, Michael. 2021. “Joe Rogan Says CNN Lied about his COVID-19 Treatment. Don Lemon Says that’s Not True.” Los Angeles Times October 15. https://www.latimes.com/entertainment-arts/story/2021-10-15/joe-rogan-covid-19-treatment-don-lemon-cnn.

Räikkä, Juha. 2023. “Why a Pejorative Definition of ‘Conspiracy Theory’ Need Not Be Unfair.” Social Epistemology Review and Reply Collective 12 (5): 63–71. https://wp.me/p1Bfg0-7Pf.

Schneider, Stephen H. 1996. “Don’t Bet All Environmental Changes Will be Beneficial.” APS News (The American Physical Society) 5 (8): 5. https://www.aps.org/publications/apsnews/199608/environmental.cfm.


[1] As an example, in debates about the wisdom of water fluoridation, dentists are often deferred to as relevant experts. But the effects of fluoride on teeth, while relevant, are not actually central to the debate, which is more about health risks and impacts on IQ.

[2] For my response to Levy’s arguments, see Hagen 2022a, 53–57.

[3] Hereafter, when citing Räikkä 2023, I simply provide the page number, omitting “Räikkä 2023.”

[4] It is not entirely clear what would constitute “more or less” unanimous agreement among experts. In the case of 9/11, for example, as Hayward notes, “[S]ignificant numbers of relevant experts in the cited fields [i.e., engineers, politics professors, security experts and journalists] have articulated substantial doubts about the official conspiracy theory, and some of these can be found explicated in authoritative publications” (Hayward 2021, 10). I’ve made similar observations (see Hagen 2022a, 26, 57, 114, 138, and 161).

[5] As an example, consider: CBS News describes Bernadine Healy, former director of the National Institutes of Health, as suggesting that “public health officials have intentionally avoided researching whether subsets of children are ‘susceptible’ to vaccine side effects—afraid the answer will scare the public” (Attkisson 2008).

[6] Examples include Paul Marik, Pierre Kory, and Peter McCullough.

[7] Hauswald makes this point in a forthcoming book (in German) on epistemic authority (personal communication, October 29, 2023).

[8] For more general discussions of the problems involved in judging expertise, see Goldman 2001, 2018, and 2021.

[9] As Hayward notes, “[A] clear purpose for fostering the very concept of ‘conspiracy theory’ has, in practice, been to disparage it so that people who desire to have a reputation as intellectually serious, or even just sensible, are discouraged from engaging in it” (Hayward 2021, 5). Though informal, this can be a significant disincentive.

[10] As an example, consider that the (ostensible) consensus view for a period of time was that so-called “natural immunity” could be reasonably disregarded when evaluating whether covid vaccination should be mandated. Unvaccinated people who had recovered from covid were actually denied organ transplants and, apparently as a result, some died. When something that extreme occurs, presumably something is operating as an ostensible scientific consensus to justify it. But even a layperson is capable of seeing that it doesn’t make sense in light of what was commonly known.

[11] Specifically, Räikkä gives the following examples: “that climate change is a fact, that the Moon landing was not faked, that there are no space lizards, that Elvis Presley is dead, that Obama’s actual birthplace was not Kenya, that vaccines do not contain microchips, that 2020 election was not stolen, and so on” (68).

[12] An extreme example is the California covid misinformation law that allowed for doctors to be punished for giving patients “misinformation.” Misinformation was defined as “false information that is contradicted by contemporary scientific consensus contrary to the standard of care” (Myers 2023). This seems to equate contradicting the (ostensible) consensus with being false. A federal judge blocked enforcement of the law, ruling that it could have a chilling effect on doctors’ communication with patients. Although the law was not enforced, and was limited in jurisdiction anyway, this case nevertheless dramatizes how (ostensible) consensus can be self-reinforcing.

[13] For more on the “benign” coverup view of the JFK assassination, see Hagen 2022a, 212–215. See also Olmsted 2009, 116–119, particularly for the background on Warren’s beliefs.

[14] Climatologist Stephen Schneider once stated: “[W]e have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. Each of us has to decide what the right balance is between being effective and being honest” (Schneider 1996). It should be noted that Schneider was not advocating dishonesty, and that he objected to this quotation being taken out of context to suggest otherwise. However, he did seem to be acknowledging a real ethical tension.

[15] Agenda-driven experts could even produce flawed research purportedly documenting the consensus. As an example, John Cook, co-founder and president of Skeptical Science, which runs a website with a mission to “debunk misinformation” about climate change, was the lead author of a paper that purported to establish a 97% consensus among climate scientists regarding anthropogenic global warming, glossed as the idea that “human activity is very likely causing most of the current” global warming (Cook et al. 2013, 2, emphasis added). However, the authors count as part of the consensus statements affirming merely that human activities “contribute to global climate change” (3, emphasis added), as well as merely implicit endorsements. Many scientists who are generally categorized as climate change “skeptics,” or even “deniers,” could be, by this measure, included in the “consensus,” as they often acknowledge some human influence on climate.

[16] Beatty suggests, “[D]ownplaying their current disagreements might be crucial to gaining the public’s confidence. This could be self-serving, in that the public’s confidence might also pay off in terms of financial support. But it may not be entirely self-serving. The group of scientists in question might believe that they really are the public’s best advisors in the long run, and that the only way to convince the public is by downplaying their differences in the short run” (Beatty 2006, 54).

[17] See footnote 5, above.

[18] In general, regarding official stories (and presumably conspiracy theories as well), “the nature of the case to be investigated just does not neatly fall under the clear purview of any established specialism or collaboration. There may simply be no particular designated expert group that is uniquely or fully authoritative with respect to the matter at hand” (Hayward 2023, 7). See also Dentith 2018; and Hagen 2022a, 25–26, 53–57.

[19] Under the headline, “Hunter Biden Story is Russian Disinfo, Dozens of Former Intel Officials Say,” Politico reported, “More than 50 former senior intelligence officials have signed on to a letter outlining their belief that the recent disclosure of emails allegedly belonging to Joe Biden’s son ‘has all the classic earmarks of a Russian information operation’” (Bertrand 2020).

[20] They tried to wriggle out of this by claiming to mean one thing while rhetorically suggesting something else. Specifically, Kristian Andersen testified that he was referring to an extreme version of the theory in which the virus was intentionally engineered to be used as a bioweapon. However, their public statement had the clear rhetorical effect of suggesting that the virus could not have been a product of gain-of-function research, as the first quotation in the main text reveals (see also Andersen et al. 2020, 450).

[21] See the FDA Twitter post: https://twitter.com/US_FDA/status/1429050070243192839. Defending itself in court over the matter, the FDA later argued that it meant only that people should not take the formulations meant for animals. But its messaging frequently lacked such nuance.

[22] CNN famously suggested that Joe Rogan had taken “horse dewormer,” though he had actually taken ivermectin that was formulated for human consumption and prescribed to him by a doctor. See Ordoña 2021. Note that I regard this LA Times article as less than entirely neutral. Its confirmation of my claim should be seen through the lens of “statement against interest.”

[23] Similar remarks could be made about hydroxychloroquine. I discuss the way this product was unfairly evaluated in the medical literature in Hagen 2022d.





2 replies

  1. I would like to thank Kurtis Hagen for his thoughtful and detailed comments on my paper “Why a Pejorative Definition of ‘Conspiracy Theory’ Need Not Be Unfair.” Social Epistemology Review and Reply Collective 12 (5): 63–71. https://wp.me/p1Bfg0-7Pf. Possibly, one reason for the disagreements is that we have different views about epistemic dependence and our chances of escaping it. I am pessimistic, and I don’t share the traditional and romantic epistemological view that rational belief-formation consists solely of one’s own independent judgment. Of course, identifying relevant experts can sometimes be difficult, but, in the main, people seem to be able to do it. — from Juha Räikkä, posted by Jim Collier.

  2. Hi Juha,

    Doesn’t this strike you as a circular argument?

    “Of course, identifying relevant experts can sometimes be difficult but, in the main, people seem to be able to do it.”

    As for your concerns, they are well placed but hardly romantic:

    “Possibly, one reason for the disagreements is that we have a different view about epistemic dependence and our chances to escape it. I am pessimistic and I don’t share the traditional and romantic epistemological view that a rational belief-formation consists solely of one’s own independent judgment.”

    “Possibly” is easily purchased. The point is that a rational belief system does in fact depend on every member of a population forming their own view on the evidence and interacting on the basis of that personal view. It’s not romantic; sometimes (rarely) it’s divorce, but it’s democracy.

    Cheers~
    Lee
