When Is it Right to be Wrong? A Response to Lewandowsky, Kozyreva, and Ladyman, Neil Levy

Image credit: Daniel Freidman via Flickr / Creative Commons

Article Citation:

Levy, Neil. 2020. “When Is it Right to be Wrong? A Response to Lewandowsky, Kozyreva and Ladyman.” Social Epistemology Review and Reply Collective 9 (2): 32-36. https://wp.me/p1Bfg0-4Ou.

🔹 The PDF of the article gives specific page numbers.

This article replies to:

Lewandowsky, Stephan, Anastasia Kozyreva, and James Ladyman. 2020. “What Rationality? A Comment on Levy’s ‘Is Conspiracy Theorising Irrational?’” Social Epistemology Review and Reply Collective 9 (2): 25-31.

A Kennedy conspiracy theorist dies and goes to heaven. There he meets God himself. “What really happened that day in Dallas?”, he asks God. “What was the government trying to cover up?”
“There was no cover up,” replies God. “Lee Harvey Oswald acted alone.”
“Shit,” the conspiracy theorist thinks to himself. “This goes even higher than I’d thought.”

In “Is Conspiracy Theorising Irrational?” (Levy 2019) I argued that conspiratorial ideation—defined as the acceptance (not the generation) of conspiracy theories—might be much more rational than we tend to think. I suggested such ideation might be subjectively rational—rational for the agent—and that it may even be objectively rational (fitted to the environment in which the agent is located). Lewandowsky, Kozyreva and Ladyman argue that I can’t possibly be correct, for reasons that the joke above nicely illustrates.

Conspiracy theorising, they argue, is irrational on any plausible definition of rationality. They claim that a “core component” of such ideation is “its inherently self-sealing nature” (a term they borrow from Sunstein and Vermeule 2009). Evidence against the theory—that God himself denies it—is taken as evidence in its favor. Conspiracy theorists “wilfully” miss true positives, thereby failing to engage in actions that are genuinely in their own interests (say, having their children vaccinated). They ignore real dangers in favor of “chimerical threats”.

It is central to my view that conspiratorial ideation is heavily dependent on testimony. I claim that the disposition to accept testimony from some sources rather than others exhibited by conspiracy theorists may be rational. Again, Lewandowsky, Kozyreva and Ladyman aren’t having it. They point out, once more, that accepting conspiracy theories is often contrary to the interests of (and therefore, in a central sense, irrational for) the marginalised. The people who are more likely to engage in such ideation may have good reason to be suspicious of various agents who exploit or manipulate them, but they turn their suspicion in precisely the wrong directions.

I concede all this. Conspiracy theorists are getting things wrong (at least in the cases that feature as paradigms in this literature) and, worse, the dispositions that underlie their mistakes may on average make things go worse for them. But none of this shows that conspiracy theorists are irrational, in the way in which I am understanding irrationality.

Rationality and Mechanisms

Rationality, here, is a property of mechanisms. For a mechanism to be rational is for it to be disposed to process information, thereby altering dispositions or credences in ways that reflect the genuine evidential value of that information (for accounts of “reasons” and “reasoning” in this spirit, see Hieronymi 2005; Sturgeon 1994; Way 2017). There is no direct connection between getting things right and responding appropriately to the evidential value of reasons. When evidence is misleading, rationality requires getting things wrong. We know, for instance, that the Church was wrong in rejecting heliocentrism (i.e., that heliocentrism is true), but there’s ongoing debate about whether they were irrational to do so: was the evidence available to them genuinely more supportive of heliocentrism than geocentrism? If it was not, the Church might have been right to be wrong.

Of course, there is a non-accidental connection between responding appropriately to the evidential force of reasons and being right. Such responsiveness is designed to get things right. A rational agent may nevertheless get things wrong because the evidence is misleading, and the evidence may be misleading either due to properties of the evidence or due to properties of the agent.

Evidence is misleading on its face, let’s say, when it would be misleading for most agents, no matter the idiosyncrasies of their processing mechanisms. If the real murderer has tampered with the fingerprints undetectably, so that someone else is identified as the criminal, the evidence is misleading on its face, and most any rational agent would end up believing wrongly. In my paper, I had a different kind of case in mind, though: one in which the evidence isn’t misleading on its face, but in which a certain class of agents process evidence such that it is misleading for them.

The idea, roughly, is this. Conspiracy theorists are ‘losers’; that is, agents who for one reason or another are, or perceive themselves to be, socially unsuccessful (Uscinski and Parent 2014). Such agents can be expected to have their mechanisms of epistemic vigilance attuned to evidence of threat (roughly because, in the environment of evolutionary adaptedness, being low ranking correlated with being at heightened risk of aggression). I suggested such developmental (or perhaps episodic) fine-tuning of the mechanisms of epistemic vigilance might explain both the relatively high plausibility of conspiratorial explanations for losers and at least some of the psychological correlates of conspiratorial ideation (the correlates that Lewandowsky, Kozyreva and Ladyman cite as evidence of irrationality).

Mechanisms of epistemic vigilance are generally sensitive to cues to the plausibility of the information and to the reliability of the source, where reliability is a function of competence and benevolence (Harris 2012; Sperber et al. 2010). When someone’s mechanisms of epistemic vigilance are fine-tuned to threat (in the specific way suggested), they are more likely to regard a conspiracy theory as plausible, because they expect the world to be threatening. They are also more likely (again, relative to controls) to find the sources of conspiratorial messages trustworthy, due both to the content of the message (which may indicate shared values by appearing to offer them protection against exploitation) and, often, to characteristics of the source of the message (e.g. political affiliation). The fine-tuning of these mechanisms directly affects the relative plausibility of conspiratorial messages to ‘losers’.

Irrationality and Illusory Patterns

Lewandowsky, Kozyreva and Ladyman cite higher rates of belief in paranormal phenomena, superstition, pseudo-science, and pseudo-profound bullshit as evidence of irrationality, along with susceptibility to the conjunction fallacy, illusory pattern perception, and hyperactive agency detection (all of which I mentioned in the original article). With the possible exception of the conjunction fallacy, all of this is predicted by my account, not an objection to it.[1] Illusory pattern perception and hyperactive agency detection arise directly from heightened sensitivity to threats: patterns might indicate conspiracy, or simply information nefariously hidden from the person, and sensitivity to evidence of agency is adaptive when the agent might be at risk. Higher rates of superstition and the rest follow from heightened pattern perception and agency detection: being disposed to see illusory patterns entails a greater likelihood of perceiving non-naturalistic forces at work (e.g. Brugger et al. 1993), and being disposed to detect agency has of course often been suggested to play a role in theism (Barrett 2004) as well as in the belief in ghosts and other supernatural agents (e.g. van Elk 2013). Even the susceptibility to pseudo-profound bullshit might arise from illusory pattern detection.

Lewandowsky, Kozyreva and Ladyman suggest that it is irrational to prefer fake news, flat earth theories and conspiracy theories to official stories. But it really isn’t; not in the sense of ‘rationality’ at issue here. For those people whose mechanisms of epistemic vigilance are attuned as I suggest, it would be irrational to prefer the official sources: they would not be responding to the evidential value the information has for them. Their background beliefs and their dispositions to trust ensure that it is only by making reasoning errors that they could reject the conspiratorial explanation in favor of the truth. Their information-processing mechanisms are designed to process information in just the way they do; they believe wrongly, but rationally.

Rational Settings?

In my paper, I also suggested that there might be a sense in which these settings themselves are rational. That is, the mechanisms of epistemic vigilance might be appropriately fine-tuned to process information in just the way they do. They are appropriately attuned because ‘losers’ really are (or really were, in the environment of evolutionary adaptedness) at greater risk of exploitation, so it is rational to be attuned to evidence of such danger. Perhaps part of Lewandowsky, Kozyreva and Ladyman’s motivation for rejecting my account is that, prima facie, this is very puzzling. Here’s why. If our mechanisms have been appropriately attuned to an informational environment, we ought to expect the connection between rational processing and being right to be pretty reliable. But conspiracy theorists seem to get things wrong routinely.

There are at least two reasons why the unreliability of conspiracy theorists needn’t be an obstacle to acceptance of the view I’ve put forward.

First, the current environment is, of course, very unlike the environment of evolutionary adaptedness, so being appropriately attuned to the kinds of epistemic threats that arose in that environment doesn’t entail being attuned to analogous threats today. Even though the mechanisms of epistemic vigilance might be attuned as they are to minimize epistemic threat, in the contemporary environment they may actually increase vulnerability to such threats, whether from those who have learned to take advantage of them or simply through the ways in which the informational environment has changed.

Second, the fine-tuning of these mechanisms might not be aimed at minimizing such threats across the board, but rather at minimizing the threats that are most dangerous to the person, and might therefore tolerate a high rate of false positives. As Dan Kahan (2012), among others, has pointed out, many of the conspiratorial beliefs that ordinary people accept have little consequence for their lives, so these errors aren’t very costly. The costs they entail might be amply worth paying for protection against much more consequential threats.

Conspiracy theorists routinely get things wrong. The official sources are more reliable than the kinds of sources they rely on, in almost all cases, and the evidence on which they rely is misleading. But they nevertheless respond to it in a way that is appropriate, given how their evidence-processing mechanisms are attuned. Appropriate response to the force of reasons is what constitutes rationality, as I am using the term here, so conspiracy theorists are rational. They are wrong, but they are right to be wrong.

Contact details: Neil Levy, Macquarie University and University of Oxford, neil.levy@philosophy.ox.ac.uk

References

Barrett, Justin L. 2004. Why Would Anyone Believe in God? Walnut Creek, CA: AltaMira Press.

Brugger, Peter, Marianne Regard, Theodor Landis, Norman Cook, Denise Krebs, and Joseph Niederberger. 1993. “‘Meaningful’ Patterns in Visual Noise: Effects of Lateral Stimulation and the Observer’s Belief in ESP.” Psychopathology 26 (5-6): 261–265. https://doi.org/10.1159/000284831.

Clifford, Scott, Yongkwang Kim, and Brian W. Sullivan. 2020. “An Improved Question Format for Measuring Conspiracy Beliefs.” Public Opinion Quarterly nfz049. https://doi.org/10.1093/poq/nfz049.

Harris, Paul. 2012. Trusting What You’re Told. Harvard University Press.

Hieronymi, Pamela. 2005. “The Wrong Kind of Reason.” Journal of Philosophy 102 (9): 437–457. https://doi.org/10.5840/jphil2005102933.

Kahan, Dan. 2012. “Why We Are Poles Apart on Climate Change.” Nature 488: 255. https://doi.org/10.1038/488255a.

Levy, Neil. 2019. “Is Conspiracy Theorising Irrational?” Social Epistemology Review and Reply Collective 8 (10): 65-76.

Lewandowsky, Stephan, Anastasia Kozyreva, and James Ladyman. 2020. “What Rationality? A Comment on Levy’s ‘Is Conspiracy Theorising Irrational?’” Social Epistemology Review and Reply Collective 9 (2): 25-31.

Sperber, Dan, Fabrice Clément, Christophe Heintz, Olivier Mascaro, Hugo Mercier, Gloria Origgi, and Deirdre Wilson. 2010. “Epistemic Vigilance.” Mind & Language 25 (4): 359–393. https://doi.org/10.1111/j.1468-0017.2010.01394.x.

Sturgeon, Scott. 1994. “Good Reasoning and Cognitive Architecture.” Mind & Language 9 (1): 88–101. https://doi.org/10.1111/j.1468-0017.1994.tb00217.x.

Sunstein, Cass R. and Adrian Vermeule. 2009. “Conspiracy Theories: Causes and Cures.” Journal of Political Philosophy 17 (2): 202–227. https://doi.org/10.1111/j.1467-9760.2008.00325.x.

Uscinski, Joseph E. and Joseph M. Parent. 2014. American Conspiracy Theories. Oxford University Press.

van Elk, Michiel. 2013. “Paranormal Believers Are More Prone to Illusory Agency Detection Than Skeptics.” Consciousness and Cognition 22 (3): 1041–1046. https://doi.org/10.1016/j.concog.2013.07.004.

Way, Jonathan. 2017. “Reasons as Premises of Good Reasoning.” Pacific Philosophical Quarterly 98 (2): 251–270. https://doi.org/10.1111/papq.12135.


[1] What about the conjunction fallacy? There is some reason to think that the association between it and conspiratorial ideation is a methodological artifact. See Clifford et al. (2020) for discussion.


