This continuing exchange (Stanovich 2021a) makes it clear that Neil Levy (2021, 2022b) and I agree on many things—but we do tend to emphasize different issues and framings. Much more than he does, I tend to emphasize our agreement. And I get it—disagreement is the thing that fuels interesting academic discourse. However, accurately mapping the overall landscape is important too. That’s why, here again, I will tend to emphasize our areas of agreement more than he does … [please read the rest of the article below].
Stanovich, Keith E. 2022. “More Rational Disagreement, But Some Convergence Too.” Social Epistemology Review and Reply Collective 11 (3): 32–35. https://wp.me/p1Bfg0-6BE.
🔹 The PDF of the article gives specific page numbers.
❦ Levy, Neil. 2021. “Is Myside Bias Irrational? A Biased Review of The Bias That Divides Us.” Social Epistemology Review and Reply Collective 10 (10): 31–38.
❦ Stanovich, Keith E. 2021. “A Rational Disagreement about Myside Bias.” Social Epistemology Review and Reply Collective 10 (12): 48–57.
❦ Levy, Neil. 2022b. “The Bias that Unites Us: A Reply to Keith Stanovich.” Social Epistemology Review and Reply Collective 11 (1): 14–17.
On Conceptions of Rationality
In a parenthetical comment, Levy sets aside a huge area of agreement between us when he stipulates: “I set aside ecological or instrumental conceptions of rationality, and focus on epistemic rationality alone” (2022b, 14). It is perfectly acceptable for him to do this in order to isolate the important parts of his argument. However, it is also fair for me to point out to potential readers that it is the instrumental conception of rationality that makes large swaths of myside processing rational, something I note repeatedly in the book. There, I discussed many theorists who emphasize that social benefits make myside bias rational from an instrumental standpoint. I think one reason I was surprised by the claim in Levy’s (2021) initial review that one of my themes was the irrationality of myside bias is that I was focused (perhaps overly focused) on the parts of the book that emphasized its rationality in the social domain.
I don’t think that Levy and I really disagree that much about these issues on the instrumental end of things; it’s just that he wants to put them aside to focus only on epistemic rationality, whereas I think putting them aside misses some important contextualizing discussions in the book—discussions that enlarge the size of the domain in which I think myside bias is not irrational. For example, I pointed out in my previous reply that I give considerable emphasis to Kahan’s discussion of the tragedy of the communications commons whereby rationality at the individual level yields a suboptimal outcome at the societal level. This emphasis on the commons dilemma makes it clear that I believe that much myside-driven polarization is the result of rational individual processing. Just as the defection response is dominant (and thus individually rational) in the traditional prisoner’s dilemma, so is displaying myside bias in Kahan’s communications commons dilemma.
My emphasis on this particular communications dilemma makes it not quite right to say, as Levy does, that the central contention of the book is that “globally irrational (albeit locally rational) myside bias explains political polarization.” Yes, globally irrational myside bias does contribute to political polarization, but so do cases where individuals are totally rational from an epistemic point of view. The purpose of the book was not to partition all of the components of political polarization (that’s a large task, and one for a political scientist in any case), but to discuss different types of myside bias.
Regarding polarization, some of it is driven by instrumentally rational processing of various social and group considerations (the book discusses many perspectives: identity-protective cognition, expressive rationality, symbolic utility, argumentation theory, and others). Polarization can arise in situations where the epistemic processing is impeccable. But it can also be exacerbated by epistemic processing that is overly mysided. The latter is the category of processing on which Levy wishes to focus, and here we do not concur. But before turning to that disagreement, I should mention a few other points of convergence.
We agree that the solution to belief polarization is to harness myside bias and create environments where it can help promote the convergence toward truth rather than impede it. We agree that the type of environments that best harness myside bias are those with considerable intellectual diversity. We agree that promoting truth is best done through managing the environment in which thinkers are embedded rather than attempting to change individual thinking dispositions.
We even agree that there are many situations in which Type 1 processing outperforms Type 2 processing. Most contemporary dual-process theories contain this assumption (Evans and Stanovich 2013). However, it is not entirely reassuring to know that many more situations in life are benign than hostile, because there is a type/token problem here when it comes to costs and benefits. We cannot dismiss the need for Type 2 thinking by saying that heuristics will get a “close enough” answer 98 percent of the time, because the 2 percent of instances where heuristics lead us seriously astray may be critical to our lives. This point is captured in a Money Magazine interview with Ralph Wanger, a leading mutual fund manager. Wanger said that
… [T]he point is, 99 percent of what you do in life I classify as laundry. It’s stuff that has to be done, but you don’t do it better than anybody else, and it’s not worth much. Once in a while, though, you do something that changes your life dramatically. You decide to get married, you have a baby … these rare events tend to dominate things (Zweig 2007, 102).
In terms of raw numbers, these might represent only 20 or 30 decisions out of the thousands we have made throughout our lives. But the thousands are just the “laundry of life,” to use Wanger’s phrase. The 20 or 30 “nonlaundry” decisions are small in number and nonrecurring, and thus require Type 2 processing. A review of Kahneman’s (2011) popular book on dual systems got it just right: “If you’ve had 10,000 hours of training in a predictable, rapid-feedback environment—chess, firefighting, anesthesiology—then blink. In all other cases, think” (Holt 2011, 17). Humans are full of highly rational System 1 capabilities, but relying on Type 1 processing can be costly in hostile environments that are stripped of recurring, overlearned cues.
Levy and I do have an important disagreement, as he says, but even it starts at a point of agreement. I agree with Levy that certain kinds of higher-order evidence are indeed evidence that warrants being projected onto the assessment of the likelihood ratio. I have no problem with many of the examples that Levy gives. Certainly the attitudes of those we trust are useful higher-order evidence, as are sources that are plugged into “chains of testimony that trace back to genuine experts,” to use Levy’s phrase. But Levy admits that “of course there are plenty of bad cases in which information degrades across a chain of transmission.” And this is the source of our disagreement, because I do not think that an ideology or a political party, per se, is connected strongly enough to links that preserve a connection to genuine experts—particularly given the pragmatic purposes of political parties.
Forming and Projecting the Prior
First, though, I will mention a caveat to this disagreement. It is a reiteration of a point from my previous reply—perhaps a tedious reiteration, but the distinction is slippery enough that it is easy to lose in this discussion. It is the distinction between forming—or choosing—the prior and projecting the prior. When I say choosing the prior I simply mean to describe the inputs into the prior probability, P(H). The phrase “projecting the prior” is employed to mean using the prior probability as an aid in the determination of the likelihood ratio (LR). Because I share the general view that few or no restrictions should be placed on the determinants of the prior, I have no objection to the use of ideology or party identification in the determination of the prior. I do balk at calling it normative when these factors are permitted, in addition, to influence the diagnosticity (LR) of new evidence.
The distinction is important, but it becomes fuzzy when Levy says “Stanovich thinks that we are globally irrational when the priors that we (rationally) project in updating our beliefs are determined by our ‘worldview’. Instead, our priors should be determined by evidence.” I don’t claim that our priors, per se, have to be determined by evidence. The issue concerns not the determination of the prior, but whether it is going to be projected. Projection in this manner (to help assess the LR) will wildly disrupt truth convergence in a diverse community, whereas simply using one’s worldview to derive a prior will not. That’s why I license the latter as rational but not the former.
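The stakes of the distinction can be made concrete with a toy simulation (a sketch of my own for this reply; the total-discounting rule below is a deliberately extreme caricature, not a model taken from the book or from Levy). Two agents begin with opposed, worldview-driven priors about which of two hypotheses is true and then observe a long run of shared evidence. When the worldview fixes only the prior, Bayesian updating washes the initial disagreement out and both agents converge on the truth. When the prior is also projected onto the likelihood ratio (here, by treating disconfirming evidence as nondiagnostic), the very same stream of evidence leaves the agents permanently polarized:

```python
import math
import random

random.seed(0)

TRUE_P = 0.7           # the world: evidence favors H1 (70% "heads")
P_H1, P_H2 = 0.7, 0.3  # probability of "heads" under hypotheses H1 and H2

def update(log_odds, heads, project=False):
    """One Bayesian update of the log-odds of H1 on a single observation.

    With project=False, the worldview enters only through the prior: the
    likelihood ratio (LR) is fixed by the hypotheses themselves.  With
    project=True, the prior is also 'projected' onto the LR: evidence
    that cuts against the currently favored hypothesis is treated as
    nondiagnostic (an extreme caricature of mysided assessment of
    diagnosticity, used here for illustration only)."""
    lr = (P_H1 / P_H2) if heads else ((1 - P_H1) / (1 - P_H2))
    if project and (lr > 1) != (log_odds > 0):
        lr = 1.0  # disconfirming evidence is denied any diagnosticity
    return log_odds + math.log(lr)

def belief(log_odds):
    """Convert log-odds back to a probability for H1."""
    return 1 / (1 + math.exp(-log_odds))

# Opposed worldview-driven priors: P(H1) = 0.9 versus P(H1) = 0.1
start = {"pro": math.log(9), "anti": -math.log(9)}
honest, projecting = dict(start), dict(start)

for _ in range(300):  # a long run of shared, publicly available evidence
    heads = random.random() < TRUE_P
    honest = {k: update(v, heads) for k, v in honest.items()}
    projecting = {k: update(v, heads, project=True) for k, v in projecting.items()}

honest_beliefs = {k: belief(v) for k, v in honest.items()}
projecting_beliefs = {k: belief(v) for k, v in projecting.items()}
print(honest_beliefs)      # both agents converge on the true hypothesis, H1
print(projecting_beliefs)  # the agents stay polarized despite identical evidence
```

The agents who use their worldview only to set the prior end up agreeing on the true hypothesis; the agents who project the prior onto the LR end up more polarized than they began. That is the sense in which projection disrupts truth convergence in a diverse community while merely worldview-derived priors do not.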
But despite what our earlier exchange might have implied, I am open to movement on this, and I could imagine being affected by empirical evidence on the issue. For instance, I would be less queasy about projecting in such a case if I could easily imagine situations in which a worldview or ideology would contain nonredundant evidence (including higher-order evidence) not already in the prior. As I argued in chapter 4 of the book (Stanovich 2021b), there is plenty of evidence that an individual’s personal experiences and genetic predispositions determine things like worldviews and political party affiliation. The automatically assimilated information that people have on a particular issue is likely driven by the same distal causes that determine their worldview or political party. And even to the extent that the worldview or ideology did contain nonredundant information, I can imagine too many instances in which that information would not be true evidence (as in “chains of testimony that trace back to genuine experts”) but instead expedient connections formed in order to win converts (consistent with my discussion of memes in chapter 4).
In chapter 6 of the book I outlined my skepticism toward political parties that bundle together positions on social issues that seem to be unconnected, or at least in principle independent: the conservatives who bundle an anti-abortion position with support for capital punishment—along with the liberals who bundle a pro-abortion position with abhorrence of capital punishment. Such expedient bundling is one of the things that makes me skeptical that an ideology or a party affiliation is strongly enough connected to “chains of testimony that trace back to genuine experts” that I would feel confident using the worldview to determine the likelihood ratio of new evidence (as opposed to using the worldview simply to determine the prior). I am open to reducing this skepticism, however, and if there is evidence that would drive me in that direction, I am sure it will be in Levy’s (2022a) new book, which I await with interest.
Keith E. Stanovich, email@example.com, is Professor Emeritus of Applied Psychology and Human Development at the University of Toronto and lives in Portland, Oregon. His latest book is The Bias That Divides Us: The Science and Politics of Myside Thinking (MIT Press).
Evans, Jonathan St. B. T. and Keith E. Stanovich. 2013. “Dual-Process Theories of Higher Cognition: Advancing the Debate.” Perspectives on Psychological Science 8 (3): 223–241.
Holt, Jim. 2011. “Two Brains Running.” New York Times Book Review, November 27: 16–17.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus & Giroux.
Levy, Neil. 2022a. Bad Beliefs: Why They Happen to Good People. Oxford: Oxford University Press.
Levy, Neil. 2022b. “The Bias that Unites Us: A Reply to Keith Stanovich.” Social Epistemology Review and Reply Collective 11 (1): 14–17.
Levy, Neil. 2021. “Is Myside Bias Irrational? A Biased Review of The Bias That Divides Us.” Social Epistemology Review and Reply Collective 10 (10): 31–38.
Stanovich, Keith E. 2021a. “A Rational Disagreement about Myside Bias.” Social Epistemology Review and Reply Collective 10 (12): 48–57.
Stanovich, Keith E. 2021b. The Bias That Divides Us: The Science and Politics of Myside Thinking. Cambridge, MA: MIT Press.
Zweig, Jason. 2007. “Winning the Home Run Hitter’s Game.” Money Magazine, February: 102.