A Rational Disagreement about Myside Bias, Keith E. Stanovich

Who says that book reviewing is dead? Within just a couple of weeks of the appearance of my new book, The Bias That Divides Us, it received two reviews that were in-depth and theoretically astute—one destined for the American Journal of Psychology by Joachim Krueger (forthcoming) and the other here on the SERRC by Neil Levy (2021). Both display considerable erudition and a deep engagement with the ideas in the book.


Article Citation:

Stanovich, Keith E. 2021. “A Rational Disagreement about Myside Bias.” Social Epistemology Review and Reply Collective 10 (12): 48-57. https://wp.me/p1Bfg0-6nr.

This article replies to:

❧ Levy, Neil. 2021. “Is Myside Bias Irrational? A Biased Review of The Bias that Divides Us.” Social Epistemology Review and Reply Collective 10 (10): 31-38.

🔹 The PDF of the article gives specific page numbers.

I will organize my response to Levy’s review around: one minor point that I would like to clarify; one somewhat minor point of outright disagreement; and, finally, a global point that represents a major disagreement about the book’s theme. I will turn to the two minor issues first.

Memes and Myside Bias

Levy appears to have misunderstood the role that memetics plays in the book. He feels that “it is unnecessary because it doesn’t seem to do any explanatory work that the distinction between testable and distal beliefs doesn’t already do.” However, the primary reason that memes and memetics were brought into the discussion was to show that there is a kind of content-based theory that can account for the odd individual difference findings concerning myside bias—findings so odd that they make myside bias an outlier bias.

It is an outlier bias because virtually all cognitive biases that have been discovered show negative correlations with most measures of cognitive sophistication (Stanovich 2011; Stanovich, West, and Toplak 2016). People of higher intelligence are more likely to avoid most biases, and people scoring higher on various rational thinking dispositions are more likely to avoid biased responding. In addition to these consistent relationships, most of the biases in the literature show some degree of domain generality: those showing more bias on one task or domain tend to show more bias on a different task or in a different domain.

None of these relationships hold for myside bias. Instead, when cognitive sophistication increases, either by increases in intelligence or in thinking dispositions that are correlated with deeper thought, there is not a corresponding decrease in myside bias. Additionally, myside bias shows little domain generality—there is no tendency for those showing high myside bias in one domain to show high myside bias in another domain. All of these findings taken together suggest that to explain variation in myside bias we need to turn from theories that emphasize differences in personal psychology to differences in the content of beliefs themselves. This is precisely what work in memetics and cultural replicator theory suggests.

For a psychologist, that’s a big theoretical step, because social and cognitive psychologists traditionally tend to ask what it is about particular individuals that leads them to have certain beliefs. The causal model is one where the person determines what beliefs to have. Instead, the facts about myside bias lead us to ask a different question: What is it about certain beliefs that leads them to collect many “hosts” for themselves? That is, it might not be people who are characterized by more or less myside bias, but beliefs that differ in the degree of myside bias they engender.

It was to make that theoretical step—to explain the outlier individual difference findings by deemphasizing stable psychological traits and instead emphasizing a theory focused on belief content—that I introduced memetic ideas. It wasn’t primarily to “situate the distinction between convictions and testable beliefs within a broader theory,” as Levy states. He is correct in stating that the memetic ideas are not necessary for a discussion of the conviction/testable distinction, but that wasn’t the prime purpose for introducing memetics in the book. The primary purpose was to introduce a theory that could accommodate the startling individual difference results concerning myside bias.

Levy does not like the memetic account because he says he is “unable to imagine a mechanism whereby memeplexes with the capacity to recognize and fight off undesirable credences could develop.” The language of intentionality (“recognize,” “fight off”) obscures the thinking here. Mindless and unconscious evolution by selection is the actual cause of the properties we see in currently successful memes—properties like the much-discussed lack of falsifiability. Just as in the case of genes, we miss the evolutionary insight if we don’t, in the end, jettison the intentional/anthropomorphic language, which is merely a shorthand. It is only there so that we can say “replicators developed protective coatings of protein to ward off attacks” rather than the awkward “replicators that built vehicles with coatings of protein became more frequent in the population” every single time. The language of genes having “goals” or “interests” can always be cashed out in the language of replicator selection. The same is true for memes. But if we leave in too much of the intentional language, we miss what Dennett (2017) calls one of the great insights contained in the meme concept—that cultural artifacts can be built through a series of unconscious decisions rather than through what Dennett calls “conscious uptake.”

Unconscious Processing

I don’t agree with Levy’s statement that “largely unconscious social learning might lead to credences just as accurate as those shaped by conscious reflection.” While both mechanisms lead to accurate information more often than not, reflective thought is essential in environments that are hostile rather than benign. A benign environment is one that contains useful (that is, diagnostic) cues that can be exploited by various heuristics. To be classified as benign, an environment must also contain no other individuals who will adjust their behavior to exploit those relying only on heuristic processing. In contrast, a hostile environment for heuristics is one in which there are few cues that are usable by unconscious processes—or there are misleading cues. Also, an environment can turn hostile for a user of heuristic processing when other agents discern the simple cues that are being used and arrange them for their own advantage (for example, advertisements, or the strategic design of supermarket floor space in order to maximize revenue).

The modern world tends to create situations where the default values of evolutionarily adapted cognitive systems are not optimal. This puts a premium on the use of reflective processing capacity to override automatic responses (Kahneman 2011; Stanovich 2004, 2011). Modern technological societies continually spawn situations where humans must decontextualize information—where they must deal abstractly and in a depersonalized manner with information rather than in the context-specific way of autonomous processing modules. The abstract tasks studied by the heuristics and biases researchers often accurately capture this real-life conflict. Additionally, market economies contain agents who will exploit nonreflective responding for profit (better buy that “extended warranty” on a $150 electronic device!). This again puts a premium on overriding automatic responses that will be exploited by others in a market economy. Modernity’s increasing hostility toward humans who overweight the unconscious mind was one reason that Kahneman and Tversky’s heuristics and biases research program emphasized hostile environments. So if Levy is right, then they had better take Danny Kahneman’s Nobel Prize back (and Richard Thaler’s as well!).

And finally, on social learning, Levy caricatures my view. He says that, regarding beliefs formed by unconscious social learning, there is no reason “to conclude that they are therefore irrational.” This statement implies that I am calling all the products of unconscious social learning irrational, something that I of course do not do in the book or in any other publication. This framing of my position is one example of a major difference between Levy and me in how we see the book’s major themes: Levy seems to view me as making very broad attributions of irrationality. In fact, the attribution of irrationality is not what I see as a major theme of the book. But somehow Levy does view it that way, so I will turn to that next.

Where Did All This Irrationality Come From?

Levy seems to view the attribution of irrationality to myside bias as a major theme of the book. His essay is titled as a question, but littered throughout the review is the puzzling conclusion that I am saying this repeatedly throughout the volume: “Stanovich insists that myside bias is often irrational”; “He also thinks that myside bias is irrational”; “but that’s irrational (Stanovich claims)”; “I’ve focused on Stanovich’s claim that myside bias is often irrational”; “His defence of the irrationality of myside bias.” This is all very puzzling to me because I spend an entire chapter of the book discussing normative issues and come to the conclusion that it is very difficult to attribute irrationality to myside bias in most situations. My summary, at the end of chapter 2, is:

The analyses we have reviewed in this chapter confirm the individual rationality of using prior beliefs to aid in the evaluation of new evidence (Koehler 1993). They support the natural tendency to use one’s own perspective in interpreting the meaning of new data (Jern et al. 2014). They confirm the rationality of our epistemic activities being in part determined by our larger instrumental goals. And finally, they confirm the rationality of taking meaningful group affinities into account when we update beliefs (52-53).

This summary seems to fly in the face of Levy’s blanket assertion that I “insist that myside bias is often irrational.” The paragraph above lists several large situation types where I argue that irrationality cannot be attributed to the display of myside bias. Indeed, Krueger (2022), in his review of my book, attributes to me a stance opposite to that conjectured by Levy—that I am on the verge of claiming that no kind of myside bias is irrational and that I am nervous about a conclusion that extreme: “It is not easy to determine whether these restrictive conditions are met, and Stanovich expresses some nervousness over the possibility that rationality might dissolve as a tractable construct.”

Krueger’s (2022) attribution here is entirely correct. He discerns that I have taken the reader through a variety of situations where we might want to call myside bias irrational at the level of the individual, and that in almost all of these situations the charge of non-normativity is debunked. He also correctly discerns that I am nervous about this state of affairs—that I would like to demarcate at least some small partitioning of myside reasoning states that we could sanction as biased, but that I end up admitting that it is hard to do so.

I think that Levy and I end up being so far apart in our interpretation of the book’s major theme because of a concatenation of misunderstandings. First, Levy begins with the claim that I suggest that we “not defer to the consensus” of scientists and experts. This is not the case. In chapter 2, I discussed the Koehler (1993) study, whose conclusion I agree with. He demonstrated that, when interpreting new experimental evidence on ESP, a working scientist considering this evidence should use scientific consensus in establishing the prior probability that the evidence is legitimate and, furthermore, that such a scientist would be justified in projecting the prior onto the new evidence (that is, in using the prior to help estimate the likelihood ratio of the evidence).

Likewise, in Figure 2.1 of the book, I treat a person’s knowledge base and their worldview as different determinants of their prior probability. It should be clear from the text that expert and scientific consensus are part of a person’s knowledge base and are thus legitimate determinants not only of the prior probability but also, in many situations, legitimate contextual information when evaluating source credibility.

But now let me turn to worldview, which I identify as not the same as expert and scientific consensus. Here is where the second misunderstanding arises, I think, partly due to my failure to stress more strongly in the book a distinction I will make now: choosing the prior and projecting the prior are not the same thing in my usage. When I say choosing the prior in the book, I simply mean to describe the inputs into the prior probability, P(H), in a standard Bayesian analysis. In contrast, the phrase “projecting the prior” is employed to mean using the prior probability to frame the interpretation of new data (Cook and Lewandowsky 2016; Gershman 2019; Jern et al. 2014) and/or using the gap between new data and the prior probability to help ascertain the credibility of the information source (Druckman and McGrath 2019; Gentzkow and Shapiro 2006; Hahn and Harris 2014; Koehler 1993; Tappin et al. 2020). Choosing the prior concerns the determination of P(H), whereas projecting the prior concerns the determination of the likelihood ratio, LR = P(D/H)/P(D/notH).
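
To make the distinction concrete, let me add some purely illustrative numbers that appear nowhere in the book. Suppose my worldview, along with whatever relevant knowledge I have, leads me to set the prior at P(H) = .20, giving prior odds of .25. New data D then arrive that a disinterested evaluator would score as moderately diagnostic in favor of H, say LR = 4. If my worldview has done its work only in choosing the prior, Bayesian updating yields posterior odds of .25 × 4 = 1.0, or P(H) = .50: the uncongenial evidence has moved me substantially. If I also project the prior, the very uncongeniality of D drags my estimate of the likelihood ratio toward 1 (say, LR = 1.2), and I end up with posterior odds of .30, or P(H) of roughly .23. The evidence has been largely defanged before the updating even begins, and it has been defanged by the same worldview that set the prior.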

Thus, I am not arguing that it is irrational to hold convictions or to use them in forming priors (or to use social referencing). I am just saying that it is wrong to project them onto the likelihood ratio in addition to the prior. Levy says that “those people who identify strongly with political parties and are also political junkies defer across the board, but that’s irrational (Stanovich claims).” However, I do not view this as irrational as long as they are deferring in the formation of their prior; what I object to is letting the value of the new evidence be determined by such deference as well.

In short, I share the general view of allowing few or no restrictions on the determinants of the prior probability, P(H). That would definitely include determinants such as a non-empirically derived worldview (a worldview not properly considered part of the knowledge base). Importantly, I do not require a prior of .50 when there is no testable information, as Levy suggests a couple of times. I do question, though, giving epistemic license to projecting onto the likelihood ratio a prior determined purely by worldview. Projection in this manner (to determine the LR) will wildly disrupt truth convergence in a diverse community, whereas simply using one’s worldview to derive a prior will not.
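
The convergence point can be illustrated schematically, again with hypothetical numbers. Imagine two evaluators whose worldviews give them priors of .20 and .80 on the same hypothesis. If both assess each new piece of evidence with a likelihood ratio grounded in how the evidence was generated, then a long run of genuinely diagnostic evidence pushes both of their posterior probabilities toward the truth, and the gap between their credences shrinks even though they started far apart. If instead each projects a worldview-based prior onto the likelihood ratio, the same datum gets scored as LR > 1 by one evaluator and LR < 1 by the other, and the two can update away from each other indefinitely on identical evidence; this is essentially the kind of belief polarization modeled by Cook and Lewandowsky (2016) and analyzed by Jern et al. (2014).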

The distinction I am pressing in chapter 2 is that it is fine to use scientific consensus at two junctures in the evidence evaluation process (during the determination of the prior and to provide context for the evaluation of the likelihood ratio), but a worldview that is ideologically based rather than based on scientific consensus should be used only once—to aid in the determination of the prior probability. Yet, in the book, I display considerable “nervousness” because neither proof B from Koehler (1993) nor the Jern et al. (2014) analysis that I rely on makes this distinction.

In short, at the individual level, I analyze many situations where myside bias has been deemed irrational in the traditional heuristics and biases literature and argue, quite the opposite of what Levy implies, that the earlier literature was wrong in its attribution of irrationality. I reserve judgment in only one case: that of projecting a non-empirical worldview onto new evidence at two different stages.

Thus, in the book, I take a fairly ecumenical and Panglossian stance (see Stanovich 1999, 2004) toward myside bias at the level of the individual person. The heart of my worry about myside bias is at the societal level where, due to the “tragedy of the communication commons,” we cannot converge on the truth even though every party in our collective is rational at the individual level. The tragedy of the communication commons is Kahan’s (2013; Kahan, Peters et al. 2012; Kahan et al. 2017) phrase for the conundrum that results from a society full of people gaining utility from rationally processing evidence with a myside bias but ultimately losing more than they gain, because society overall would be better off if public policy were objective and based on what was actually true.

When everyone processes with a myside bias, the result is a society that cannot converge on the truth. Kahan’s phrase derives from Garrett Hardin’s (1968) famous tragedy of the commons argument, which itself derives from the much-studied prisoner’s dilemma paradigm. The generic situation is one in which, from an individual perspective, the defection response dominates the cooperative response, but if both players make the individually rational response, the payoff for both is low.
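
To fix ideas with illustrative payoffs of my own (not Hardin’s or Kahan’s): suppose each of two players receives 3 units when both cooperate and 1 unit when both defect, while a lone defector receives 5 and the exploited cooperator receives 0. Defection then dominates for each player (5 > 3 if the other cooperates, 1 > 0 if the other defects), yet mutual defection at 1 apiece leaves both worse off than mutual cooperation at 3 apiece. Mysided projection onto public evidence has exactly this structure: each of us does better, holding everyone else’s behavior fixed, by reading the evidence our side’s way, but when we all do it, the commons of credible public knowledge collapses.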

Kahan (2013; Kahan et al. 2017) saw the same logic applying in the domain of the communication of relevant public-policy information. My analyses in chapter 2 positively sanction much myside reasoning, but all of this normatively appropriate individual mysided reasoning has resulted in a fractious and politically divided society that cannot seem to agree on the most basic facts about a host of public policy issues. In the book, I argue that a solution to the problem would have been easier to implement if my analysis of normative considerations had pointed to myside bias as a clearly irrational processing tendency—one that was suboptimal at the level of the individual. We would then have had a clear rationale for mounting educational programs to rid people of the tendency. Unfortunately, I was forced to argue that:

It is also unusually hard to show that most myside processing is normatively inappropriate. Thinking in ways that bolster one’s own group or social connections seems, even in the modern day, to have many instrumental advantages. Even when we choose to focus strictly on epistemic rationality, it appears that much of our myside processing is warranted (125).

This summary is very inconsistent with the charge that I “insist that myside bias is often irrational.” Instead, my argument is that myside bias leads to a commons dilemma. My stance is that it hurts us much more at the societal level than the individual level. In the final chapter of the book I make some systemic recommendations for remedying the commons dilemma by damping down the tendency to project our worldviews onto new evidence.

When I list my prescriptions for avoiding myside bias in chapter 6 (e.g., recognize that you have conflicting values), it is to get people to cooperate in a prisoner’s dilemma—it is to get them to quit defecting and to begin cooperating. This means that I am admitting that their defection is individually rational (since defection is the dominant response in the prisoner’s dilemma). I am urging them to cooperate and not to project a worldview onto evidence in order to attain a better group outcome. But I am not challenging the point that such projection may well be individually rational. The recommendations are all aimed at avoiding the tragedy of the science communication commons by damping down the kind of myside bias that prevents truth convergence. This contrasts with remediation efforts in the heuristics and biases literature, where, for almost every other bias (and there are many), there are individual rationality gains—the result of the training is to make the individual respond in a more normative manner. Remediation of the negative effects of those other biases does not involve dealing with a social dilemma. Myside bias is the opposite: the gains are at the societal level rather than at the individual level.

Finally, there are several sections of Levy’s review where he seems to imply that there is some difference between us when there really is not. For example, he makes the point that rationality and accuracy can dissociate in certain cases and that “Bayesian reasoning may lead different agents rationally to diverge in response to one and the same set of evidence.” That Bayesian reasoning may lead different observers to rationally diverge in their response to the same evidence was precisely the point of my extremely lengthy discussion of the papers of Koehler (1993) and Jern et al. (2014) in chapter 2. So that is not a difference in our views. Additionally, his statement that “pointing out that being guided by our convictions is guaranteed sometimes to lead us astray does not show that we’re irrational to be so guided” contradicts nothing that I have said in the book. Earlier in this reply I acknowledged that convictions are almost always appropriate as an input to the determination of the prior probability, and in the book it is clear that if convictions (as well as scientific consensus) are derived from evidence, they are also sometimes appropriately projected onto the likelihood ratio. So in parts of the review, Levy is unnecessarily seeing differences—he and I are not that far apart.

Academic Psychology Fuels the Tragedy of the Communication Commons

I would reiterate here my criticisms of academic psychology in terms of the institutional recommendations that I make in the book. If we are to avoid the tragedy of the science communication commons in society, our institutions must act to break the prisoner’s dilemma logic of myside bias. Kahan (2016; Kahan et al. 2017) has written eloquently on how we need to decontaminate the discussion of public policies from the poisonous effects of conviction-based myside reasoning. He has argued that we need strong institutions that form a barrier between evidence evaluation and the projection of convictions. But sadly, those institutions—most notably, the media and universities—have failed us in the early 21st century.

In my own discipline, psychology, we teach every undergraduate the logic of how the decontamination is supposed to work. We tell them that science works so well not because scientists themselves are uniquely virtuous (i.e., that they are never biased), but because scientists are immersed in a system of checks and balances—where other scientists with differing biases are there to critique and correct. The bias of investigator A might not be shared by investigator B who will then look at A’s results with a skeptical eye. Likewise, when investigator B presents a result, investigator A will tend to be critical and look at it with a skeptical eye. That’s what we tell the students.

But we often don’t tell them that the demographics of psychology have evolved away from the situation that makes myside cross-checking possible. Myside cross-checking simply doesn’t work when all of the investigators share exactly the same bias. Unfortunately, that is the present situation in academic psychology. Numerous surveys have shown that we are an ideological monoculture (Abrams 2016; Bikales and Goodman 2020; Buss and von Hippel 2018; Cardiff and Klein 2005; Ceci and Williams 2018; Crawford and Jussim 2018; Duarte et al. 2015; Klein and Stern 2005; Langbert 2018; Langbert and Stevens 2020; Lukianoff and Haidt 2018). The pool of investigators is politically homogeneous, and thus we cannot rest assured that there is enough variability in our science to objectively bring myside cross-checking to bear on many charged topics.

It would be a mistake for psychologists to think that there are easy ways around this homogeneity—for instance, that they could just try harder to be objective on an individual basis. An ideological monoculture will not keep psychology honest in this way, because it removes the social milieu of criticism and cross-checking that is essential. It is all too tempting for psychologists to think that they can individually overcome the problem of myside bias—that they can set aside their ideological preferences while doing their science. Ironically, thinking that this is possible would itself be indicative of a well-known bias, the so-called bias blind spot (Pronin 2007; Pronin et al. 2002).

Alas, there is no evidence that the particular type of ideological monoculture that characterizes the social sciences (left/liberal progressivism) is immune to myside bias. A meta-analysis by Ditto et al. (2019) found that myside bias on social and political issues was equally strong on both ends of the ideological spectrum. In fact, the Ditto et al. (2019) findings simply highlight the danger of an academic cognitive elite thinking that they can investigate incendiary political topics on which they themselves have strong feelings without their research being compromised by myside bias. The Ditto et al. (2019) findings show that the particular ideology of the academic cognitive elite is no less prone to myside bias than are the ideologies of the citizens that academics politically oppose. But because of their cognitive ability, and because of their educational backgrounds, the cognitive elites of society will tend to think that their processing of evidence is less driven by myside bias than that of their fellow citizens.

Thus, a combustible brew of facts accounts for the existence of a massive myside bias blind spot among university faculty. Research reviewed in my book has shown that cognitive ability and acquired education are no inoculation against myside bias. The myside bias blind spot in the academy is a recipe for disaster when it comes to studying the psychology of political opponents. The transparency reforms of the open science movement are effective mechanisms for addressing the replication crisis in social science, but they will not create an environment for ideological cross-checking as long as our field remains a monoculture. It’s not just transparency that our field needs; it’s tolerance for other viewpoints. We need to let the other half of the population in.

The ideological uniformity of psychology faculties has led the discipline to become infected with identity politics—a trend which further decreases the ability of the discipline to avoid myside bias in its conclusions. Identity politics entangles many testable propositions with identity-based convictions, transforming positions on policy-relevant facts into badges of group-based convictions that then trigger mysided reasoning.

Obviously, greater intellectual diversity among researchers would be a key corrective, but adversarial collaboration is also critical (Clark and Tetlock 2021). You can’t study contentious topics properly with a lab full of people who think alike. If you try to do so, you will end up creating scale items that produce correlations that are inaccurate by a factor of two—see Stanovich and Toplak (2019) for a description of how my own lab once did exactly that!

Author Information:

Keith E. Stanovich, keith.stanovich@utoronto.ca, is professor emeritus of applied psychology and human development at the University of Toronto and lives in Portland, Oregon. His latest book is: The Bias That Divides Us: The Science and Politics of Myside Thinking (MIT Press).

References

Abrams, Sam. 2016. “Professors Moved Left Since 1990s, Rest of Country Did Not.” Heterodox Academy. January 9. https://heterodoxacademy.org/professors-moved-left-but-country-did-not/.

Bikales, James S. and Jasper G. Goodman. 2020. “Plurality of Surveyed Harvard Faculty Support Warren in Presidential Race.” Harvard Crimson. March 5. https://www.thecrimson.com/article/2020/3/3/faculty-support-warren-president/#disqus_thread.

Kahan, Dan M., Ellen Peters, Maggie Wittlin, Paul Slovic, Lisa Larrimore Ouellette, Donald Braman, and Gregory N. Mandel. 2012. “The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks.” Nature Climate Change 2: 732-735.

Buss, David M. and William von Hippel. 2018. “Psychological Barriers to Evolutionary Psychology: Ideological Bias and Coalitional Adaptations.” Archives of Scientific Psychology 6 (1): 148-158.

Cardiff, Christopher F. and Daniel B. Klein. 2005. “Faculty Partisan Affiliations in all Disciplines: A Voter‐Registration Study.” Critical Review 17 (3-4): 237-255.

Ceci, Stephen J. and Wendy M. Williams. 2018. “Who Decides What is Acceptable Speech on Campus? Why Restricting Free Speech is not the Answer.” Perspectives on Psychological Science 13 (3): 299-323.

Clark, Cory J. and Philip E. Tetlock. 2021. “Adversarial Collaboration: The Next Science Reform.” In Political Bias in Psychology: Nature, Scope, and Solutions edited by Craig L. Frisby, Richard E. Redding, William T. O’Donohue, and Scott O. Lilienfeld, 2-42. New York: Springer.

Cook, John and Stephan Lewandowsky. 2016. “Rational Irrationality: Modeling Climate Change Belief Polarization Using Bayesian Networks.” Topics in Cognitive Science 8 (1): 160-179.

Crawford, Jarret T. and Lee Jussim, eds. 2018. The Politics of Social Psychology. New York: Routledge.

Dennett, Daniel C. 2017. From Bacteria to Bach and Back. New York: Norton.

Ditto, Peter H., Brittany S. Liu, Cory J. Clark, Sean P. Wojcik, Eric E. Chen, Rebecca H. Grady, Jared B. Celniker, and Joanne F. Zinger. 2019. “At Least Bias is Bipartisan: A Meta-Analytic Comparison of Partisan Bias in Liberals and Conservatives.” Perspectives on Psychological Science 14 (2): 273-291.

Duarte, José L., Jarret T. Crawford, Charlotta Stern, Jonathan Haidt, Lee Jussim, and Philip E. Tetlock. 2015. “Political Diversity will Improve Social Psychological Science.” Behavioral and Brain Sciences 38: e130. doi:10.1017/S0140525X14000430.

Gentzkow, Matthew and Jesse M. Shapiro. 2006. “Media Bias and Reputation.” Journal of Political Economy 114 (2): 280-316.

Gershman, Samuel J. 2019. “How to Never be Wrong.” Psychonomic Bulletin and Review 26 (1): 13-28.

Hahn, Ulrike and Adam J.L. Harris. 2014. “What Does it Mean to be Biased: Motivated Reasoning and Rationality.” In Psychology of Learning and Motivation Volume 61 edited by Brian H. Ross, 41-102. Academic Press.

Hardin, Garrett. 1968. “The Tragedy of the Commons: The Population Problem has no Technical Solution; it Requires a Fundamental Extension in Morality.” Science 162 (3859): 1243-1248.

Jern, Alan, Kai-min K. Chang, and Charles Kemp. 2014. “Belief Polarization is not Always Irrational.” Psychological Review 121 (2): 206-224.

Kahan, Dan M. 2016. “The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning is and how to Measure It.” In Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource edited by Robert A. Scott and Stephen M. Kosslyn. doi:10.1002/9781118900772.etrds0417.

Kahan, Dan M. 2013. “Ideology, Motivated Reasoning, and Cognitive Reflection.” Judgment and Decision Making 8 (4): 407-424.

Kahan, Dan M., Ellen Peters, Erica Dawson, and Paul Slovic. 2017. “Motivated Numeracy and Enlightened Self-Government.” Behavioural Public Policy 1: 54-86.

Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Klein, Daniel B. and Charlotta Stern. 2005. “Professors and their Politics: The Policy Views of Social Scientists.” Critical Review 17 (3-4): 257-303.

Koehler, Jonathan J. 1993. “The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality.” Organizational Behavior and Human Decision Processes 56 (1): 28-55.

Krueger, Joachim I. forthcoming. “Twilight of Rationality.” American Journal of Psychology.

Langbert, Mitchell. 2018. “Homogenous: The Political Affiliations of Elite Liberal Arts College Faculty.” Academic Questions 31: 186-197.

Langbert, Mitchell and Sean Stevens. 2020. “Partisan Registration and Contributions of Faculty in Flagship Colleges.” National Association of Scholars. https://www.nas.org/blogs/article/partisan-registration-and-contributions-of-faculty-in-flagship-colleges.

Levy, Neil. 2021. “Is Myside Bias Irrational? A Biased Review of The Bias that Divides Us.” Social Epistemology Review and Reply Collective 10 (10): 31-38.

Lukianoff, Greg and Jonathan Haidt. 2018. The Coddling of the American Mind. New York: Penguin.

Pronin, Emily. 2007. “Perception and Misperception of Bias in Human Judgment.” Trends in Cognitive Sciences 11 (1): 37-43.

Pronin, Emily, Daniel Y. Lin, and Lee Ross. 2002. “The Bias Blind Spot: Perceptions of Bias in Self Versus Others.” Personality and Social Psychology Bulletin 28 (3): 369-381.

Stanovich, Keith E. 2011. Rationality and the Reflective Mind. New York: Oxford University Press.

Stanovich, Keith E. 2004. The Robot’s Rebellion: Finding Meaning in the Age Of Darwin. Chicago: University of Chicago Press.

Stanovich, Keith E. 1999. Who is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Erlbaum.

Stanovich, Keith E., Richard F. West, and Maggie E. Toplak. 2016. The Rationality Quotient: Toward a Test of Rational Thinking. Cambridge, MA: MIT Press.

Tappin, Ben M., Gordon Pennycook, and David G. Rand. 2020. “Thinking Clearly about Causal Inferences of Politically Motivated Reasoning.” Current Opinion in Behavioral Sciences 34: 81-87.


