Misinformation and the Limits of Individual Responsibility, Part Two, Boyd Millar

3. Some Alternative Proposals

If it isn’t reasonable to expect individuals to avoid social media altogether, and if merely supplementing your social media use with high-quality news consumption doesn’t largely protect you from the negative impacts of misinformation, then perhaps individuals ought to take certain preventative measures before using social media. For instance, perhaps individuals ought to cultivate the sort of cognitive traits that would enable them to remain largely unaffected by the misinformation they encounter on social media; or perhaps individuals ought to employ certain strategies or techniques while consuming social media to avoid being taken in by the falsehoods they come across.

Image credit: Lauren Walker / Truthout via Flickr / Creative Commons

Article Citation:

Millar, Boyd. 2021. “Misinformation and the Limits of Individual Responsibility.” Social Epistemology Review and Reply Collective 10 (12): 8–21. https://wp.me/p1Bfg0-6kD.

🔹 The PDF of the article gives specific page numbers.

Editor’s Note: Boyd Millar’s article will be presented online in two parts. Below, please find Part Two. Please refer to Part One of the article. The PDF includes the entire article.

This article replies to:

❧ Croce, Michel and Tommaso Piazza. 2021. “Consuming Fake News: Can We Do Any Better?” Social Epistemology. 1–10. https://doi.org/10.1080/02691728.2021.1949643.

Articles in this dialogue:

❦ Coady, David. 2019. “The Trouble With ‘Fake News’.” Social Epistemology Review and Reply Collective 8 (10): 40–52.

❦ Millar, Boyd. 2019. “The Information Environment and Blameworthy Belief.” Social Epistemology 33 (6): 525–537.

❦ Heersmink, Richard. 2018. “A Virtue Epistemology of the Internet: Search Engines, Intellectual Virtues and Education.” Social Epistemology 32 (1): 1–12.

Some recent research on epistemic vices suggests a proposal of the former sort. Meyer, Alfano, and de Bruin found that certain epistemic vices—in particular, indifference, or “lack of motivation to find the truth,” and rigidity, or “insensitivity to evidence”—are strongly correlated with acceptance of COVID-19 misinformation (2021, 5–6). The epistemic vices at issue were measured using the authors’ Epistemic Vice Scale: for instance, indifference is measured by the extent to which subjects agree with statements such as “I am not very interested in understanding things,” and rigidity is measured by the extent to which subjects agree with statements such as “I often have strong opinions about issues I don’t know much about,” and “I tend to feel sure about my views even if I don’t have much evidence” (2021, 5). Meyer, Alfano, and de Bruin discovered that individuals who received the highest scores on the Epistemic Vice Scale were far more likely to believe COVID-19 misinformation than individuals who received lower scores (2021, 14). In fact, they found that epistemic vice was more strongly correlated with susceptibility to COVID-19 misinformation than any other demographic or psychological trait they measured, including age, political affiliation, need for cognition, and performance on the Cognitive Reflection Test (2021, 14–16).

Given this strong correlation, one might conclude that individuals’ doxastic attitudes are negatively impacted by the misinformation they consume on social media because they exhibit the sorts of epistemic vices at issue. And if so, this fact would suggest a method that social media users might reasonably be expected to employ in order to protect themselves from the negative impacts of misinformation: individuals should take steps to eliminate or mitigate the relevant epistemic vices. In other words, if individuals believe the misinformation they encounter on social media because they exhibit indifference and rigidity, then a simple method they could use to protect themselves from misinformation would be to become less indifferent and less rigid. So, if there are strategies one can use to reduce one’s epistemic vices, then the present research suggests promising educational remedies: perhaps individuals can be protected from misinformation by reducing or eliminating their epistemic vices with the right kind of training.[1]

However, there are at least three significant difficulties with this proposal. First and foremost, Meyer, Alfano, and de Bruin’s research only establishes a correlation between epistemic vice and accepting particularly implausible misinformation. Their study utilized falsehoods concerning COVID-19 such as “adding pepper to your meals prevents COVID-19,” “5G mobile networks spread COVID-19,” and “exposing yourself to the sun or to temperatures higher than 77°F prevents the coronavirus disease” (2021, 9). These claims were taken from a World Health Organization “myth-busting” webpage, so presumably they had spread widely online; but by no means are these claims representative of the kind of misinformation that reaches the most people on social media and has the biggest impact on what people believe.

For instance, the present research provides no reason to think that becoming less indifferent and rigid would prevent someone from believing that the COVID-19 vaccines are dangerous and ineffective, or that voter fraud was rampant in the 2020 Presidential election. Consequently, even if there were some method that individuals could reasonably be expected to employ that would eliminate their epistemic vices, doing so would not largely protect them from much of the most consequential misinformation that they are likely to encounter while using social media.

A second difficulty is that Meyer, Alfano, and de Bruin’s research focuses exclusively on belief in misinformation. But, as we’ve already seen (§2), the falsehoods you are exposed to have a systematic negative impact on the accuracy of your doxastic attitudes even when you don’t believe them. So, even if you score extremely low on indifference and rigidity, and even if you recognize that a particular piece of misinformation is indeed false, your grip on the truth will still be undermined each time you are exposed to it. Moreover, your lack of epistemic vices will not prevent the misinformation you encounter on social media from counteracting the benefits that exposure to accurate information would otherwise have.

A third difficulty is that it’s not clear whether there are any effective methods via which individuals could eliminate the relevant epistemic vices. Individuals with the vices at issue are the kind of people who freely admit “I often have strong opinions about issues I don’t know much about,” and “I tend to feel sure about my views even if I don’t have much evidence.” What kind of training, or therapy, could convince such a person that, in fact, one should only believe what one’s evidence supports? Presumably individuals could be prevented from becoming indifferent and rigid with the right sort of guidance; but it’s not obvious that someone who already possesses these vices will be able (and motivated) to change such perverse views of permissible epistemic conduct. And so, if these vices can’t be eliminated, then—even if we were to grant that eliminating the relevant epistemic vices would protect individuals from most misinformation—individuals don’t have a method they can reasonably be expected to employ that would largely protect them from the negative impact of misinformation.

Alternatively, one might think that, in order to protect themselves from misinformation, individuals needn’t dramatically overhaul their firmly established character traits—perhaps they simply need to exercise more care or vigilance when consuming information via social media. Recent research by Gordon Pennycook and colleagues suggests a proposal of this sort. For instance, Pennycook and Rand (2019) found that individuals who are more prone to engage in the right sort of reflection—as measured by performance on the Cognitive Reflection Test—are better able to distinguish accurate news stories from stereotypical fake news stories. And Bago, Rand, and Pennycook (2020) found that when individuals take the time to engage in deliberation, they are better able to identify fake news headlines as inaccurate. Summarizing a significant quantity of related research, Pennycook and Rand maintain that one of the principal reasons individuals fail to recognize that fake news stories are false is that “they do not stop to reflect sufficiently on their prior knowledge (or have insufficient or inaccurate prior knowledge)” (2021, 393).

Such research, then, suggests a method that social media users might reasonably be expected to employ in order to protect themselves from the negative impacts of misinformation: when consuming information via social media, individuals should engage in sufficient reflection and deliberation—they should be careful to bring their stored knowledge to bear as a kind of misinformation filter. If such a method were able to protect individuals from misinformation, then promising educational remedies might exist: for instance, Pennycook and Rand suggest that educational “interventions that are directed at making the public more thoughtful consumers of news media may have promise” (2019, 48). More generally, one might think that critical thinking training might successfully encourage large numbers of people to engage in the requisite sort of reflection while using social media.[2]

However, there are at least three significant difficulties with this proposal. The first is that, once again, the research at issue is focused exclusively on belief in misinformation. Even if we grant that there are methods by which individuals can increase their tendency to engage in the relevant sort of reflection while using social media, the best outcome we could expect is that they would end up believing fewer false stories than they would have otherwise. But, again, the falsehoods we encounter have systematic negative impacts on the accuracy of our doxastic attitudes even when we avoid believing them.

A second difficulty is that it’s not clear whether there are any effective methods via which individuals can increase their tendency to engage in the relevant sort of reflection. It seems plausible that educational interventions could be designed that train individuals to take the time to reflect and deliberate while consuming information. But whether any such intervention will ultimately be effective is an open empirical question—at present there simply isn’t good evidence that educational interventions improve critical thinking abilities over the long term.[3] (As Pennycook and Rand (2021, 397) explain, there is considerable evidence that exposing individuals to “accuracy prompts”—nudges that remind social media users to focus their attention on accuracy—significantly improves the rate at which they identify fake news stories. But such an intervention is a structural rather than an educational remedy—the purpose is to make the information environment safer to navigate, not to bring about relatively long-term changes to individuals’ behavior or cognitive traits.)

Finally, the most fundamental difficulty with the present proposal is that reflection and deliberation can only hope to protect you from a small fraction of the misinformation you are likely to encounter on social media. Taking the time to reflect on your existing knowledge while consuming social media will only protect you from misinformation so long as the relevant false claims are inconsistent with what you already know. But, first, in many cases, you simply won’t possess any relevant beliefs. Some of the most consequential misinformation that circulates widely on social media concerns issues that most people don’t know much about; for instance, most people don’t know much about the science of COVID-19 vaccines, or the details of election security, and so they can’t filter out many false claims concerning these topics. And, second, in many cases, the misinformation you encounter will be consistent with your existing beliefs (sometimes because your relevant existing beliefs will be false). For instance, research has shown that individuals are much more likely to judge that a stereotypical fake news story is accurate when it conforms to their political commitments: such falsehoods seem more plausible to someone with the relevant background beliefs.[4]

This point is particularly important in the present context: thanks to personalization algorithms, when you use social media, you will regularly encounter misinformation that is consistent with your existing beliefs. Accordingly, the proposed method—stopping to reflect and deliberate while consuming social media—can only prevent individuals from believing misinformation that happens to be incompatible with true beliefs they already possess; and so, this method offers individuals no protection against a significant proportion of the misinformation they will encounter on social media.

4. Conclusion

The question we started with was: are there methods that most individuals can reasonably be expected to employ that would largely protect them from the negative impact that encountering misinformation on social media would otherwise have on their doxastic attitudes? We’ve now seen that there are compelling reasons to conclude that the answer is “no.” There are methods available to social media users that would likely yield improvements: if you consume more high-quality traditional news content, suppress certain epistemic vices, or take the time to reflect and deliberate while consuming social media content, you will likely end up with fewer false beliefs than you would have otherwise. But such improvements would be marginal at best. Whatever else you do, so long as you regularly utilize social media, you will be repeatedly exposed to misinformation that is strongly consistent with what you already believe, and that is endorsed by individuals and groups you admire and trust. And, as such, you are likely to end up believing at least some portion of this misinformation; and the misinformation that you avoid believing will still have a systematic negative impact on the accuracy of your doxastic attitudes. No ordinary human being can inhabit such an information environment without suffering significant harm.

Accordingly, any educational intervention designed to encourage large numbers of people to employ one of the aforementioned methods would likely have only a minor impact—and so would not be worth the expense and effort. So, while educational approaches are compatible with structural approaches, and are likely to do some good, we have independent reasons for concluding that they are not worth pursuing. Instead, in order to effectively combat the negative impacts of misinformation distributed via social media, rather than focus on individuals, we ought to focus on the information environment itself. In particular, we ought to focus on whether we can identify (morally and politically acceptable) modifications to social media platforms that would significantly limit the distribution of misinformation, or that would minimize the impact misinformation has on ordinary users.

References

Avaaz. 2020. “How Facebook Can Flatten the Curve of the Coronavirus Infodemic.” Avaaz Report, April 15, 2020. https://avaazimages.avaaz.org/facebook_coronavirus_misinformation.pdf.

Avaaz. 2021. “Facebook: From Election to Insurrection.” Avaaz Report, March 18, 2021. https://avaazimages.avaaz.org/facebook_election_insurrection.pdf.

Bago, Bence, David Rand, and Gordon Pennycook. 2020. “Fake News, Fast and Slow: Deliberation Reduces Belief in False (but Not True) News Headlines.” Journal of Experimental Psychology: General 149 (8): 1608–1613.

Benkler, Yochai, Robert Faris, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. New York: Oxford University Press.

Benkler, Yochai, Casey Tilton, Bruce Etling, Hal Roberts, Justin Clark, Robert Faris, Jonas Kaiser, and Carolyn Schmitt. 2020. “Mail-In Voter Fraud: Anatomy of a Disinformation Campaign.” Berkman Center Research Publication No. 2020-6. https://cyber.harvard.edu/publication/2020/Mail-in-Voter-Fraud-Disinformation-2020.

Boykoff, Maxwell and Jules Boykoff. 2004. “Balance as Bias: Global Warming and the US Prestige Press.” Global Environmental Change 14 (2): 125–136.

Bradshaw, Samantha, Philip Howard, Bence Kollanyi, and Lisa-Maria Neudert. 2020. “Sourcing and Automation of Political News and Information over Social Media in the United States, 2016-2018.” Political Communication 37 (2): 173–193.

Calvillo, Dustin, Bryan Ross, Ryan Garcia, Thomas Smelter, and Abraham Rutchick. 2020. “Political Ideology Predicts Perceptions of the Threat of COVID-19 (and Susceptibility to Fake News About It).” Social Psychological and Personality Science 11 (8): 1119–1128.

Coady, David. 2019. “The Trouble With ‘Fake News’.” Social Epistemology Review and Reply Collective 8 (10): 40–52.

Croce, Michel and Tommaso Piazza. 2021. “Consuming Fake News: Can We Do Any Better?” Social Epistemology. 1–10. https://doi.org/10.1080/02691728.2021.1949643.

El Soufi, Nada and Beng Huat See. 2019. “Does Explicit Teaching of Critical Thinking Improve Critical Thinking Skills of English Language Learners in Higher Education? A Critical Review of Causal Evidence.” Studies in Educational Evaluation 60: 140–162.

Fazio, Lisa, Nadia Brashier, B. Keith Payne, and Elizabeth Marsh. 2015. “Knowledge Does Not Protect Against Illusory Truth.” Journal of Experimental Psychology: General 144 (5): 993–1002.

Fischer, Sara. 2018. “92% of Republicans Think Media Intentionally Reports Fake News.” Axios, June 27, 2018. https://www.axios.com/trump-effect-92-percent-republicans-media-fake-news-9c1bbf70-0054-41dd-b506-0869bb10f08c.html.

Gordon, Andrew, Susanne Quadflieg, Jonathan Brooks, Ullrich Ecker, and Stephan Lewandowsky. 2019. “Keeping Track of ‘Alternative Facts’: The Neural Correlates of Processing Misinformation Corrections.” NeuroImage 193: 46–56.

Heersmink, Richard. 2018. “A Virtue Epistemology of the Internet: Search Engines, Intellectual Virtues and Education.” Social Epistemology 32 (1): 1–12.

Jerit, Jennifer and Yangzi Zhao. 2020. “Political Misinformation.” Annual Review of Political Science 23: 77–94.

Lewandowsky, Stephan, Ullrich Ecker, Colleen Seifert, Norbert Schwarz, and John Cook. 2012. “Misinformation and Its Correction: Continued Influence and Successful Debiasing.” Psychological Science in the Public Interest 13 (3): 106–131.

Loomba, Sahil, Alexandre de Figueiredo, Simon Piatek, Kristen de Graaf, and Heidi Larson. 2021. “Measuring the Impact of COVID-19 Vaccine Misinformation on Vaccination Intent in the UK and USA.” Nature Human Behaviour 5: 337–348.

Mercier, Hugo. 2017. “How Gullible Are We? A Review of the Evidence from Psychology and Social Science.” Review of General Psychology 21 (2): 103–122.

Merkley, Eric. 2020. “Are Experts (News)Worthy? Balance, Conflict, and Mass Media Coverage of Expert Consensus.” Political Communication 37 (4): 530–549.

Meyer, Marco, Mark Alfano, and Boudewijn de Bruin. 2021. “Epistemic Vice Predicts Acceptance of Covid-19 Misinformation.” Episteme. 1–22.

Millar, Boyd. 2019. “The Information Environment and Blameworthy Belief.” Social Epistemology 33 (6): 525–537.

Morning Consult/Politico. 2021. “National Tracking Poll #2110134.” https://www.politico.com/f/?id=0000017c-c00e-d3c9-a77c-cb9eab6a0000&nname=playbook&nid=0000014f-1646-d88f-a1cf-5f46b7bd0000&nrid=0000014e-f115-dd93-ad7f-f91513e50001&nlid=630318.

Newman, Nic, Richard Fletcher, Anne Schulz, Simge Andı, Craig T. Robertson, and Rasmus Kleis Nielsen. 2021. “Reuters Institute Digital News Report 2021.” https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2021-06/Digital_News_Report_2021_FINAL.pdf.

Pennycook, Gordon and David Rand. 2021. “The Psychology of Fake News.” Trends in Cognitive Sciences 25 (5): 388–402.

Pennycook, Gordon and David Rand. 2019. “Lazy, Not Biased: Susceptibility to Partisan Fake News is Better Explained by Lack of Reasoning than by Motivated Reasoning.” Cognition 188: 39–50.

Pennycook, Gordon, Tyrone Cannon, and David Rand. 2018. “Prior Exposure Increases Perceived Accuracy of Fake News.” Journal of Experimental Psychology: General 147 (12): 1865–1880.

Porter, Ethan and Thomas Wood. 2019. False Alarm: The Truth about Political Mistruths in the Trump Era. Cambridge: Cambridge University Press.

Sindermann, Cornelia, Andrew Cooper, and Christian Montag. 2020. “A Short Review on Susceptibility to Falling for Fake Political News.” Current Opinion in Psychology 36: 44–48.

Swire, Briony, Adam Berinsky, Stephan Lewandowsky, and Ullrich Ecker. 2017. “Processing Political Misinformation: Comprehending the Trump Phenomenon.” Royal Society Open Science 4 (3): 160802.

Thorson, Emily. 2016. “Belief Echoes: The Persistent Effects of Corrected Misinformation.” Political Communication 33: 460–480.

van der Linden, Sander, Anthony Leiserowitz, Seth Rosenthal, and Edward Maibach. 2017. “Inoculating the Public against Misinformation about Climate Change.” Global Challenges 1: 1600008.

Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. “The Spread of True and False News Online.” Science 359: 1146–1151.

Walter, Nathan, Jonathan Cohen, R. Lance Holbert, and Yasmin Morag. 2020. “Fact-Checking: A Meta-Analysis of What Works and for Whom.” Political Communication 37 (3): 350–375.


[1] Meyer, Alfano, and de Bruin endorse only a more qualified conclusion along these lines: they suggest that if epistemic vice plays a role in the acceptance of misinformation, and “if epistemic vice can be countered using educational or other interventions, then the public health response to COVID-19 may be bolstered by this line of research” (2021, 2–3).

[2] See Heersmink (2018, §5).

[3] See El Soufi and See (2019).

[4] See Pennycook, Cannon, and Rand (2018, 1875), and Pennycook and Rand (2019, 47).


