
Technology and Evil, Brian Martin

SERRC — January 31, 2019

Author Information: Brian Martin, University of Wollongong, bmartin@uow.edu.au.

Martin, Brian. “Technology and Evil.” Social Epistemology Review and Reply Collective 8, no. 2 (2019): 1-14.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-466

A Russian Mil Mi-28 attack helicopter.
Image by Dmitri Terekhov via Flickr / Creative Commons

 

Humans cause immense damage to each other and to the environment. Steven James Bartlett argues that humans have an inbuilt pathology that leads to violence and ecosystem destruction that can be called evil, in a clinical rather than a religious sense. Given that technologies are human constructions, it follows that technologies can embody the same pathologies as humans. An important implication of Bartlett’s ideas is that studies of technology should be normative in opposing destructive technologies.

Introduction

Humans, individually and collectively, do a lot of terrible things to each other and to the environment. Some obvious examples are murder, torture, war, genocide and massive environmental destruction. From the perspective of an ecologist from another solar system, humans are the world’s major pestilence, spreading everywhere, enslaving and experimenting on a few species for their own ends, causing extinctions of numerous other species and destroying the environment that supports them all.

These thoughts suggest that humans, as a species, have been causing some serious problems. Of course there are many individuals and groups trying to make the world a better place, for example campaigning against war and environmental degradation, and fostering harmony and sustainability. But is it possible that by focusing on what needs to be done and on the positives in human nature, the seriousness of the dark side of human behaviour is being neglected?

Here, I address these issues by looking at studies of human evil, with a focus on a book by Steven Bartlett. With this foundation, it is possible to look at technology with a new awareness of its deep problems. This will not provide easy solutions but may give a better appreciation of the task ahead.

Background

For decades, I have been studying war, ways to challenge war, and alternatives to military systems (e.g. Martin, 1984). My special interest has been in nonviolent action as a means for addressing social problems. Along the way, this led me to read about genocide and other forms of violence. Some writing in the area refers to evil, addressed from a secular, scientific and non-moralistic perspective.

Roy Baumeister (1997), a prominent psychologist, wrote a book titled Evil: Inside Human Violence and Cruelty, which I found highly insightful. Studying the psychology of perpetrators, ranging from murderers and terrorists to killers in genocide, Baumeister concluded that most commonly they feel justified in their actions and see themselves as victims. Often they think what they’ve done is not that important. Baumeister’s sophisticated analysis aims to counter the popular perception of evil-doers as malevolent or uncaring.

Baumeister is one of a number of psychologists willing to talk about good and evil. If the word evil feels uncomfortable, then substitute “violence and cruelty,” as in the subtitle of Baumeister’s book, and the meaning is much the same. It’s also possible to approach evil from the viewpoint of brain function, as in Simon Baron-Cohen’s (2011) The Science of Evil: On Empathy and the Origins of Cruelty. There are also studies that combine psychiatric and religious perspectives, such as M. Scott Peck’s (1988) People of the Lie: The Hope for Healing Human Evil.

Another part of my background is technology studies, including being involved in the nuclear power debate, studying technological vulnerability, communication technology, and technology and euthanasia, among other topics. I married my interests in nonviolence and in technology by studying how technology could be designed and used for nonviolent struggle (Martin, 2001).

It was with this background that I encountered Steven James Bartlett’s (2005) massive book The Pathology of Man: A Study of Human Evil. Many of the issues it addresses, for example genocide and war, were familiar to me, but his perspective offered new and disturbing insights. The Pathology of Man is more in-depth and far-reaching than other studies I had encountered, and is worth bringing to wider attention.

Here, I offer an abbreviated account of Bartlett’s analysis of human evil. Then I spell out ways of applying his ideas to technology and conclude with some possible implications.

Bartlett on Evil

Steven James Bartlett is a philosopher and psychologist who for decades studied problems in human thinking. The Pathology of Man was published in 2005 but received little attention. This may partly be due to the challenge of reading an erudite 200,000-word treatise but also partly due to people being resistant to Bartlett’s message, for the very reasons expounded in his book.

In reviewing the history of disease theories, Bartlett points out that in previous eras a wide range of conditions were considered to be diseases, ranging from “Negro consumption” to anti-Semitism. This observation is part of his assessment of various conceptions of disease, relying on standard views about what counts as disease, while emphasising that judgements made are always relative to a framework that is value-laden.

This is a sample portion of Bartlett’s carefully laid out chain of logic and evidence for making a case that the human species is pathological, that is, characterised by disease. In making this case, he is not speaking metaphorically but clinically. The fact that the human species has seldom been seen as pathological is due to humans adopting a framework that exempts them from this diagnosis, which would be embarrassing to accept, at least for those inclined to think of humans as the apotheosis of evolution.

Next stop: the concept of evil. Bartlett examines a wide range of perspectives, noting that most of them are religious in origin. In contrast, he prefers a more scientific view: “Human evil, in the restricted and specific sense in which I will use it, refers to apparently voluntary destructive behavior and attitudes that result in the general negation of health, happiness, and ultimately of life.” (p. 65) In referring to “general negation,” Bartlett is not thinking of a poor diet or personal nastiness but of bigger matters such as war, genocide and overpopulation.

Bartlett is especially interested in the psychology of evil, and canvasses the ideas of classic thinkers who have addressed this issue, including Sigmund Freud, Carl Jung, Karl Menninger, Erich Fromm and Scott Peck. This detailed survey has only a limited return: these leading thinkers have little to say about the origins of evil and what psychological needs it may serve.

So Bartlett turns to other angles, including Lewis Fry Richardson’s classic work quantifying evidence of human violence, and research on aggression by ethologists, notably Konrad Lorenz. Some insights come from this examination, including Richardson’s goal of examining human destructiveness without emotionality and Lorenz’s point that humans, unlike most other animals, have no inbuilt barriers to killing members of their own species.

Bartlett on the Psychology of Genocide

To stare the potential for human evil in the face, Bartlett undertakes a thorough assessment of evidence about genocide, seeking to find the psychological underpinning of systematic mass killings of other humans. He notes one important factor, a factor not widely discussed or even admitted: many humans gain pleasure from killing others. Two other relevant psychological processes are projection and splitting. Projection involves denying negative elements of oneself and attributing them to others, for example seeing others as dangerous, thereby providing a reason for attacking them: one’s own aggression is attributed to others.

Splitting involves dividing one’s own grandiose self-conception from the way others are thought of. “By belonging to the herd, the individual gains an inflated sense of power, emotional support, and connection. With the feeling of group-exaggerated power and puffed up personal importance comes a new awareness of one’s own identity, which is projected into the individual’s conception” of the individual’s favoured group (p. 157). As a member of a group, there are several factors that enable genocide: stereotyping, dehumanisation, euphemistic language and psychic numbing.

To provide a more vivid picture of the capacity for human evil, Bartlett examines the Holocaust, noting that it was neither the only nor the most deadly genocide, but one that, partly because it is so extensively documented, provides plenty of evidence about the psychology of mass killing.

Anti-Semitism was not the preserve of the Nazis, but existed for centuries in numerous parts of the world, and indeed continues today. The long history of persistent anti-Semitism is, according to Bartlett, evidence that humans need to feel prejudice and to persecute others. But at this point there is an uncomfortable finding: most people who are anti-Semitic are psychologically normal, suggesting the possibility that what is normal can be pathological. This key point recurs in Bartlett’s forensic examination.

Prejudice and persecution do not usually bring sadness and remorse to the victimizers, but rather a sense of strengthened identity, pleasure, self-satisfaction, superiority, and power. Prejudice and persecution are Siamese twins: Together they generate a heightened and invigorated belief in the victimizers’ supremacy. The fact that prejudice and persecution benefit bigots and persecutors is often overlooked or denied. (p. 167)

Bartlett examines evidence about the psychology of several groups involved in the Holocaust: Nazi leaders, Nazi doctors, bystanders, refusers and resisters. Nazi leaders and doctors were, for the most part, normal and well-adjusted men (nearly all were men). Most of the leaders were of above-average intelligence, some had very high IQs, and many were well educated and culturally sophisticated. Cognitively they were superior, but their moral intelligence was low.

Bystanders tend to do nothing due to conformity, lack of empathy and low moral sensibility. Most Germans were bystanders to Nazi atrocities, not participating but doing nothing to oppose them.

Next are refusers, those who declined to be involved in atrocities. Contrary to usual assumptions, in Nazi Germany there were few penalties for refusing to join killings; it was just a matter of asking for a different assignment. Despite this, of those men called up to join killing brigades, very few took advantage of this option. Refusers had to take some initiative, to think for themselves and resist the need to conform.

Finally, there were resisters, those who actively opposed the genocide, but even here Bartlett raises a concern, saying that in many cases resisters were driven more by anger at offenders than empathy with victims. In any case, in terms of psychology, resisters were the odd ones out, being disengaged from the dominant ideas and values in their society and being able to be emotionally alone, without peer group support. Bartlett’s concern here meshes with research on why people join contemporary social movements: most first become involved via personal connections with current members, not because of moral outrage about the issue (Jasper, 1997).

The implication of Bartlett’s analysis of the Holocaust is that there is something wrong with humans who are psychologically normal (see also Bartlett, 2011, 2013). When those who actively resist genocide are unusual psychologically, this points to problems with the way most humans think and feel.

Another one of Bartlett’s conclusions is that most solutions that have been proposed to the problem of genocide — such as moral education, cultivating acceptance and respect, and reducing psychological projection — are vague, simplistic and impractical. They do not measure up to the challenge posed by the observed psychology of genocide.

Bartlett’s assessment of the Holocaust did not surprise me because, for one of my studies of tactics against injustice (Martin, 2007), I read a dozen books and many articles about the 1994 Rwandan genocide, in which between half a million and a million people were killed in the space of a few months. The physical differences between the Tutsi and Hutu are slight; the Hutu killers targeted both Tutsi and “moderate” Hutu. It is not widely known that Rwanda is the most Christian country in Africa, yet many of the killings occurred in churches where Tutsi had gone for protection. In many cases, people killed neighbours they had lived next to for years, or even family members. The Rwandan genocide had always sounded horrific; reading detailed accounts to obtain examples for my article, I discovered it was far worse than I had imagined (Martin, 2009).

After investigating evidence about genocide and its implications about human psychology, Bartlett turns to terrorism. Many of his assessments accord with critical terrorism studies, for example that there is no standard definition of terrorism, the fear of terrorism is disproportionate to the threat, and terrorism is “framework-relative” in the sense that calling someone a terrorist puts you in opposition to them.

Bartlett’s interest is in the psychology of terrorists. He is sceptical of the widespread assumption that there must be something wrong with them psychologically, and cites evidence that terrorists are psychologically normal. Interestingly, he notes that there are no studies comparing the psychologies of terrorists and soldiers, two groups that each use violence to serve a cause. He also notes a striking absence: in counterterrorism writing, no one has studied the sorts of people who refuse to be involved in cruelty and violence and who are resistant to appeals to in-group prejudice, which is usually called loyalty or patriotism. By assuming there is something wrong with terrorists, counterterrorism specialists are missing the possibility of learning how to deal with the problem.

Bartlett on War Psychology

Relatively few people are involved in genocide or terrorism except by learning about them via media stories. It is another matter when it comes to war, because many people have lived through a time when their country has been at war. In this century, just think of Afghanistan, Iraq and Syria, where numerous governments have sent troops or provided military assistance.

Bartlett says there is plenty of evidence that war evokes powerful emotions among both soldiers and civilians. For some, it is the time of life when they feel most alive, whereas peacetime can seem boring and meaningless. Although killing other humans is proscribed by most moral systems, war is treated as an exception. There are psychological preconditions for organised killing, including manufacturing differences, dehumanising the enemy, nationalism, group identity and various forms of projection. Bartlett says it is also important to look at psychological factors that prevent people from trying to end wars.

Even though relatively few people are involved in war as combat troops or even as part of the systems that support war-fighting, an even smaller number devote serious effort to trying to end wars. Governments collectively spend hundreds of billions of dollars on their militaries but only a minuscule amount on furthering the causes of peace. This applies to research as well: there is vastly more military-sponsored or military-inspired research than peace-related research. Bartlett concludes that “war is a pathology which the great majority of human beings do not want to cure” (p. 211).

Thinking back over the major wars in the past century, in most countries it has been far easier to support war than to oppose it. Enlisting in the military is seen as patriotic whereas refusing military service, or deserting the army, is seen as treasonous. For civilians, defeating the enemy is seen as a cause for rejoicing, whereas advocating an end to war — except via victory — is a minority position.

There have been thousands of war movies: people flock to see killing on the screen, and the bad guys nearly always lose, especially in Hollywood. In contrast, the number of major films about nonviolent struggles is tiny — what else besides the 1982 film Gandhi? — and seldom do they attract a wide audience. Bartlett sums up the implications of war for human psychology:

By legitimating the moral atrocity of mass murder, war, clothed as it is in the psychologically attractive trappings of patriotism, heroism, and the ultimately good cause, is one of the main components of human evil. War, because it causes incalculable harm, because it gives men and women justification to kill and injure one another without remorse, because it suspends conscience and neutralizes compassion, because it takes the form of psychological epidemics in which dehumanization, cruelty, and hatred are given unrestrained freedom, and because it is a source of profound human gratification and meaning—because of these things, war is not only a pathology, but is one of the most evident expressions of human evil. (p. 225)

The Obedient Parasite

Bartlett next turns to obedience studies, discussing the famous research by Stanley Milgram (1974). However, he notes that such studies shouldn’t even be needed: the evidence of human behaviour during war and genocide should be enough to show that most humans are obedient to authority, even when the authority is instructing them to harm others.

Another relevant emotion is hatred. Although hating is a widespread phenomenon — most recently evident in online harassment (Citron, 2014) — Bartlett notes that psychologists and psychiatrists have given this emotion little attention. Hatred serves several functions, including providing a cause, overcoming the fear of death, and, in groups, helping build a sense of community.

Many people recognise that humans are destroying the ecological web that supports their own lives and those of numerous other species. Bartlett goes one step further, exploring the field of parasitology. Examining definitions and features of parasites, he concludes that, according to a broad definition, humans are parasites on the environment and other species, and are destroying the host at a record rate. He sees human parasitism as being reflected in social belief systems including the “cult of motherhood,” infatuation with children, and the belief that other species exist to serve humans, a longstanding attitude enshrined in some religions.

Reading The Pathology of Man, I was tempted to counter Bartlett’s arguments by pointing to the good things that so many humans have done and are doing, such as everyday politeness, altruism, caring for the disadvantaged, and the animal liberation movement. Bartlett could counter by noting it would be unwise to pay no attention to disease symptoms just because your body has many healthy parts. If there is a pathology inherent in the human species, it should not be ignored, but instead addressed face to face.

Remington 1858 Model Navy .36 Cap and Ball Revolver.
Image by Chuck Coker via Flickr / Creative Commons

 

Technologies of Political Control

Bartlett’s analysis of human evil, including that violence and cruelty are perpetrated mostly by people who are psychologically normal and that many humans obtain pleasure out of violence against other humans, can be applied to technology. The aim in doing this is not to demonise particular types or uses of technology but to explore technological systems from a different angle in the hope of providing insights that are less salient from other perspectives.

Consider “technologies of political control,” most commonly used by governments against their own people (Ackroyd et al., 1977; Wright, 1998). These technologies include tools of torture and execution such as electroshock batons, thumb cuffs, restraint chairs, leg shackles, stun grenades and gallows. They include technologies used against crowds such as convulsants and infrasound weapons (Omega Foundation, 2000). They include specially designed surveillance equipment.

In this discussion, “technology” refers not just to artefacts but also to the social arrangements surrounding these artefacts, including design, manufacture, and contexts of use. To refer to “technologies of political control” is to invoke this wider context: an artefact on its own may seem innocuous but still be implicated in systems of repression. Repression here refers to force used against humans for the purposes of harm, punishment or social control.

Torture has a long history. It must be considered a prime example of human evil. Few species intentionally inflict pain and suffering on other members of their own species. Among humans, torture is now officially renounced by every government in the world, but it still takes place in many countries, for example in China, Egypt and Afghanistan, as documented by Amnesty International. Torture also takes place in many conventional prisons, for example via solitary confinement.

To support torture and repression, there is an associated industry. Scientists design new ways to inflict pain and suffering, using drugs, loud noises, disorienting lights, sensory deprivation and other means. The tools for delivering these methods are constructed in factories and the products marketed around the world, especially to buyers seeking means to control and harm others. Periodically, “security fairs” are held in which companies selling repression technologies tout their products to potential buyers.

The technology of repression does not have a high profile, but it is a significant industry, involving tens of billions of dollars in annual sales. It is a prime cause of human suffering. So what are people doing about it?

Those directly involved seem to have few moral objections. Scientists use their skills to design more sophisticated ways of interrogating, incarcerating and torturing people. Engineers design the manufacturing processes and numerous workers maintain production. Sales agents tout the technologies to purchasers. Governments facilitate this operation, making extraordinary efforts to get around attempts to control the repression trade. So here is an entire industry built around technologies that serve to control and harm defenceless humans, and it seems to be no problem to find people who are willing to participate and indeed to tenaciously defend the continuation of the industry.

In this, most of the world’s population are bystanders. Mass media pay little attention. Indeed, there are fictional dramas that legitimise torture and, more generally, the use of violence against the bad guys. Most people remain ignorant of the trade in repression technologies. For those who learn about it, few make any attempt to do something about it, for example by joining a campaign.

Finally there are a few resisters. There are groups like the Omega Research Foundation that collect information about the repression trade and organisations like Amnesty International and Campaign Against Arms Trade that campaign against it. Journalists have played an important role in exposing the trade (Gregory, 1995).

The production, trade and use of technologies of repression, especially torture technologies, provide a prime example of how technologies can be implicated in human evil. They illustrate quite a few of the features noted by Bartlett. There is no evidence that the scientists, engineers, production workers, sales agents and politician allies of the industry are anything other than psychologically normal. Indeed, it is an industry organised much like any other, except devoted to producing objects used to harm humans.

Nearly all of those involved in the industry are simply operating as cogs in a large enterprise. They have abdicated responsibility for causing harm, a reflection of humans’ tendency to obey authorities. As for members of the public, the psychological process of projection provides a reassuring message: torture is only used as a last resort against enemies such as terrorists. “We” are good and “they” are bad, so what is done to them is justified.

Weapons and Tobacco

Along with the technology of repression, weapons of war are prime candidates for being understood as implicated in evil. If war is an expression of the human potential for violence, then weapons are a part of that expression. Indeed, increasing the capacity of weapons to maim, kill and destroy has long been a prime aim of militaries. So-called conventional weapons include everything from bullets and bayonets to bombs and ballistic missiles, and then there are biological, chemical and nuclear weapons.

Studying weaponry is a way of learning about the willingness of humans to use their ingenuity to harm other humans. Dum-dum bullets were designed to expand on impact so as to cause more horrendous injuries. Brightly coloured land mines can be attractive to young children. Some of these weapons have been banned, while others take their place. In any case, it is reasonable to ask, what was going through the minds of those who conceived, designed, manufactured, sold and deployed such weapons?

The answer is straightforward, yet disturbing. Along the chain, individuals may have thought they were serving their country’s cause, helping defeat an enemy, or just doing their job and following orders. Indeed, it can be argued that scientific training and enculturation serve to develop scientists willing to work on assigned tasks without questioning their rationale (Schmidt, 2000).

Nuclear weapons, due to their capacity for mass destruction, have long been seen as especially bad, and there have been significant mass movements against these weapons (Wittner, 1993–2003). However, the opposition has not been all that successful: thousands of nuclear weapons remain in the arsenals of eight or so militaries, and most people seldom think about them. Nuclear weapons exemplify Bartlett’s contention that most people do not do much to oppose war — even a war that would devastate the earth.

Consider something a bit different: cigarettes. Smoking brings pleasure, or at least relief from craving, to hundreds of millions of people daily, at the expense of a massive death toll (Proctor, 2011). By current projections, hundreds of millions of people will die this century from smoking-related diseases.

Today, tobacco companies are stigmatised and smoking is becoming unfashionable — but only in some countries. Globally, there are ever more smokers and ever more victims of smoking-related illnesses. Cigarettes are part of a technological system of design, production, distribution, sales and use. Though the cigarette itself is less complex than many military weapons, the same questions can be asked of everyone involved in the tobacco industry: how can they continue when the evidence of harm is so overwhelming? How could industry leaders spend decades covering up their own evidence of harm while seeking to discredit scientists and public health officials whose efforts threatened their profits?

The answers draw on the same psychological processes involved in the perpetuation of violence and cruelty in more obvious cases such as genocide, including projection and obedience. The ideology of the capitalist system plays a role too, with the legitimating myths of the beneficial effects of markets and the virtue of satisfying consumer demand.

For examining the role of technology in evil, weapons and cigarettes are easy targets for condemnation. A more challenging case is the wide variety of technologies that contribute to greenhouse gas emissions and hence to climate change, with potentially catastrophic effects for future generations and for the biosphere. The technologies involved include motor vehicles (at least those with internal combustion engines), steel and aluminium production, home heating and cooling, and the consumption of consumer goods. The energy system is implicated, at least the part of it predicated on carbon-based fuels, and there are other contributors as well such as fertilisers and clearing of forests.

Most of these technologies were not designed to cause harm, and those involved as producers and consumers may not have thought of their culpability for contributing to future damage to the environment and human life. Nevertheless, some individuals have greater roles and responsibilities. For example, many executives in fossil fuel companies and politicians with the power to reset energy priorities have done everything possible to restrain the shift to a sustainable energy economy.

Conceptualising the Technology of Evil

If technologies are implicated in evil, what is the best way to understand the connection? It could be said that an object designed and used for torture embodies evil. Embodiment seems appropriate if the primary purpose is for harm and the main use is for harm, but seldom is this sort of connection exclusive of other uses. A nuclear weapon, for example, might be used as an artwork, a museum exhibit, or a tool to thwart a giant asteroid hurtling towards earth.

Another option is to say that some technologies are “selectively useful” for harming others: they can potentially be useful for a variety of purposes but, for example, easier to use for torture than for brain surgery or keeping babies warm. To talk of selective usefulness instead of embodiment seems less essentialist, more open to multiple interpretations and uses.

Other terms are “abuse” and “misuse.” Think of a cloth covering a person’s face over which water is poured to give a simulation of drowning, used as a method of torture called waterboarding. It seems peculiar to say that the wet cloth embodies evil given that it is only the particular use that makes it a tool to cause harm to humans. “Abuse” and “misuse” have an ignominious history in the study of technology because they are often based on the assumption that technologies are inherently neutral. Nevertheless, these terms might be resurrected in speaking of the connection between technology and evil when referring to technologies that were not designed to cause harm and are seldom used for that purpose.

Consider next the role of technologies in contributing to climate change. For this, it is useful to note that most technologies have multiple uses and consequences. Oil production, for example, has various immediate environmental and health impacts. Oil, as a product, has multitudinous uses, such as heating houses, manufacturing plastics and fuelling military aircraft. The focus here is on a more general impact via the waste product carbon dioxide that contributes to global warming. In this role, it makes little sense to call oil evil in itself.

Instead, it is simply one player in a vast network of human activities that collectively are spoiling the environment and endangering future life on earth. The facilitators of evil in this case are the social and economic systems that maintain dependence on greenhouse gas sources and the psychological processes that enable groups and individuals to resist a shift to sustainable energy systems or to remain indifferent to the issue.

For climate change, and sustainability issues more generally, technologies are implicated as part of entrenched social institutions, practices and beliefs that have the potential to radically alter or destroy the conditions for human and non-human life. One way to speak of technologies in this circumstance is as partners. Another is to refer to them as actors or actants, along the lines of actor-network theory (Latour, 1987), though this gives insufficient salience to the psychological dimensions involved.

Another approach is to refer to technologies as extensions of humans. Marshall McLuhan (1964) famously described media as “extensions of man.” This description points to the way technologies expand human capabilities. Vehicles expand human capacities for movement, otherwise limited to walking and running. Information and communication technologies expand human senses of sight, hearing and speaking. Most relevantly here, weapons expand human capacities for violence, in particular killing and destruction. From this perspective, humans have developed technologies to extend a whole range of capacities, some of them immediately or indirectly harmful.

In social studies of technology, various frameworks have been used, including political economy, innovation, social shaping, cost-benefit analysis and actor-network theory. Each has advantages and disadvantages, but none of the commonly used frameworks emphasises moral evaluation or focuses on the way some technologies are designed or used for the purpose of harming humans and the environment.

Implications

The Pathology of Man is a deeply pessimistic and potentially disturbing book. Probing into the psychological foundations of violence and cruelty shows a side of human behaviour and thinking that is normally avoided. Most commentators prefer to look for signs of hope, and would finish a book such as this with suggestions for creating a better world. Bartlett, though, does not want to offer facile solutions.

Throughout the book, he notes that most people prefer not to examine the sources of human evil, and so he says that hope is actually part of the problem. By continually being hopeful and looking for happy endings, it becomes too easy to avoid looking at the diseased state of the human mind and the systems it has created.

Setting aside hope, nevertheless there are implications that can be derived from Bartlett’s analysis. Here I offer three possible messages regarding technology.

Firstly, if it makes sense to talk about human evil in a non-metaphorical sense, and to trace the origins of evil to features of human psychology, then technologies, as human creations, are necessarily implicated in evil. The implication is that a normative analysis is imperative. If evil is seen as something to be avoided or opposed, then those technologies most closely embodying evil are likewise to be avoided or opposed. This implies making judgements about technologies. In technology studies, this already occurs to some extent. However, common frameworks, such as political economy, innovation and actor-network theory, do not highlight moral evaluation.

Medical researchers do not hesitate to openly oppose disease, and in fact the overcoming of disease is an implicit foundation of research. Technology studies could more openly condemn certain technologies.

Secondly, if technology is implicated in evil, and if one of the psychological processes perpetuating evil is a lack of recognition of it and concern about it, there is a case for undertaking research that provides insights and tools for challenging the technology of evil. This has not been a theme in technology studies. Activists against torture technologies and military weaponry would be hard pressed to find useful studies or frameworks in the scholarship about technology.

One approach to the technology of evil is action research (McIntyre 2008; Touraine 1981), which involves combining learning with efforts towards social change. For example, research on the torture technology trade could involve trying various techniques to expose the trade, seeing which ones are most fruitful. This would provide insights about torture technologies not available via conventional research techniques.

Thirdly, education could usefully incorporate learning about the moral evaluation of technologies. Bartlett argues that one of the factors facilitating evil is the low moral development of most people, as revealed in the widespread complicity in or complacency about war preparation and wars, and about numerous other damaging activities.

One approach to challenging evil is to increase people’s moral capacities to recognise and act against evil. Technologies provide a convenient means to do this, because human-created objects abound in everyday life, so it can be an intriguing and informative exercise to figure out how a given object relates to killing, hatred, psychological projection and various other actions and ways of thinking involved in violence, cruelty and the destruction of the foundations of life.

No doubt there are many other ways to learn from the analysis of human evil. The most fundamental step is not to turn away but to face the possibility that there may be something deeply wrong with humans as a species, something that has made the species toxic to itself and other life forms. While it is valuable to focus on what is good about humans, to promote good it is also vital to fully grasp the size and depth of the dark side.

Acknowledgements

Thanks to Steven Bartlett, Lyn Carson, Kurtis Hagen, Kelly Moore and Steve Wright for valuable comments on drafts.

Contact details: bmartin@uow.edu.au

References

Ackroyd, Carol, Margolis, Karen, Rosenhead, Jonathan, & Shallice, Tim (1977). The technology of political control. London: Penguin.

Baron-Cohen, Simon (2011). The science of evil: On empathy and the origins of cruelty. New York: Basic Books.

Bartlett, Steven James (2005). The pathology of man: A study of human evil. Springfield, IL: Charles C. Thomas.

Bartlett, Steven James (2011). Normality does not equal mental health: The need to look elsewhere for standards of good psychological health. Santa Barbara, CA: Praeger.

Bartlett, Steven James (2013). The dilemma of abnormality. In Thomas G. Plante (Ed.), Abnormal psychology across the ages, volume 3 (pp. 1–20). Santa Barbara, CA: Praeger.

Baumeister, Roy F. (1997). Evil: Inside human violence and cruelty. New York: Freeman.

Citron, Danielle Keats (2014). Hate crimes in cyberspace. Cambridge, MA: Harvard University Press.

Gregory, Martyn (director and producer). (1995). The torture trail [television]. UK: TVF.

Jasper, James M. (1997). The art of moral protest: Culture, biography, and creativity in social movements. Chicago: University of Chicago Press.

Latour, Bruno (1987). Science in action: How to follow scientists and engineers through society. Milton Keynes: Open University Press.

Martin, Brian (1984). Uprooting war. London: Freedom Press.

Martin, Brian (2001). Technology for nonviolent struggle. London: War Resisters’ International.

Martin, Brian (2007). Justice ignited: The dynamics of backfire. Lanham, MD: Rowman & Littlefield.

Martin, Brian (2009). Managing outrage over genocide: case study Rwanda. Global Change, Peace & Security, 21(3), 275–290.

McIntyre, Alice (2008). Participatory action research. Thousand Oaks, CA: Sage.

McLuhan, Marshall (1964). Understanding media: The extensions of man. New York: New American Library.

Milgram, Stanley (1974). Obedience to authority. New York: Harper & Row.

Omega Foundation (2000). Crowd control technologies. Luxembourg: European Parliament.

Peck, M. Scott (1988). People of the lie: The hope for healing human evil. London: Rider.

Proctor, Robert N. (2011). Golden holocaust: Origins of the cigarette catastrophe and the case for abolition. Berkeley, CA: University of California Press.

Schmidt, Jeff (2000). Disciplined minds: A critical look at salaried professionals and the soul-battering system that shapes their lives. Lanham, MD: Rowman & Littlefield.

Touraine, Alain (1981). The voice and the eye: An analysis of social movements. Cambridge: Cambridge University Press.

Wittner, Lawrence S. (1993–2003). The struggle against the bomb, 3 volumes. Stanford, CA: Stanford University Press.

Wright, Steve (1998). An appraisal of technologies of political control. Luxembourg: European Parliament.

Author Information: Paul R. Smart, University of Southampton, ps02v@ecs.soton.ac.uk

Smart, Paul R. “(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 45-55.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uq


Image by BTC Keychain via Flickr / Creative Commons

 

Richard Heersmink’s (2018) article, A virtue epistemology of the Internet: Search engines, intellectual virtues, and education, provides an important and timely analysis of the Internet from the standpoint of virtue epistemology.[1] According to Richard, the Internet is an important epistemic resource, but it is one that comes with a range of epistemic hazards. Such hazards, he suggests, motivate a consideration of the ways in which individuals should interact with the Internet.

In particular, Richard appeals to a specific branch of virtue epistemology, known as virtue responsibilism, arguing that certain kinds of cognitive trait (e.g., curiosity and open-mindedness) are useful in helping us press maximal epistemic benefit from the Internet. Given the utility of such traits, coupled with the epistemic importance of the Internet, Richard suggests that educational policy should be adapted so as to equip would-be knowers with the cognitive wherewithal to cope with the epistemic challenges thrown up by the online environment.

There is, no doubt, something right about all this. Few would disagree with the claim that a certain level of discernment and discrimination is important when it comes to the evaluation of online content. Whether such ‘virtues’ are best understood from the perspective of virtue responsibilism or virtue reliabilism is, I think, a moot point, for I suspect that in the case of both virtue responsibilism and virtue reliabilism what matters is the way in which belief-forming informational circuits are subject to active configuration by processes that may be broadly construed as metacognitive in nature (Smart, in pressa). That, however, is a minor quibble, and it is one that is of little consequence to the issues raised in Richard’s paper.

For the most part, then, I find myself in agreement with many of the assumptions that motivate the target article. I agree that the Internet is an important epistemic resource that is unprecedented in terms of its scale, scope, and accessibility. I also agree that, at the present time, the Internet is far from an epistemically safe environment, and this raises issues regarding the epistemic standing of individual Internet users. In particular, it looks unlikely that the indiscriminate selection and endorsement of online information will do much to bolster one’s epistemic credentials.

We thus encounter something of a dilemma: As an epistemic resource, the Internet stands poised to elevate our epistemic standing, but as an open and public space the Internet provides ample opportunities for our doxastic systems to be led astray. The result is that we are obliged to divide the online informational cornucopia into a treasure trove of genuine facts and a ragbag collection of ‘false facts’ and ‘fake news.’ The information superhighway, it seems, promises to expand our epistemic power and potential, but the road ahead is one that is fraught with a dizzying array of epistemic perils, problems, and pitfalls. What ought we to do in response to such a situation?

It is at this point that I suspect my own views start to diverge from those of the target article. Richard’s response to the dilemma is to focus attention on the individual agent and consider the ways in which an agent’s cognitive character can be adapted to meet the challenges of the Internet. My own approach is somewhat different. It is borne out of three kinds of doubt: doubts about the feasibility (although not the value) of virtue-oriented educational policies, doubts about the basic validity of virtue theoretic conceptions of knowledge, and doubts about whether the aforementioned dilemma is best resolved by attempting to change the agent as opposed to the environment in which the agent is embedded. As always, space is limited and life is short, so I will restrict my discussion to issues that I deem to be of greatest interest to the epistemological community.

Reliable Technology

Inasmuch as intellectual virtues are required for online knowledge—i.e., knowledge that we possess as a result of our interactions and engagements with the Internet—they are surely only part of a much broader (and richer) story that includes details about the environment in which our cognitive systems operate. In judging the role of intellectual virtue in shielding us from the epistemic hazards of the online environment, it therefore seems important to have some understanding of the actual technologies we interact with.

This is important because it helps us understand the kinds of intellectual virtue that might be required, as well as the efficacy of specific intellectual virtues in helping us believe the truth (and thus working as virtues in the first place). Internet technologies are, of course, many and varied, and it will not be possible to assess their general relevance to epistemological debates in the present commentary. For the sake of brevity, I will therefore restrict my attention to one particular technology: blockchain.

Blockchain is perhaps best known for its role in supporting the digital cryptocurrency, Bitcoin. It provides us with a means of storing data in a secure fashion, using a combination of data encryption and data linking techniques. For present purposes, we can think of a blockchain as a connected set of data records (or data blocks), each of which contains some body of encrypted data. In the case of Bitcoin, of course, the data blocks contain data of a particular kind, namely, data pertaining to financial transactions. But this is not the only kind of data that can be stored in a blockchain. In fact, blockchains can be used to store information about pretty much anything. This includes online voting records, news reports, sensor readings, personal health records, and so on.

Once data is recorded inside a blockchain, it is very difficult to modify. In essence, the data stored within a blockchain is immutable, in the sense that it cannot be changed without ‘breaking the chain’ of data blocks, and thereby invalidating the data contained within the blockchain. This property makes blockchains of considerable epistemic significance, because it speaks to some of the issues (e.g., concerns about data tampering and malign forms of information manipulation) that are likely to animate epistemological debates in this area.
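To make the idea of ‘breaking the chain’ concrete, here is a deliberately minimal illustrative sketch in Python, not a description of any real blockchain system: each block records the hash of its predecessor, so altering the data in an earlier block leaves every later block pointing at a hash that no longer matches.

```python
# Minimal illustration (not a real blockchain): each block stores some data
# plus the hash of the previous block, so altering any earlier block changes
# its hash and "breaks the chain".

import hashlib
import json


def block_hash(block: dict) -> str:
    """Return the SHA-256 hash of a block's contents."""
    serialised = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(serialised).hexdigest()


def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}


def chain_is_valid(chain: list) -> bool:
    """Check that each block still points at the hash of its predecessor."""
    for prev, current in zip(chain, chain[1:]):
        if current["prev_hash"] != block_hash(prev):
            return False
    return True


# Build a three-block chain of (for example) sensor readings.
genesis = make_block("reading: 14.1C", prev_hash="0" * 64)
second = make_block("reading: 14.3C", prev_hash=block_hash(genesis))
third = make_block("reading: 14.2C", prev_hash=block_hash(second))
chain = [genesis, second, third]

print(chain_is_valid(chain))   # True

# Tampering with an earlier record invalidates everything after it.
genesis["data"] = "reading: 9.9C"
print(chain_is_valid(chain))   # False
```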

This does not mean, of course, that the information stored within a blockchain is guaranteed to be factually correct, in the sense of being true and thus yielding improvements in epistemic standing. Nevertheless, there are, I think, reasons to regard blockchain as an important technology relative to efforts to make the online environment a somewhat safer place for would-be knowers. Consider, for example, the title of the present article. Suppose that we wanted to record the fact that a person known as Paul Smart—that’s me—wrote an article with the title:

(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!

We can incorporate this particular piece of information into a blockchain using something called a cryptographic hash function, which yields a unique identifier for the block and all of its contents. In the case of the aforementioned title, the cryptographic hash (as returned by the SHA256 algorithm[2]) is:

7147bd321e79a63041d9b00a937954976236289ee4de6f8c97533fb6083a8532

Now suppose that someone wants to alter the title, perhaps to garner support for an alternative argumentative position. In particular, let’s suppose they want to claim that the title of the article is:

Fake News Alert: Intellectual Virtues Required for Online Knowledge!

From an orthographic perspective, of course, not much has changed. But the subtlety of the alteration is not something that can be used to cause confusion about the actual wording of the original title—the title that I intended for the present article. (Neither can it be used to cast doubt about the provenance of the paper—the fact that the author of the paper was a person called Paul Smart.) To see this, note that the hash generated for the ‘fake’ title looks nothing like the original:

cc05baf2fa7a439674916fe56611eaacc55d31f25aa6458b255f8290a831ddc4

It is this property that, at least in part, makes blockchains useful for recording information that might otherwise be prone to epistemically malign forms of information manipulation. Imagine, for the sake of argument, that climatological data, as recorded by globally distributed sensors, was stored in a blockchain. The immutability of such data makes it extremely difficult for anyone to manipulate the data in such a way as to confirm or deny the reality of year-on-year changes in global temperature. Neither is it easy to alter information pertaining to the provenance of existing data records, i.e., information about when, where, and how such data was generated.
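For readers who want to see how such digests are produced, the following minimal Python sketch computes SHA-256 hashes of the two titles using the standard hashlib library. The exact strings printed depend on the precise bytes supplied (spacing, punctuation and character encoding), so the sketch illustrates the avalanche effect rather than guaranteeing a match with the specific values quoted above.

```python
import hashlib

original = "(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!"
altered = "Fake News Alert: Intellectual Virtues Required for Online Knowledge!"

for title in (original, altered):
    # Hash the UTF-8 bytes of the title and print the hexadecimal digest.
    digest = hashlib.sha256(title.encode("utf-8")).hexdigest()
    print(digest)

# Although only two characters differ, the two digests bear no resemblance
# to one another, which is what makes tampering with a recorded title easy
# to detect once the original hash has been stored in a blockchain.
```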

None of this should delude us into thinking that blockchain technology is a panacea for Internet-related epistemic problems—it isn’t! Neither does blockchain obviate the need for agents to exercise at least some degree of intellectual virtue when it comes to the selection and evaluation of competing data streams. Nevertheless, there is, I think, something that is of crucial epistemological interest and relevance here—something that makes blockchain and other cybersecurity technologies deserving of further epistemological attention. In particular, such technologies may be seen as enhancing the epistemic safety of the online environment, and thus perhaps reducing the need for intellectual virtue.

In this sense, the epistemological analysis of Internet technologies may be best approached from some variant of modal epistemology—e.g., epistemological approaches that emphasize the modal stability of true beliefs across close possible worlds (Pritchard, 2009, chap. 2). But even if we choose to countenance an approach that appeals to issues of intellectual virtue, there is still, I suggest, a need to broaden the analytic net to include technologies that (for the time being at least) lie beyond the bounds of the individual cognitive agent.

Safety in Numbers

“From an epistemic perspective,” Richard writes, “the most salient dimension of the Internet is that it is an information space” (Heersmink, 2018, p. 5). Somewhat surprisingly, I disagree. Although it is obviously true that the Internet is an information space, it is not clear that this is its most salient feature, at least from an epistemological standpoint. In particular, there is, I suggest, a sense in which the Internet is more than just an information space. As is clear from the explosive growth in all things social—social media, social networks, social bots, and so on—the Internet functions as a social technology, yielding all manner of opportunities for people to create, share and process information in a collaborative fashion. The result, I suggest, is that we should not simply think of the Internet as an information space (although it is surely that), we should also view it as a social space.

Viewing the Internet as a social space is important because it changes the way we think about the epistemic impact of the Internet, relative to the discovery, production, representation, acquisition, processing and utilization of knowledge. Smart (in pressb), for example, suggests that some online systems function as knowledge machines, which are systems in which some form of knowledge-relevant processing is realized by a socio-technical mechanism, i.e., a mechanism whose component elements are drawn from either the social (human) or the technological realm.

An interesting feature of many of these systems is the way in which the reliable (or truth-conducive) nature of the realized process is rooted in the socio-technical nature of the underlying (realizing) mechanism. When it comes to human computation or citizen science systems, for example, user contributions are typically solicited from multiple independent users as a means of improving the reliability of specific epistemic outputs (Smart, in pressb; Smart and Shadbolt, in press; Watson and Floridi, 2018). Such insights highlight the socially-distributed character of at least some forms of online knowledge production, thereby moving us beyond the realms of individual, agent-centric analyses.

On a not altogether unrelated note, it is important to appreciate the way in which social participation can itself be used to safeguard online systems from various forms of malign intervention. One example is provided by the Google PageRank algorithm. In this case, any attempt to ‘artificially’ elevate the ranking assigned to specific contributions (e.g., a user’s website) is offset by the globally-distributed nature of the linking effort, coupled with the fact that links to a specific resource are themselves weighted by the ranking of the resource from which the link originates. This makes it difficult for any single agent to subvert the operation of the PageRank algorithm.
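The weighting idea can be made concrete with a toy sketch. The code below implements a simplified version of the originally published PageRank computation, not Google’s production system, which incorporates many additional signals: each page passes a share of its own rank to the pages it links to, so links from highly ranked pages count for more, and a page that no well-ranked page links to has little rank to pass on to anything it promotes.

```python
def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        # Every page keeps a small baseline share of rank.
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, targets in links.items():
            if not targets:
                continue
            # A page distributes its current rank evenly across its outlinks,
            # so a link from a highly ranked page is worth more.
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank


# Toy web: 'spam' links to the page it wants to promote ('me'), but since no
# other page links to 'spam', the boost it can pass on remains small.
toy_web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "spam": ["me"],
    "me": [],
}
print(pagerank(toy_web))
```

Running the sketch on the toy web shows that the ‘spam’ page can confer only a small boost on its target, which is the property, writ large, that makes it difficult for any single agent to subvert the ranking.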

Even ostensibly non-social technologies can be seen to rely on the distributed and decentralized nature of the Internet. In the case of blockchain, for example, multiple elements of a peer-to-peer network participate in the computational processes that make blockchain work. In this way, the integrity of the larger system is founded on the collaborative efforts of an array of otherwise independent computational elements. And it is this that (perhaps) allows us to think of blockchain’s epistemically-desirable features as being rooted in something of a ‘social’ substrate.

All of this, I suggest, speaks in favor of an approach that moves beyond a preoccupation with the properties of individual Internet users. In particular, there seems to be considerable merit in approaching the Internet from a more socially-oriented epistemological perspective. It is easy to see the social aspects of the Internet as lying at the root of a panoply of epistemic concerns, especially when it comes to the opportunities for misinformation, deception, and manipulation. But in light of the above discussion, perhaps an alternative, more positive, take on the Internet (qua social space) starts to come into sharper focus. This is a view that highlights the way in which certain kinds of online system can work to transform a ‘vice’ into a ‘virtue,’ exploiting the social properties of the Internet for the purposes of dealing with reliability-related concerns.

Image by Dariorug via Flickr / Creative Commons

 

Filter Bubblicious

Search engines form one of the focal points of Richard’s analysis, and, as with previous work in this area, Richard finds at least some aspects of their operation to be highly problematic. A particular issue surfaces in respect of personalized search. Here, Richard’s analysis echoes the sentiments expressed by other epistemologists who regard personalized search algorithms as of dubious epistemic value.

In fact, I suspect the consensus that has emerged in this area fails to tell the whole story about the epistemic consequences of personalized search. Indeed, from a virtue epistemological position, I worry that epistemologists are in danger of failing to heed their own advice—prematurely converging on a particular view without proper consideration of competing positions. In my new-found role as the virtue epistemologist’s guardian angel (or should that be devil’s advocate?), I will attempt to highlight a couple of reasons why I think more empirical research is required before we can say anything useful about the epistemological impact of personalized search algorithms.

My first worry is that our understanding of the extent to which search results and subsequent user behavior are affected by personalization is surprisingly poor. Consider, for example, the results of one study, which attempted to quantify the effect of personalization on search results (Hannak et al., 2013). Using an empirical approach, Hannak et al. (2013) report a demonstrable personalization effect, with 11.7% of search results exhibiting differences due to personalization. Interestingly, however, the effect of personalization appeared to be greater for search results with lower rankings; highly ranked results (i.e., those appearing at the top of a list of search results) appeared to be much less affected by personalization.

This result is interesting given the observation that college students “prefer to click on links in higher positions even when the abstracts are less relevant to the task at hand” (Heersmink, 2018, p. 6). From one perspective, of course, this tendency looks like a vice that jeopardizes the epistemic standing of the individual user. And yet, from another perspective, it looks like the preference for higher ranked search results is poised to negate (or at least reduce) the negative epistemological effects of personalized search. What we seem to have here, in essence, is a situation in which one kind of ‘intellectual vice’ (i.e., a tendency to select highly-ranked search results) is playing something of a more positive (virtuous?) role in mitigating the negative epistemological sequelae of a seemingly vicious technology (i.e., personalized search).

None of this means that the epistemic effects of personalized search are to the overall benefit of individual users; nevertheless, the aforementioned results do call for a more nuanced and empirically informed approach when considering the veritistic value of search engines, as well as other kinds of Internet-related technology.

A second worry relates to the scope of the epistemological analysis upon which judgements about the veritistic value of search engines are based. In this case, it is unclear whether analyses that focus their attention on individual agents are best placed to reveal the full gamut of epistemic costs and benefits associated with a particular technology, especially one that operates in the socio-technical ecology of the Internet. To help us understand this worry in a little more detail, it will be useful to introduce the notion of mandevillian intelligence (Smart, in pressc; Smart, in pressd).

Mandevillian intelligence is a specific form of collective intelligence in which the cognitive shortcomings and epistemic vices of the individual agent are seen to yield cognitive benefits and epistemic virtues at the collective or social level of analysis, e.g., at the level of collective doxastic agents (see Palermos, 2015) or socio-epistemic systems (see Goldman, 2011). According to this idea, personalized search systems may play a productive role in serving the collective cognitive good, providing a means by which individual vices (e.g., a tendency for confirmation bias) are translated into something that more closely resembles an epistemic virtue (e.g., greater cognitive coverage of a complex space of thoughts, ideas, opinions, and so on). Consider, for example, the way in which personalized search may help to focus individual attention on particular bodies of information, thereby restricting access to a larger space of ideas, opinions, and other information.

While such forms of ‘restricted access’ or ‘selective information exposure’ are unlikely to yield much in the way of an epistemic benefit for the individual agent, it is possible that by exploiting (and, indeed, accentuating!) an existing cognitive bias (e.g., confirmation bias), personalized search may work to promote cognitive diversity, helping to prevent precipitant forms of cognitive convergence (see Zollman, 2010) and assisting with the epistemically optimal division of cognitive labor (see Muldoon, 2013). This possibility reveals something of a tension in how we interpret or evaluate the veritistic value of a particular technology or epistemic practice. In particular, it seems that assessments of veritistic value may vary according to whether our epistemological gaze is directed towards individual epistemic agents or the collective ensembles in which those agents are situated.

The Necessity of Virtue

As Richard notes, virtue epistemology is characterized by a shift in emphasis, away from the traditional targets of epistemological analysis (e.g., truth, justification and belief) and towards the cognitive properties of would-be knowers. “Virtue epistemology,” Richard writes, “is less concerned with the nature of truth and more concerned with the cognitive character of agents” (Heersmink, 2018, p. 2). This is, no doubt, a refreshing change, relative to the intellectual orientation of traditional philosophical debates.

Nevertheless, I assume that virtue epistemologists still recognize the value and priority of truth when it comes to issues of epistemic evaluation. Someone who holds false beliefs is not the possessor of knowledge, and this remains the case irrespective of whatever vices and virtues the agent has. In other words, it does not matter how careful, attentive and assiduous an agent is in selecting and evaluating information, if what the agent believes is false, they simply do not know.

What seems to be important in the case of virtue epistemology is the role that intellectual virtue plays in securing the truth of an agent’s beliefs. In particular, the central feature of virtue epistemology (at least to my mind) is that the truth of an agent’s beliefs stems from the exercise of intellectual virtue. It is thus not the case that truth is unimportant (or less important) when it comes to issues of positive epistemic standing; rather, what matters is the role that intellectual virtue plays in establishing the truth of an agent’s beliefs. An agent is thus a bona fide knower when they believe the truth and the truth in question is attributable to some aspect of their cognitive character, specifically, a cognitive trait (virtue responsibilism) or cognitive faculty (virtue reliabilism).

What then makes something a vice or virtue seems to be tied to the reliability of token instantiations of processes that are consistent with an agent’s cognitive character. Intellectual virtues are thus “cognitive character traits that are truth-conducive and minimalise error” (Heersmink, 2018, p. 3), while intellectual vices are characterized as “cognitive character traits that are not truth-conducive and do not minimalise error” (Heersmink, 2018, p. 3). It is this feature of the intellectual virtues—the fact that they are, in general, reliable (or give rise to reliable belief-relevant processes)—that looks to be important when it comes to issues of epistemic evaluation.

So this is what I find problematic about virtue theoretic approaches to knowledge. (Note that I am not an epistemologist by training, so this will require a generous, and hopefully virtue-inspiring, swig of the ole intellectual courage.) Imagine a state of affairs in which the Internet was (contrary to the present state of affairs) a perfectly safe environment: one where the factive status of online information was guaranteed as a result of advances in cyber-security techniques and intelligent fact-checking services. Next, let us imagine that we have two individuals, Paul and Sophia, who differ with respect to their cognitive character. Paul is the less virtuous of the two, unreflectively and automatically accepting whatever the Internet tells him. Sophia is more circumspect, wary of being led astray by (the now non-existent) fake news.

Inasmuch as we see the exercise of intellectual virtue as necessary for online knowledge, it looks unlikely that poor old Paul can be said to know very much. This is because the truth of Paul’s beliefs is not the result of anything that warrants the label ‘intellectual virtue.’ Paul, of course, does have a lot of true beliefs, but the truth of these beliefs does not stem from the exercise of his intellectual virtues—if, indeed, he has any. In fact, inasmuch as there is any evidence of virtue in play here, it is probably best attributed to the technologies that work to ensure the safety of the online environment. The factive status of Paul’s beliefs thus has more to do with the reliability of the Internet than it does with the elements of his cognitive character.

But is it correct to say that Paul has no online knowledge in this situation? Personally, I do not have this intuition. In other words, in a perfectly safe environment, I can see no reason why we should restrict knowledge attributions to agents whose beliefs are true specifically as the result of intellectual virtue. My sense is that even the most unreflective of agents could be credited with knowledge in a situation where there was no possibility of them being wrong. And if that is indeed the case, then why insist that it is only the exercise of intellectual virtue that underwrites positive epistemic standing?

After all, it seems perfectly possible, to my mind, that Sophia’s epistemic caution contributes no more to the minimization of error in an epistemically benign (i.e., safe) environment than does Paul’s uncritical acceptance. (In fact, given the relative efficiency of their doxastic systems, it may very well be the case that Sophia ends up with fewer true beliefs than Paul.) It might be claimed that this case is invalidated by a failure to consider the modal stability of an agent’s beliefs relative to close possible worlds, as well as perhaps their sensitivity to counterfactual error possibilities. But given the way in which the case is characterized, I suggest that there are no close possible worlds that should worry us—the cybersecurity and fact checking technologies are, let us assume, sufficiently robust as to ensure the modal distance of those worrisome worlds.

One implication of all this is to raise doubts about the necessity of intellectual virtue, relative to our conceptual understanding of knowledge. If there are cases where intellectual virtue is not required for positive epistemic standing, then intellectual virtue cannot be a necessary condition for knowledge attribution. And if that is the case, then why should intellectual virtue form the basis of an approach that is intended to deal with the epistemic shortcomings of the (contemporary) Internet?

Part of the attraction of virtue epistemology, I suspect, is the way in which a suite of generally reliable processes are inextricably linked to the agent who is the ultimate target of epistemic evaluation. This linkage, which is established via the appeal to cognitive character, helps to ensure the portability of an agent’s truth-tracking capabilities—it helps to ensure, in other words, that wherever the agent goes their reliable truth-tracking capabilities are sure to follow.

However, in an era where our doxastic systems are more-or-less constantly plugged into a reliable and epistemically safe environment, it is not so clear that agential capabilities are relevant to epistemic standing. This, I suggest, raises doubts about the necessity of intellectual virtue in securing positive epistemic status, and it also (although this is perhaps less clear) encourages us to focus our attention on some of the engineering efforts (as opposed to agent-oriented educational programs) that might be required to make the online world an epistemically safer place.

Conclusion

What, then, should we make of the appeal to virtue epistemology in our attempt to deal with the epistemic hazards of the Internet? My main concern is that the appeal to virtue epistemology (and the emphasis placed on intellectual virtue) risks an unproductive focus on individual human agents at the expense of both the technological and social features of the online world. This certainly does not rule out the relevance of virtue theoretic approaches as part of our attempt to understand the epistemic significance of the Internet, but other approaches (e.g., modal reliabilism, process reliabilism, distributed reliabilism, and systems-oriented social epistemology) also look to be important.

Personally, I remain agnostic with regard to the relevance of different epistemological approaches, although I worry about the extent to which virtue epistemology is best placed to inform policy-related decisions (e.g., those relating to education). In particular, I fear that by focusing our attention on individual agents and issues of intellectual virtue, we risk overlooking some of the socio-epistemic benefits of the Internet, denigrating a particular technology (e.g., personalized search) on account of its failure to enhance individual knowledge, while ignoring the way a technology contributes to more collective forms of epistemic success.

In concluding his thought-provoking paper on virtue epistemology and the Internet, Richard suggests that “there is an important role for educators to teach and assess [intellectual] virtues as part of formal school and university curricula, perhaps as part of critical thinking courses” (Heersmink, 2018, p. 10). I have said relatively little about this particular issue in the present paper. For what it’s worth, however, I can see no reason to object to the general idea of Internet-oriented educational policies. The only caveat, perhaps, concerns the relative emphasis that might be placed on the instillation of intellectual virtue as opposed to the inculcation of technical skills, especially those that enable future generations to make the online world a safer place.

No doubt there is room for both kinds of pedagogical program (assuming they can even be dissociated). At the very least, it seems to me that the effort to resolve a problem (i.e., engineer a safer Internet) is just as important as the effort to merely cope with it (i.e., acquire a virtuous cognitive character). But, in any case, when it comes to education and learning, we should not lose sight of the fact that the Internet is itself something that is used for educational purposes. Perhaps, then, the more important point about education and the Internet is not so much the precise details of what gets taught, so much as the issue of whether the Internet (with all its epistemic foibles) is really the best place to learn.

Contact details: ps02v@ecs.soton.ac.uk

References

Goldman, A. I. (2011). A guide to social epistemology. In A. I. Goldman and D. Whitcomb (Eds.), Social Epistemology: Essential Readings, pp. 11–37. New York, New York, USA: Oxford University Press.

Hannak, A., P. Sapiezynski, A. Molavi Kakhki, B. Krishnamurthy, D. Lazer, A. Mislove, and C. Wilson (2013). Measuring personalization of Web search. In D. Schwabe, V. Almeida, H. Glaser, R. Baeza-Yates, and S. Moon (Eds.), Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, pp. 527–538. ACM.

Heersmink, R. (2018). A virtue epistemology of the Internet: Search engines, intellectual virtues, and education. Social Epistemology 32 (1), 1–12.

Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass 8 (2), 117–125.

Palermos, S. O. (2015). Active externalism, virtue reliabilism and scientific knowledge. Synthese 192 (9), 2955–2986.

Pritchard, D. (2009). Knowledge. Basingstoke, England, UK: Palgrave Macmillan.

Smart, P. R. (in pressa). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. (in pressb). Knowledge machines. The Knowledge Engineering Review.

Smart, P. R. (in pressc). Mandevillian intelligence. Synthese.

Smart, P. R. (in pressd). Mandevillian intelligence: From individual vice to collective virtue. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Socially Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. and N. R. Shadbolt (in press). The World Wide Web. In J. Chase and D. Coady (Eds.), The Routledge Handbook of Applied Epistemology. New York, New York, USA: Routledge.

Watson, D. and L. Floridi (2018). Crowdsourced science: Sociotechnical epistemology in the e-research paradigm. Synthese 195 (2), 741–764.

Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis 72 (1), 17–35.

[1] This work is supported under SOCIAM: The Theory and Practice of Social Machines. The SOCIAM Project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/1 and comprises the Universities of Southampton, Oxford and Edinburgh.

[2] See http://www.xorbin.com/tools/sha256-hash-calculator [accessed: 30th  January 2018].

Author Information: Jensen Alex, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson, University of North Florida, jonathan.matheson@gmail.com

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “Conscientiousness and Other Problems: A Reply to Zagzebski.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 10-13.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3Sr


We’d first like to thank Dr. Zagzebski for engaging with our review of Epistemic Authority. We want to extend the dialogue by offering brief comments on several issues that she raised.

Conscientiousness

In our review we brought up the case of a grieving father who simply could not believe that his son had died despite conclusive evidence to the contrary. This case struck us as a problem case for Zagzebski’s account of rationality. For Zagzebski, rationality is a matter of conscientiousness, and conscientiousness is a matter of using your faculties as best you can to get to truth, where the best guide for a belief’s truth is its surviving conscientious reflection. The problem raised by the grieving father is that his belief that his son is still alive will continuously survive his conscientious reflection (since he is psychologically incapable of believing otherwise), yet it is clearly an irrational belief. In her response, Zagzebski makes the following claims:

(A) “To say he has reasons to believe his son is dead is just to say that a conscientiously self-reflective person would treat what he hears, reads, sees as indicators of the truth of his son’s death. So I say that a reason just is what a conscientiously self-reflective person sees as indicating the truth of some belief.” (57)

and,

(B) “a conscientious judgment can never go against the balance of one’s reasons since one’s reasons for p just are what one conscientiously judges indicate the truth of p.” (57)

These claims about the case lead to a dilemma. Either conscientiousness is to be understood subjectively or objectively, and either way we see some issues. First, if we understand conscientiousness subjectively, then the father seems to pass the test. We can suppose that he is doing the best he can to believe truths, but the psychological stability of this one belief causes the dissonance to be resolved in atypical ways. So, on a subjective construal of conscientiousness, he is conscientious and his belief about his son has survived conscientious reflection.

We can stipulate that the father is doing the best he can with what he has, yet his belief is irrational. Zagzebski’s (B) above seems to fit a subjective understanding of conscientiousness and leads to such a verdict. This is also how we read her in Epistemic Authority more generally. Second, if we understand conscientiousness objectively, then it follows that the father is not being conscientious. There are objectively better ways to resolve his psychic dissonance even if they are not psychologically open to him.

So, the objective understanding of conscientiousness does not give the verdict that the grieving father is rational. Zagzebski’s (A) above fits with an objective understanding of conscientiousness. The problem with the objective understanding of conscientiousness is that it is much harder to get a grasp on what it is. Doing the best you can with what you have has a clear meaning on the subjective level and gives a nice responsibilist account of conscientiousness. However, when we abstract away from the subject’s best efforts and the subject’s faculties, how should we understand conscientiousness? Is it to believe in accordance with what an ideal epistemic agent would conscientiously believe?

To us, while the objective understanding of conscientiousness avoids the problem, it comes with new problems, chief among them the need for a fleshed-out account of conscientiousness, so understood. In addition, the objective construal of conscientiousness does not appear to be suited to how Zagzebski deploys the concept in other areas of the book. For instance, regarding her treatment of peer disagreement, Zagzebski claims that each party should resolve the dissonance in a way that favors what they trust most when thinking conscientiously about the matter. The conscientiousness in play here sounds quite subjective, since rational resolution is simply a matter of sticking with what one trusts the most (even if an ideal rational agent wouldn’t be placing their trust in the same states and even when presented with evidence to the contrary).

Reasons

Zagzebski distinguishes between 1st and 3rd person reasons, in part, to include things like emotions as reasons. For Zagzebski,

“1st person or deliberative reasons are states of mind that indicate to me that some belief is true. 3rd person, or theoretical reasons, are not states of mind, but are propositions that are logically or probabilistically connected to the truth of some proposition. (What we call evidence is typically in this category)” (57)

We are troubled by the way that Zagzebski employs this distinction. First, it is not clear how these two kinds of reasons are related. Does a subject have a 1st person reason for every 3rd person reason? After all, not every proposition that is logically or probabilistically connected to the truth of a proposition is part of an individual’s evidence or is one of their reasons. So, are the 3rd person reasons that one possesses reasons that one has access to by way of a 1st person reason? How could a 3rd person reason be a reason that I have if not by way of some subjective connection?

The relation between these two kinds of reasons deserves further development since Zagzebski puts this distinction to a great deal of work in the book. The second issue results from Zagzebski’s claim that, “1st person and 3rd person reasons do not aggregate.” (57)  If 1st and 3rd person reasons do not aggregate, then they do not combine to give a verdict as to what one has all-things-considered reason to believe. This poses a significant problem in cases where one’s 1st and 3rd person reasons point in different directions.

Zagzebski’s focus is on one’s 1st person reasons, but what then of one’s 3rd person reasons? 3rd person reasons are still reasons, yet if they do not aggregate with 1st person reasons, and 1st person reasons are determining what one should believe, it’s hard to see what work is left for 3rd person reasons. This is quite striking since these are the very reasons epistemologists have focused on for centuries.

Zagzebski’s embrace of 1st person reasons is ostensibly a movement to integrate the concepts of rationality and truth with resolutely human faculties (e.g. emotion, belief, and sense-perception) that have largely been ignored by the Western philosophical canon. Her critical attitude toward Western hyper-intellectualism and the rationalist worldview is understandable and, in certain ways, admirable. Perhaps the movement to engage emotion, belief, and sense-perception as epistemic features can be preserved, but only in the broader context of an evidence-centered epistemology. Further research should channel this movement toward an examination of how non-traditional epistemic faculties, as 1st person reasons, may be mapped to 3rd person reasons in a way that is cognizant of self-trust in personal experience; that is, toward an account of aggregation that is grounded fundamentally in evidence.

Biases

In the final part of her response, Zagzebski claims that the insight regarding prejudice within communities can bolster several of her points. She refers specifically to her argument that epistemic self-trust commits us to epistemic trust in others (and its expansion to communities), as well as her argument about communal epistemic egoism and the Rational Recognition Principle. She emphasizes the importance of communities regarding others as trustworthy and rational, which would lead to the recognition of biases within them—something that would not happen if communities relied on epistemic egoism.

However, biases have staying power beyond egoism. Even those who are interested in widening and deepening their perspective through engaging with others can nevertheless have deep biases that affect how they integrate this information. Although Zagzebski may be right in emphasizing the importance of communities acting in this way, it seems too idealistic to imply that such honest engagement would result in the recognition and correction of biases. While such engagement might highlight important disagreements, Zagzebski’s analysis of disagreement, where it is rational to stick with what you trust most, will far too often be an open invitation to maintain (if not reinforce) one’s own biases and prejudice.

It is also important to note that the worry concerning biases and prejudice cannot be resolved by emphasizing a move to communities, given that communities are subject to the same biases and prejudices as the individuals that compose them. Individuals, in trusting their own communities, will only reinforce the biases and prejudice of their members. So, this move can make things worse, even if sometimes it can make things better. Zagzebski’s expansion of self-trust to communities and her Rational Recognition Principle commit communities only to recognize others as (prima facie) trustworthy and rational by means of recognizing their own epistemic faculties in those others.

However, doing this does not do much in terms of the disclosure of biases given that communities are not committed to trust the beliefs of those they recognize as rational and trustworthy. Under Zagzebski’s view, it is possible for a community to recognize another as rational and trustworthy, without necessarily trusting their beliefs—all without the need to succumb to communal epistemic egoism. Communities are, then, able to treat disagreement in a way that resolves dissonance for them.

That is, they can do so by trusting their own beliefs more than those of other communities. This is so even when they recognize those communities as being as rational and trustworthy as themselves because, on Zagzebski’s view, communities are justified in maintaining their beliefs over those of others not for egoistic reasons but because, having withstood conscientious self-reflection, their own beliefs are what they trust most. Resolving dissonance from disagreement in this way is clearly more detrimental than it is beneficial, especially in the case of biased individuals and communities, since it would lead them to keep their biases.

Although, as Zagzebski claims, attention to cases of prejudice within communities may help give more importance to her argument about the extension of self-trust to the communal level, it does not do much in terms of disclosing biases insofar as dissonance from disagreement is resolved in the way she proposes. Her proposal leads not to the disclosure of biases, as she implies, but to their reinforcement, given that biases, though plausibly unrecognized as such, are what communities and individuals would trust most in these cases.

Contact details: jonathan.matheson@gmail.com

References

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “A Review of Linda Zagzebski’s Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 29-34.

Zagzebski, Linda T. Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief. Oxford University Press, 2015.

Zagzebski, Linda T. “Trust in Others and Self-Trust: Regarding Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 56-59.

Author Information: Neil Levy, Macquarie University, neil.nl.levy@gmail.com

Levy, Neil. “The Bad News About Fake News.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 20-36.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3GV

Image credit: Paul Townsend, via flickr

Abstract

We are surrounded by sources of information of dubious reliability, and very many people consume information from these sources. This paper examines the impact of these reports on our beliefs. I will argue that fake news is more pernicious than most of us realise, leaving long-lasting traces on our beliefs and our behavior even when we consume it knowing it is fake, or when the information it contains is corrected. These effects are difficult to correct. We therefore ought to avoid fake or dubious news and work to eliminate it.

We consume a great deal of fiction. We seek it out for entertainment and we are plunged into it inadvertently. While the dangers of fiction have been a subject of philosophical controversy since Plato, the contemporary environment raises new worries, and also provides new ways of inquiring into them. In this paper, I focus on a subset of fictions: that subset that has come to be known as fake news. Fake news is widely held to have played a surprisingly large role in recent political events and appears to be proliferating unchecked. Its scrutiny is among the most urgent problems confronting contemporary epistemology.

Fake news is the presentation of false claims that purport to be about the world in a format and with a content that resembles the format and content of legitimate media organisations.[1] Fake news is produced and reproduced by a range of organisations. Some of them manufacture fake news deliberately, to entertain, to seek to influence events or to make money through the provision of clickbait (Allcott & Gentzkow 2017). Some outlets serve as conduits for fake news due to deliberately permissive filters for items that support their world view, operating a de facto “print first, ask questions later” policy (the UK Daily Mail might be regarded as an instance of such a source; see Kharpal 2017). Genuinely reputable news organizations often reproduce fake news: sometimes because they are taken in by it (for one example at random, see Irvine 2017), but more often deliberately, either to debunk it or because politicians who they cannot ignore retail it.

Fake news raises a number of obvious concerns. Democracies require informed voters if they are to function well. Government policy can be an effective means of pursuing social goals only if those who frame it have accurate conceptions of the relevant variables. As individuals, we want our beliefs to reflect the way the world is, for instrumental reasons and for intrinsic reasons. Fake news can lead to a worse informed populace and take in those in positions of power, thereby threatening a range of things we value. It might have genuinely disastrous consequences. However, while the threat from fake news is serious, many believe that it arises only in limited circumstances. It is only to the extent to which people are naïve consumers of fake news (failing to recognize it for what it is) that it is a problem. Careful consumption and fact checking can eliminate the problem for responsible individuals.[2]

In fact people often knowingly consume fake news. Some consume it in order to know what the credulous believe. Others confess to consuming fake news for entertainment. Most centrally, in recent months, fake news has been unavoidable to those who attempt to keep up with the news at all, because it has stemmed from the office of the most powerful man in the world. Journalists have seen it as their duty to report this fake news (often, but not always, as fake), and many people believe that they have a duty to read this reporting. Fact checks, for instance, repeat fake news, if only to debunk it.

According to what I will call the naïve view of belief and its role in behavior, fake news is a problem when and to the extent to which it is mistaken for an accurate depiction of reality, where the measure of such a mistake is sincere report. On the naïve view, we avoid the mistake by knowing consumption of fake news, and by correction if we are taken in. The naïve view entails that careful consumption of fake news, together with assiduous fact checking, avoids any problems. It entails, inter alia, that reading the fact check is at worst an innocuous way of consuming fake news.

The naïve view seems common sense. Moreover, advocates can point to extensive psychological research indicating that in most contexts even young children have little difficulty in distinguishing fact from fantasy (Weisberg 2013). Fiction, it seems, poses no problems when it is appropriately labelled as such; nor should fake news. I will argue that the naïve view is false. Worries about fake news may indeed be more serious when it is consumed by those who mistake it for genuine, but more sophisticated consumers are also at risk. Moreover, fake news corrected by fact checking sites is not fake news disarmed; it continues to have pernicious effects, I will suggest.

Some of these effects have received a great deal of attention in the psychological literature, if not the philosophical literature, though not in the context of fake news specifically. There is a great deal of evidence that people sometimes acquire beliefs about the world outside the story from fictions in a way that directly reflects the content of the claims made in the fiction,[3] and there is a great deal of evidence that people are surprisingly unresponsive to corrections of false claims once they come to accept them. To a large extent, I simply review this evidence here and show how it applies in the context of fake news. In addition, though, I will argue for a claim that has not previously been defended: consuming fake news shapes our further beliefs and our behavior even in those (many) cases in which we do not acquire false beliefs directly from the fiction. The representations we acquire from fake news play some of the same roles in subsequent cognition that false beliefs would play.

I will not argue that the costs arising from the consumption of fake news outweigh the benefits. The claim that the media should report fake news when it is retailed by central figures on the political landscape is a compelling one, and I do not aim to rebut it. However, showing that the knowing consumption of fake news is itself a serious problem is a significant enough goal to justify a paper. If I am right that the costs of consumption are far from trivial, that should serve as an impetus for us to formulate proposals to minimize those costs.

Against the Naïve View

The naïve view assumes that mental representations are reliably and enduringly categorized into kinds: beliefs, desires, fantasies and fictions, and that we automatically or easily reclassify them given sufficient reason to do so. On this picture, fake news is a problem when it results in representations that are categorized as beliefs. That problem is averted by ensuring that the representations we form as we consume fake news are not wrongly categorized. We will then not access them when we self-ascribe beliefs and they will not guide our behavior in the manner characteristic of beliefs. Sometimes, of course, we make a mistake and are misled, and a false claim comes to be categorized as a belief. But the problem may be solved by a retraction. All going well, encountering good evidence that a claim is false results in its reclassification.

This naïve view is false, however. The available evidence suggests that mental representations are not reliably and enduringly stored into exclusive categories. Instead, the self-ascription of beliefs is sensitive to a range of cues, internal and external, in ways that can transform an internal state from a fantasy into a belief.

Minded animals continually form representational states: representations of the world around them and (in many cases) of internally generated states (Cheney & Seyfarth 2007; Camp 2009). These representations include beliefs or belief-like states, desires, and, in the human case at least, imaginings (which are presumably generated because it is adaptive to be able to simulate counterfactuals). These representations have certain causal powers in virtue of the kind of states they are; beliefs, for instance, are apt to be used as premises in reasoning and in systematic inference (Stich 1978; AU 2015). These representations include many subpersonal states, to which the language of commonsense psychology applies only uneasily, if at all. For ease of reference, I will call these states ground level representations.

When we ascribe states to ourselves, these representations powerfully shape the kind and content of the attitude ascribed. It remains controversial how exactly this occurs, but there is widespread agreement that cues—like questions probing what we believe—cause the activation of semantically related and associatively linked representations, which guide response (Collins & Loftus 1975; Buckner 2011). Perhaps we recall a previous conversation about this topic, and our own conclusion (or verbal expression of the conclusion). Perhaps we have never thought about the topic before, but our ground level representations entail a response. The person may generate that response effortfully, by seeing what their representations entail, or automatically.

Belief self-ascription is powerfully shaped by ground-level representations, in ways that make it highly reliable much of the time. Beliefs entailed by these representations, or generated by recalling past acts of endorsement, are likely to be very stable across time: asked what she believes about a topic at t or at t1, for any arbitrary values of t and t1, the person is likely to ascribe the same belief (of course, if the person is asked at t and t1, she is even more likely to ascribe the same belief because she may recall the earlier episode). But often the representations underdetermine how we self-ascribe. In those circumstances, the belief may be unstable; we might self-ascribe p were we asked at t but ~p were we asked at t1. When ground-level representations underdetermine beliefs, we come to ascribe them by reference to other cues, internal and external.

Consider cognitive dissonance experiments; for example, the classic essay writing paradigm. Participants are assigned to one of two groups. One group is paid to write an essay defending a claim that we have good reason to think is counter-attitudinal (college students may be paid to defend the claim that their tuition fees should rise, for instance), while the other group is asked to defend the same claim. (Participants in this arm may be paid a small amount of money as well, but compliance is secured by mild situational pressure; essentially appealing to their better nature. It is essential to the success of the manipulation that participants in this arm see themselves as participating voluntarily). The oft-replicated finding is that this paradigm affects self-ascribed beliefs in those who defended the thesis under mild situational pressure, but not those paid to write the essay (see Cooper 2007 for review). That is, the former, but not the latter, are significantly more likely to assert agreement with the claim they defended in the essay than matched controls.

These data are best explained by the hypothesis that belief self-ascription is sensitive to cues about our own behavior (Bem 1967; Carruthers 2011). Participants in the mild pressure arm of the experiment are unable to explain their own behavior to themselves (since they take themselves to have voluntarily defended the view) except by supposing that they wanted to write the essay, and that, in turn, is evidence that they believe the claim defended. Participants in the other arm can instead explain their behavior to themselves by reference to the payment they received. In this case, external cues swamp the evidence provided by ground level representations: college students can be expected to have ground-level representations that imply the belief that their tuition should not rise (indeed, control participants overwhelmingly profess that belief).

Choice blindness experiments (Johansson et al. 2005; Hall, Johansson and Strandberg 2012) provide further evidence that we self-ascribe mental states using evidence provided by our own behavior, together with the ground-level representations. In these paradigms, participants are asked to choose between options, with the options represented by cards. The card selected is then placed in a pile along with all the others chosen by that participant. In the next phase of the experiment, the cards are shown to the participants and they are asked why they chose the options they did. Using sleight of hand, however, the experimenters substitute some unchosen options for chosen ones. On most trials, the participants fail to detect the substitutions and proceed to justify their (apparent) choice. Choice blindness has been demonstrated even with regard to real policy choices in a forthcoming election, and even among the respondents who identified themselves as the most committed on the issues (Hall et al. 2013). While these respondents were more likely to detect the substitution, around one third of them defended policies they had in fact rejected.

Again, a plausible explanation of these data is that respondents self-ascribed belief via interpretation. The card they were presented with was drawn from the pile that represented their choices, they believed, so it was evidence that they actually agreed with the policy they were now asked to justify. Of course, the card was not their only evidence that they agreed with the policy. They also had internal evidence: recall of previous discussions about the policy or related issues, of previous experiences related to the policy, of principles to which they take themselves to be committed, and so on. Because they have these other sources of evidence, the manipulation was not effective in all cases. In some cases, individuals had strong evidence that they disagreed with the policy, sufficient to override the external evidence. But in some cases the ground-level representations underdetermined belief ascription (despite their taking themselves to be strongly committed to their view) and the external cue was decisive.

The large literature on processing fluency provides yet more evidence against the naïve view. Processing fluency refers to the subjective ease of information processing. Psychologists typically understand processing fluency as an experiential property: a claim is processed fluently when processing is subjectively easy (Oppenheimer 2008). It may be that fluency is better understood as the absence of an experiential property: that is, a claim is processed fluently just in case there is no experience of disfluency. Disfluency is a metacognitive signal that a claim is questionable and prompts more intensive processing of the claim (Alter, Oppenheimer, Epley & Eyre 2007; Thompson, Prowse Turner & Pennycook 2011). When the claim is processed fluently, on the other hand, we tend to accept it (Reber & Schwarz 1999; Schwartz, Newman & Leach, in press). When a claim is processed fluently, it is intuitive, and the strong default is to accept intuitive claims as true: we self-ascribe belief in claims that are intuitive for us.

(Dis)fluency may be induced by a variety of factors. The content of the claim plays a significant role in the production of disfluency: if the claim is inconsistent with other things that the agent believes and which she is likely to recall at the time (with claim content as a cue for recall), then she is likely to experience disfluency. Thus, the content of ground-level representations and their entailments help to shape fluency. But inconsistency is just one factor influencing fluency, because processing may be more or less difficult for many reasons, some of them independent of claim content. For instance, even the font in which a claim is presented influences processing ease: claims presented in legible, high-contrast fonts are more likely to be accepted than those presented in less legible fonts, even when the content of the claim is inconsistent with the person’s background knowledge (Song & Schwarz 2008).

The effects of disfluency on belief ascription may be significant. Consider the influence of retrieval effort on claim acceptance. Schwartz et al. (1991) asked participants to recall either 6 or 12 occasions on which they had acted assertively. Participants who recalled 12 occasions rated themselves as less assertive than those who recalled 6 instances; presumably the difficulty of recalling 12 occasions was implicitly taken as evidence that such occasions were few and far between, and trumped the greater amount of evidence of assertive behavior available. How these cues are interpreted is modulated by background beliefs. For instance, telling experimental participants that effortfulness of thought is an indicator of its complexity, and therefore of the intelligence of the person who experiences it, may temporarily reverse the disposition to take the experience of effortfulness as a cue to the falsity of a claim (Briñol, Petty & Tormala 2006).

A final example: evidence that a view is held by people with whom they identify may powerfully influence the extent to which participants agree with it. The effect may be sufficiently powerful to overwhelm strong ground-level representations. Maoz et al. (2002) found that attitudes to a peace proposal among their Israeli sample were strongly influenced by information about who had formulated it. Israeli Arabs were more likely to support the proposal if it was presented as stemming from Palestinian negotiators than from the Israeli side, while Israeli Jews were more likely to support it if it was presented as stemming from the Israeli side. Cohen (2003) found that attitudes to welfare policies were more strongly influenced by whether they were presented as supported by House Democrats or House Republicans than by policy content, with Democrats (for example) supportive of quite harsh policies when they were presented as stemming from the side they identified with.

These data are probably explained by a similar mechanism to the choice blindness data. Whereas in the latter people ascribe a belief to themselves on the basis of evidence that they had chosen it, in these experiments they ascribe a belief to themselves on the basis of evidence that people (that they take to be) like them accept it. The effect is powerful enough to override content-based disfluency that may have arisen from consideration of the details of the policies under consideration. It may be that a mechanism of this kind helps to explain why Donald Trump’s supporters are not bothered by some of his views that we might have expected them to find troublesome. Until recently, Russia was regarded as extremely hostile to the United States by most conservative Americans, but Trump’s wish for a friendly relationship has softened their views on the issue.

All this evidence (which is only a subset of the total evidence that might be cited) powerfully indicates that belief ascription does not work the way the naïve view suggests. That, in turn, indicates that representations are not (always) stored neatly, such that they can be compartmentalized from one another: they are not stored reliably and enduringly into kinds. Ground-level representations often underdetermine the beliefs we come to hold. Even when they might reasonably be expected to strongly imply a belief (that my tuition fees should not rise; that our welfare policies should be supportive and not harsh, and so on), contextual cues may swamp them. Even previous endorsement of a claim may not insulate it from revision. Using the classic essay writing paradigm, Bem & McConnell (1970) showed that explicitly asking participants about the topic a week beforehand, and recording their responses in a manner that linked responses to individuals, did not prevent belief revision. Participants denied that their beliefs had changed at all.

All this evidence (and a great deal more) indicates that mental states are not exhaustively and exclusively categorized into kinds, such that we can reliably self-attribute them via self-scanning. While there is no doubt that we self-ascribe beliefs in ways that are pervasively and powerfully shaped by the properties of our ground-level representations, these representations often leave a great deal of leeway for self-ascription. Ground level representations may come to play all kinds of different roles in our cognition and behavior, regardless of how they were acquired.

That, in turn, suggests that the consumption of fiction may lead to the formation of representations that subsequently come to be accepted by the person whose representation they are, even when they did not take the source to be factual. That prediction is, in fact, a retrodiction: there is already good evidence that people come to believe claims made in texts they recognize as fictions.

Breaking Through the Fourth Wall

Let ‘fiction’ refer to two categories of sources of false information. One category is made up of information sources that are either explicitly presented as false (novels, The Onion, and so on) or taken by consumers to be false. The latter is subject-relative, since one person may read The National Enquirer believing it is accurate while another may read it for entertainment value despite believing it to be false. The second category is information consumed as true, but which is subsequently corrected. Both kinds of fiction have effects on agents’ mental states that cannot be accounted for on the naïve view.

A great deal of the information we acquire about the world beyond our direct experience we acquire from fiction. In many cases, such acquisition is unproblematic. Someone may know, for instance, that New York has a subway system solely on the basis of having watched films set in the city. Since fictions usually alter real world settings only when doing so is germane to their plots, the inference from film to real world is very often reliable. We may also acquire beliefs about human psychology from fictions in a way that is unproblematic (Friend 2006). However, we come to acquire beliefs from sources we take to be fictional in a way that we wouldn’t, and shouldn’t, endorse on reflection.

The relevant experiments have typically proceeded as follows. In the experimental conditions, participants read a version of a fictional story in which assertions are made about the world outside the story. The stories differ in the truth of these statements, so that some participants get a version in which a character states, for example, that mental illness is contagious while others get a version in which they state that mental illness is not contagious (control subjects, meanwhile, read a story in which no claims about the target propositions are made). After a filler task, participants are given a general knowledge quiz, in which they are asked about the target propositions (e.g., is mental illness contagious?) The participants who read a version containing the false assertion are significantly more likely to assert it than those who read a version containing the true assertion or who read the control version (this description is based on Prentice, Gerrig & Bailis 1997; Wheeler, Green & Brock 1999 report a replication). Other studies produced the same results using a slightly different methodology; rather than having the true or false propositions asserted, they are mentioned as peripheral narrative details (e.g. Marsh & Fazio 2006). Again, participants are significantly more likely to accept claims presented in the fiction as true in the real world.

More troublingly still, we may be more inclined to accept claims made in a fiction than identical claims made in a passage presented as factual (Prentice & Gerrig 1999; Strange 2002). Moreover, factors known to reduce acceptance of claims presented as factual do not significantly reduce reliance on claims presented as fictional. Need for cognition, the personality trait of being disposed to engage in effortful thought, is protective against false information in other contexts, but not in the fictional context (Strange 2002). Even when participants are warned that the stories may contain false information (Marsh & Fazio 2006) or when stories are presented slowly to allow for intensive processing (Fazio & Marsh 2008), acceptance of false claims does not decrease.

We are much less likely to acquire false information from fantastic fiction (Rapp et al. 2014), probably because its claims are not easily integrated with our existing model of the world. But when fictions are consistent with what we know of the world, false beliefs are often acquired (of course fake news is designed to be compatible with what we know about the real world: It concerns real people, often acting in line with their real motivations and in ways that are generally possible). Worse, when false beliefs are acquired people may forget their source: information acquired from fiction is sometimes subsequently misattributed to reliable sources (Marsh, Cantor & Brashier 2016), or held to be common knowledge. This may occur even when the claim is in fact inconsistent with common knowledge (Rapp 2016).

We are therefore at risk of acquiring false beliefs from fiction; when those fictions are fake news, the beliefs we acquire may be pernicious. However acquired, these beliefs may prove resistant to correction. In fact, corrections rarely if ever eliminate reliance on misinformation. Sometimes agents rely on the misinformation subsequent to correction because they reject the correction. Sometimes they accept the correction and yet continue to act on the corrected belief. I begin with the former kind of case.

The phenomenon of belief perseverance has long been known to psychologists. Classical demonstrations of belief perseverance involve giving people feedback on how well they are doing at a task, leading them to form a belief about their abilities. They are subsequently informed that the feedback was scripted and did not track their actual performance. This information undercuts their evidence for their belief but does not lead to its rejection: participants continue to think that they are better than average at the task when they have been assigned to the positive feedback condition (Ross, Lepper & Hubbard 1975). Wegner, Coulton, & Wenzlaff (1985) demonstrated that telling people beforehand that the feedback would be unrelated to their actual performance—i.e., fictitious—did not prevent it from leading to beliefs that reflected its contents.

Research using different paradigms has demonstrated that even when people remember a retraction, they may continue to cite the retracted claim in explaining events (Fein, McCloskey, & Tomlinson 1997; Ecker, Lewandowsky, Swire, & Chang 2011). In fact, corrections sometimes backfire, leaving agents more committed to false claims than before. The most famous demonstration of the backfire effect is Nyhan and Reifler (2010; see Schwartz et al. 2007 for an earlier demonstration of how the attempt to debunk may increase belief in the false claim). They gave participants mock news articles, which contained (genuine) comments from President Bush implying that Iraq had an active weapons of mass destruction program at the time of the US invasion. In one condition, the article contained an authoritative correction, from the (also genuine) congressional inquiry into Iraqi WMDs held subsequent to the invasion. Participants were then asked to indicate their level of agreement with the claims that Iraq had stockpiles of WMDs and an active WMD development program at the time of the invasion. For conservative participants, the correction backfired: they expressed higher levels of agreement with the claim than conservative peers whose false belief was not corrected. Since Nyhan and Reifler’s initial demonstration of the backfire effect, these results have been replicated multiple times (see Peter & Koch 2016 for review).[4]

Even when a correction succeeds in changing people’s professed beliefs, they may exhibit a behavioural backfire. Nyhan, Reifler, Richey & Freed (2014) found that correcting the myth that vaccines cause autism was effective at the level of belief, but actually decreased intention to have one’s children vaccinated among parents who were initially least favourable to vaccines. Nyhan and Reifler (2015) documented the same phenomenon with regard to influenza vaccines. Continued reliance on information despite explicit acknowledgement that it is false is likely to be strongest with regard to emotionally arousing claims, especially those that are negatively valenced (e.g., arousing fear or disgust). There is extensive evidence that children’s behavior is influenced by pretence. In the well-known box paradigm, children are asked to imagine that there is a fearsome creature in one box and a puppy in another. Young children are quick to acknowledge that the creatures are imaginary, but prefer to approach the box with the imagined puppy rather than the one with the imagined creature (Harris et al. 1991; Johnson and Harris 1994). They may exhibit similar behavior even when the box is transparent and they can see it is empty (Bourchier and Davis 2000; see Weisberg 2013 for discussion of the limitations of this research). Emotionally arousing claims are also those that are most likely to be transmitted (Peters, Kashima & Clark 2009). Of course, fake news is often emotionally arousing in just these ways. Such news can be expected to proliferate and to affect behavior.

Despite our knowing that we are consuming fiction, its content may affect our beliefs in ways that cannot be accounted for by the naïve view. Perhaps worse, these contents may continue to influence our beliefs and (somewhat independently) our behavior even if and when they are retracted. This evidence indicates that when we acquire ground level representations from fiction, recognizing that the source is fictional and exposure to fact checking may not prevent us from acquiring false beliefs that directly reflect its contents, or from having our behavior influenced by them. Even for sophisticated consumers, the consumption of fiction may be risky. This is especially so for fake news, given that it has features that make fictional transfer more likely. In particular, fake news is realistic, inasmuch as it portrays real people, acting in line with their genuine motivations, in circumstances that closely resemble the real world; and it is emotionally arousing, making it more memorable and more likely to be transmitted and repeated. If it is in addition absorbing, we are especially likely to acquire false beliefs from it.

How Fake News Parasitizes Belief and Behavior

When we consume information, we represent the events described to ourselves. These representations might be usefully thought of as ways the world might be. Once these representations are formed, they may persist. In fact, though we may forget such information rapidly, some of these representations are very long-lasting and survive retraction: coming to accept inconsistent information does not lead to older representations being overwritten. These representations persist, continuing to shape the beliefs we ascribe to ourselves, the ways in which we process further information, and our behavior.

As we saw above, we acquire beliefs that directly reflect the content of the fictions we consume. We may therefore expect to acquire beliefs from that subset of fiction that is fake news. One way this may occur is through memory-based mechanisms. Sophisticated readers may be especially wary of any claim that they recall came from a fake news site, but source knowledge and object knowledge are stored separately and may dissociate; readers may fail to recall the source of the claim when its content comes to mind (Pratkanis et al. 1988; Lewandowsky et al. 2012). Worse, they may misattribute the claim to a reliable source or even to common knowledge (Marsh, Cantor & Brashier 2016; Rapp 2016). These effects are particularly likely with regard to details of the fake news story that are apparently peripheral to the story, about which the exercise of vigilance is harder and likely less effective. If the person does come to ascribe the belief to themselves, they will then have further evidence for future self-ascriptions: that very act of self-ascription. The belief will now resist disconfirmation.

We may also acquire beliefs from fiction through fluency effects. Repetition of a claim powerfully affects fluency of processing (Begg, Anas & Farinacci 1992; Weaver et al. 2007). This effect may lead to the agent accepting the original claim when she has forgotten its source. Even when repetition is explicitly in the service of debunking a claim, it may result in higher levels of acceptance by promoting processing fluency (Schwarz et al. 2007). The influence of repetition may persist for months (Brown & Nix 1996), increasing the probability that the source of a repeated claim may be forgotten. All these effects may lead to even careful consumers coming to accept claims that originate in fake news sites, despite a lack of evidence in their favour. Because the claim will be misattributed to common knowledge or a reliable source, introspection cannot reveal the belief’s origins.

There are steps we can take to decrease the likelihood of our acquisition of false claims from fiction, which may form the basis of techniques for decreasing transfer from fake news too. Online monitoring of information, in order to tag it as false as soon as it is encountered, reduces acquisition of false information (Marsh & Fazio 2006). While these steps would likely be somewhat effective, there are reasons to think that a significant problem would nevertheless persist even with their adoption. First, in near optimal conditions for the avoidance of error, Marsh and Fazio found that the manipulation reduced, rather than eliminated, the acquisition of false claims from fiction. Second, the measures taken are extremely demanding of time and resources. Marsh and Fazio required their participants to make judgments about every sentence one by one, before the next sentence was displayed. More naturalistic reading is likely to produce the kind of immersion that is known to dispose to the acquisition of false claims from fiction (Green & Brock 2000; Lewandowsky et al. 2012). Third, Marsh and Fazio measured the extent of acquisition of false claims from fiction soon after the fiction was read and the error tagged, thereby greatly reducing the opportunity for dissociations in recall between the claim content and the discounting cue. We should expect a sleeper effect, with an increase in acquisition over time. Finally, Marsh and Fazio’s design can be expected to have little effect on the fluency with which the claims made were processed. As we have seen, repetition increases fluency. But many of the claims made in fake news are encountered multiple times, thereby increasing processing fluency and promoting an illusion of truth.

On the other hand, many sophisticated consumers of fake news come to it with fiercely partisan attitudes toward the claims made. They expect to encounter not merely false claims, but glaringly and perniciously false claims. It is reasonable to expect this attitude to be protective.[5] Moreover, it should be obvious that we routinely encounter fake news or egregiously false claims without coming to believe them. When we think of such claims (about the Bowling Green attack, for instance), we think of false claims we recognize as false. Confidence that we can consume fake news without acquiring false beliefs from it should be tempered by recognition of the impossibility of identifying candidate beliefs, since we are unable to identify false claims we take to be true and we are likely to misattribute claims we do acquire. Nevertheless, there is no doubt that we routinely succeed in rejecting the claims we read on such sites. But that doesn’t entail that these claims don’t have pernicious effects on our cognition and subsequent behavior.

There is good reason to believe that even when we succeed in rejecting the claims that we encounter in fake news, those claims will play a role in our subsequent belief acquisition in ways that reflect their content. Even when they are not accepted, claims are available to shape beliefs in a similar (and for some purposes identical) way to claims that the person accepts. As noted above, successfully retracted claims are not overwritten, and their continuing influence on cognitive processing has been demonstrated. O’Brien, Cook & Guéraud (2010) found that information inconsistent with retracted claims was processed more slowly than other information, indicating that the retracted claims continue to play an active role in how the text is comprehended, despite the fact that the readers fully accepted the retraction. Representations like these may shape how related information is processed, even (perhaps especially) when they are not explicitly recalled. There are at least three pathways whereby this may occur: one fluency-based, one via the activation of related information, and one through the elicitation of action tendencies.

First, the fluency-based mechanism: An agent who succeeds in recalling that the claim that Hillary Clinton is a criminal stems from a fake news site, and therefore does not self-ascribe belief in the claim, may nevertheless process claims like Hillary Clinton is concerned only with her own self-interest more fluently, because the semantic content of the first representation makes the second seem more familiar and therefore more plausible. The more familiar we are with a false claim, even one we confidently identify as false, the more available it is to influence processing of semantically related claims and thereby fluency. Independent of fluency, moreover, the activation of semantically or associatively related information plays a characteristic role in cognitive processing. Representations prime other representations, and that biases cognition. It influences what else comes to mind and therefore what claims come to be weighed in deliberation (negative false claims about Clinton may preferentially prime the recall of negative true claims about her, say, that she voted in favor of the war in Iraq, and thereby influence deliberation about her). Without the false prime, the person may have engaged in more even-handed deliberation. Priming with fake news might instead result in her deciding to abstain from voting, rather than support ‘the lesser evil’. Sufficiently prolonged or repeated exposure to fake news about a person might result in the formation of implicit biases against her, in the same way in which, plausibly, implicit biases against women or minorities arise, at least in part, from their negative portrayal in explicitly labelled fictions (Kang 2012).

While it is unclear whether the mechanism is fluency-based or content-based, there is experimental evidence that suggests that claims known from the start to be false play a role in information processing. For instance, Gilbert, Tafarodi, and Malone (1993) had participants read crime reports, which contained some information marked (by font color) as false. In one condition, the false information was extenuating; in the other, it was exacerbating. Participants who were under cognitive load or time pressure when reading the information judged that the criminal should get a longer sentence when the false information was exacerbating and a shorter sentence when the false information was extenuating. At longer delays, it is likely that those who were not under load would be influenced by the information, even if they continued to recognize it as false. Its availability would render related information accessible and more fluently processed, or activate it so that it played its characteristic role in processing, affecting downstream judgments.

Fictions also elicit action tendencies. As we saw above, scenarios that children recognize to be imaginary affect how they behave. They are, for instance, reluctant to approach a box in which they had imagined there was a monster, despite being confident that it was only make-believe (Harris et al. 1991; Johnson and Harris 1994), and even when they can see for themselves that the box is empty (Bourchier and Davis 2000). There is no reason to think that these kinds of effects are limited to children. Many people in fact seek out fiction at least partly in order to experience strong emotions with associated action tendencies. We might go to the cinema to be moved, to be scared, to be exhilarated, all by events we know to be fictional; these emotions dispose us, at least weakly, to respond appropriately. We may cry, flinch away, even avert our gaze, and these action tendencies may persist for some time after the film’s end.[6]

The offline stimulation of mechanisms for simulation and the elicitation of action tendencies is pleasurable and may even be adaptive in highly social beings like us. It is also risky. When we simulate scenarios we know (or should know) to be false, we elicit action tendencies in ourselves that may be pernicious. Fake news might, for instance, retail narratives of minorities committing assaults. We may reject the content of these claims, but nevertheless prime ourselves to respond fearfully to members of the minority group. Repeated exposure may result in the formation of implicit biases, which are themselves ground-level representations. These representations, short or long term, play a distinctive role in cognition too, influencing decision-making.

Conclusion

In this paper, I have argued that fake news poses dangers for even its sophisticated consumers. It may lead to the acquisition of beliefs about the world that directly reflect its content. When this happens, we may misattribute the belief to a reputable source, or to common knowledge. Beliefs, once acquired, resist retraction. We do better to avoid acquiring them in the first place.

I have conceded that we routinely succeed in rejecting claims made by those who purvey fake news. That may suggest that the threat is small. Perhaps the threat of belief acquisition is small; I know of no data that gives an indication of how often we acquire such beliefs or how consequential such beliefs are, and introspection is an unreliable guide to the question. I have also argued, however, that even when we succeed in consuming fake news without coming to acquire beliefs that directly reflect its content (surely the typical case), the ground level representations will play a content-reflecting role in our further cognition, in ways that may be pernicious. Cognitive sophistication may not be protective against fake news. Need for cognition (a trait on which academics score very highly) is not protective against the acquisition of beliefs from fiction (Strange 2002). There is also evidence that higher levels of education and of reflectiveness may correlate with higher levels of credulousness about claims that agents want to believe. For example, higher levels of education among Republicans are associated with higher levels of belief that Obama is a Muslim, not lower (Lewandowsky et al. 2012), and with higher degrees of scepticism toward climate change (Kahan 2015). This may arise from what Taber & Lodge (2006) call the sophistication effect, whereby being more knowledgeable provides more ammunition with which to counter unpalatable claims.

I have not argued that the dangers of fake news outweigh the benefits that may arise from reading it. Perhaps these benefits are sufficiently great that its consumption is, all things considered, justifiable. This paper is a first step toward assessing that claim. There is a great deal more we need to know to assess it. For instance, we have little data concerning the extent to which the partisan attitude of those people who consume fake news in order to discover just how it is false may be protective. To show that the dangers are unexpectedly large is to show that gathering that data, as well as assessing the benefits of the consumption of fake news, is an unexpectedly urgent task.[7]

References

Allcott, H. & Gentzkow, M. 2017. “Social Media and Fake News in the 2016 Election.” NBER Working Paper No. 23089. National Bureau of Economic Research. http://www.nber.org/papers/w23089.

Alter, A.L., Oppenheimer, D.M., Epley, N. & Eyre, R.N. 2007. “Overcoming Intuition: Metacognitive Difficulty Activates Analytic Reasoning.” Journal of Experimental Psychology: General 136: 569-576.

Begg, I. M., Anas, A. & Farinacci, S. 1992. “Dissociation of Processes in Belief: Source Recollection, Statement Familiarity, and the Illusion of truth.” Journal of Experimental Psychology: General 121: 446-458.

Bem, D. J. 1967. “Self-Perception: An Alternative Interpretation of Cognitive Dissonance Phenomena.” Psychological Review 74: 183-200.

Bem, D. J., & McConnell, H. K. 1970. “Testing the Self-Perception Explanation of Dissonance Phenomena: On the Salience of Premanipulation Attitudes.” Journal of Personality and Social Psychology 14: 23-31.

Bourchier, A., & Davis, A. 2000. “Individual and Developmental Differences in Children’s Understanding of the Fantasy-Reality Distinction.” British Journal of Developmental Psychology 18: 353–368.

Briñol, P., Petty, R.E., & Tormala, Z.L. 2006. “The Malleable Meaning of Subjective Ease.” Psychological Science 17: 200-206.

Brown, A. S., & Nix, L. A. 1996. “Turning Lies into Truths: Referential Validation of Falsehoods.” Journal of Experimental Psychology: Learning, Memory, and Cognition 22: 1088-1100.

Buckner, C. 2011. “Two Approaches to the Distinction Between Cognition and ‘Mere Association’.” International Journal of Comparative Psychology 24: 314-348.

Camp, E. 2009. “A Language of Baboon Thought?” In Robert Lurz (ed.) Philosophy of Animal Minds, 108-127. New York: Cambridge University Press.

Carruthers, P. 2011. The Opacity of Mind. Oxford: Oxford University Press.

Cheney, D.L. & Seyfarth, R.M. 2007. Baboon Metaphysics: The Evolution of a Social Mind. Chicago: University of Chicago Press.

Cohen, G.L. 2003. “Party Over Policy: The Dominating Impact of Group Influence on Political Beliefs.” Journal of Personality and Social Psychology 85: 808-822.

Cooper J. 2007. Cognitive Dissonance: Fifty Years of a Classic Theory. Los Angeles: Sage Publications.

Ecker, U. K. H., Lewandowsky, S., Swire, B., & Chang, D. 2011. “Correcting False Information in Memory: Manipulating the Strength of Misinformation Encoding and its Retraction.” Psychonomic Bulletin & Review 18: 570–578.

Eslick, A. N., Fazio, L. K., & Marsh, E. J. 2011. “Ironic Effects of Drawing Attention to Story Errors.” Memory 19: 184–191.

Fazio, L. K., & Marsh, E. J 2008. “Slowing Presentation Speed Increases Illusions of Knowledge.” Psychonomic Bulletin and Review 15: 180-185.

Fein, S., McCloskey, A. L., & Tomlinson, T. M. 1997. “Can the Jury Disregard That Information? The Use of Suspicion to Reduce the Prejudicial Effects of Pretrial Publicity and inadmissible Testimony.” Personality and Social Psychology Bulletin 23: 1215–1226.

Friend, S. 2006. “Narrating the Truth (More or Less).” In Matthew Kieran and Dominic McIver Lopes (eds.) Knowing Art: Essays in Aesthetics and Epistemology, 35–49. Dordrecht: Springer.
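
Gilbert, D. T., Tafarodi, R. W., & Malone, P. S. 1993. “You Can’t Not Believe Everything You Read.” Journal of Personality and Social Psychology 65: 221-233.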

Golomb, C. and Kuersten, R. 1996. “On the Transition from Pretense Play to Reality.” British Journal of Developmental Psychology 14: 203–217.

Green, M. C. & Brock, T.C. 2000. “The Role of Transportation in the Persuasiveness of Public Narrative.” Journal of Personality and Social Psychology 79: 701-21.

Hall, L., Johansson, P., & Strandberg, T. 2012. “Lifting the Veil of Morality: Choice Blindness and Attitude Reversals on a Self-Transforming Survey.” PloS One 7(9): e45457.

Hall, L., Strandberg, T., Pärnamets, P., Lind, A., Tärning, B. and Johansson, P. 2013. “How the Polls Can be Both Spot On and Dead Wrong: Using Choice Blindness to Shift Political Attitudes and Voter Intentions.” PLoS One 8(4): e60554.

Harris, P. L., Brown, E., Marriot, C., Whittal, S., & Harmer, S. 1991. “Monsters, Ghosts, and Witches: Testing the Limits of the Fantasy-Reality Distinction in Young Children.” Developmental Psychology 9: 105–123.

Holcombe, M. 2017. “Reading, Writing, Fighting Fake News.” CNN, March 29. https://goo.gl/5vNhZu.

Irvine, D. 2017. “USA Today Duped by North Korean Parody Twitter Account.” Accuracy in Media, March 31. https://goo.gl/zdhVYe.

Johansson, P., Hall, L., Sikström, S., & Olsson, A. 2005. “Failure to Detect Mismatches Between Intention and Outcome in a Simple Decision Task.” Science 310: 116–9.

Johnson, C. N. & Harris, P. L. 1994. “Magic: Special but not Excluded.” British Journal of Developmental Psychology 12: 35–51.

Kahan, D.M. 2015. “Climate-Science Communication and the Measurement Problem.” Advances in Political Psychology 36: 1-43.

Kang, J. 2012. “Communications Law: Bits of Bias.” In J. D. Levinson & R. J. Smith (Eds.), Implicit Racial Bias Across the Law (pp. 132-145). Cambridge, MA: Cambridge University Press.

Kharpal, A. 2017. “The Daily Mail has ‘mastered the art of running stories that aren’t true’, Wikipedia founder Jimmy Wales says.” CNBC, 19 May. https://goo.gl/rybqvx.

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. 2012. “Misinformation and its Correction: Continued Influence and Successful Debiasing.” Psychological Science in the Public Interest 13: 106–131.

Lynch, M.P. 2016. “Fake News and the Internet Shell Game.” New York Times, November 28.

Maoz, I., A. Ward, M. Katz, and L. Ross 2002. “Reactive Devaluation of an ‘Israeli’ vs. ‘Palestinian’ Peace Proposal.” Journal of Conflict Resolution 46: 515-546.

Marsh, E.J. & Fazio, L.K. 2006. “Learning Errors from Fiction: Difficulties in Reducing Reliance on Fictional Stories.” Memory & Cognition 34: 1141-1149.

Marsh, E. J., Cantor, A. D, & Brashier, N. M. 2016. “Believing that Humans Swallow Spiders in their Sleep: False Beliefs as Side Effects of the Processes that Support Accurate Knowledge.” In B. Ross (Ed.) The Psychology of Learning and Motivation 64: 93-132.

McIntyre, L. 2015. Respecting Truth. New York: Routledge.

Nyhan, B. and Reifler, J. 2010. “When Corrections Fail: The Persistence of Political Misperceptions.” Political Behavior 32: 303-330.

Nyhan, B., Reifler, J., Richey, S. & Freed, G.L. 2014. “Effective Messages in Vaccine Promotion: A Randomized Trial.” Pediatrics 133: e835-e842.

Nyhan, B. and Reifler J. 2015. “Does Correcting Myths about the Flu Vaccine Work? An Experimental Evaluation of the Effects of Corrective Information.” Vaccine 33: 459-464.

O’Brien, E. J., Cook, A. E., & Guéraud, S. 2010. “Accessibility of Outdated Information.” Journal of Experimental Psychology: Learning, Memory, and Cognition 36: 979–991.

Oppenheimer, D.M. 2008. “The Secret Life of Fluency.” Trends in Cognitive Science 2: 237–241.

Orlando, J. 2017. “How to Help Kids Navigate Fake News and Misinformation Online.” ABC News, 26 June. https://goo.gl/BnSaaG.

Peter, C. & Koch, T. 2016. “When Debunking Scientific Myths Fails (and When It Does Not): The Backfire Effect in the Context of Journalistic Coverage and Immediate Judgments as Prevention Strategy.” Science Communication 38: 3-25.

Peters, K., Kashima, Y., & Clark, A. 2009. “Talking about Others: Emotionality and the Dissemination of Social Information.” European Journal of Social Psychology 39: 207–222.

Pratkanis, A.R., A.G. Greenwald, M.R. Leippe, and M.H. Baumgardner. 1988. “In Search of Reliable Persuasion Effects: III. The Sleeper Effect is Dead. Long Live the Sleeper Effect.” Journal of Personality and Social Psychology 54: 203–218.

Prentice, D.A., Gerrig, R.J. & Bailis, D.S. 1997. “What Readers Bring to the Processing of Fictional Texts.” Psychonomic Bulletin & Review 4: 416-420.

Prentice, D.A. & Gerrig, R.J. 1999. “Exploring the Boundary Between Fiction and Reality.” In Shelly Chaiken & Yaacov Trope (eds.) Dual Process Theories in Social Psychology, 529-546. New York: Guilford Press.

Rapp, D. N., Hinze, S. R., Slaten, D. G., & Horton, W. S. 2014. “Amazing Stories: Acquiring and Avoiding Inaccurate Information from Fiction.” Discourse Processes 1–2: 50–74.

Rapp, D.N. 2016. “The Consequences of Reading Inaccurate Information.” Current Directions in Psychological Science 25: 281–285.

Reber, R., & Schwarz, N. 1999. “Effects of Perceptual Fluency on Judgments of Truth.” Consciousness and Cognition 8: 338–342.

Ross, L., Lepper, M. R. & Hubbard, M. 1975. “Perseverance in Self-Perceptions and Social Perception: Biased Attributional Processing in the Debriefing Paradigm.” Journal of Personality and Social Psychology 32: 880–892.

Schwarz, N., H. Bless, F. Strack, G. Klumpp, H. Rittenauer-Schatka & A. Simons 1991. “Ease of Retrieval as Information: Another Look at the Availability Heuristic.” Journal of Personality and Social Psychology 61: 195.

Schwarz, N., Sanna, L. J., Skurnik, I., & Yoon, C. 2007. “Metacognitive Experiences and the Intricacies of Setting People Straight: Implications for Debiasing and Public Information Campaigns.” Advances in Experimental Social Psychology 39: 127-161.

Schwarz, N., Newman, E. J., & Leach, W. In press. “Making the Truth Stick and the Myths Fade: Lessons from Cognitive Psychology.” Behavioral Science and Policy.

Silverman, C. & Singer-Vine, J. 2016. “Most Americans Who See Fake News Believe It, New Survey Says.” Buzzfeed News, December 7. https://goo.gl/im9AF5.

Song, H., & Schwarz, N. 2008. “Fluency and the Detection of Distortions: Low Processing Fluency Attenuates the Moses Illusion.” Social Cognition 26: 791–799.

Stich, S. 1978. “Beliefs and Subdoxastic States.” Philosophy of Science 45: 499–518.

Strange, J.J. 2002. “How Fictional Tales Wag Real-World Beliefs.” In M.C. Green, J.J. Strange and T.C. Brock (eds.), Narrative Impact: Social and Cognitive Foundations, 263-86. Marwah, NJ: Erlbaum.

Suddendorf, T. & Corballis, M.C. 2008. “The Evolution of Foresight: What is Mental Time Travel and is it Unique to Humans?” Behavioural and Brain Sciences 30: 299–313.

Suddendorf, T., Addis, D. R. & Corballis, M. C. 2011. “Mental Time Travel and the Shaping of the Human Mind.” In Moshe Bar (ed.) Predictions in the Brain: Using our Past to Generate a Future, 344-354. New York: Oxford University Press.

Taber, C. S., & Lodge, M. 2006. “Motivated Skepticism in the Evaluation of Political Beliefs.” American Journal of Political Science 50: 755–769.

Thompson, V. A., Prowse Turner, J. & Pennycook, G. 2011. “Intuition, Reason, and Metacognition.” Cognitive Psychology 63: 107–140.

Weaver, K., Garcia, S. M., Schwarz, N., & Miller, D. T. 2007. “Inferring the Popularity of an Opinion from its Familiarity: A Repetitive Voice Can Sound Like a Chorus.” Journal of Personality and Social Psychology 92: 821-833.

Wegner, D.M., Coulton, G. F. & Wenzlaff, R. 1985. “The Transparency of Denial: Briefing in the Debriefing Paradigm.” Journal of Personality and Social Psychology 49: 338–346.

Weisberg, D.S. 2013. “Distinguishing Imagination from Reality.” In M. Taylor (ed) Oxford Handbook of the Development of Imagination. Oxford: Oxford UP, 75-93.

Wheeler, C., Green, M.C. & Brock, T.C. 1999. “Fictional Narratives Change Beliefs: Replications of Prentice, Gerrig, and Bailis (1997) with Mixed Corroboration.” Psychonomic Bulletin & Review 6: 136–141.

Wood, T. and Porter, E. 2016. “The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence.” August 5, 2016. Available at SSRN: https://ssrn.com/abstract=2819073.

[1] This definition is intended to fix the reference for discussion, not serve as a set of necessary and sufficient conditions. While there may be interesting philosophical work to do in settling difficult questions about whether a particular organization or a particular item is or is not an instance of fake news, this is not work I aim to undertake here. We can make a great deal of progress on both the theoretical and the practical challenges posed by fake news without settling these issues.

[2] It is difficult to find an explicit defence of this claim. I suspect, in fact, it is taken for granted to such an extent that it does not occur to most writers that it needs a defence. In addressing the dangers of fake news, however, they focus exclusively or near exclusively on the extent to which people are duped by it (see, for instance, Silverman & Singer-Vine 2016; McIntyre 2015). Lynch (2016) expands the focus of concern slightly, from being taken in by fake news to becoming doubtful over its truth. On the other hand, the solutions these writers propose for the problem are better fact checking and increased media literacy (Orlando 2017; Holcombe 2017).

[3] We acquire many beliefs about the world from reading fiction, but only some of those beliefs directly reflect the content of the claims made in the fiction. For example, from reading Tristram Shandy I might learn that 18th century novels are sometimes rather long, that they could be surprisingly bawdy, and (putative) facts about Marlborough’s battles. Only the last belief is a belief about the world outside the fiction that directly reflects the contents of the claims made in the fiction. The first reflects the formal properties of the novel; the second reflects its content but not directly (the book neither claims, nor implies, that 18th century novels could be bawdy).

[4] It is possible that the backfire effect is very much less common than many psychologists fear. Wood and Porter (2016) conducted four experiments with a large number of participants, and failed to produce a backfire effect for any item other than the Iraq WMDs correction. It is unclear, however, whether these experiments provide strong evidence against the backfire effect. First, Wood and Porter presented the claim to be corrected and the correction together, and probed participants’ beliefs immediately afterwards. The backfire effect seems to be strongest after a delay of at least several days (Peter and Koch 2016). The evidence may also be compatible with there being a strong backfire effect for corrections given at around the same time judgments are made. The reason is this: Wood and Porter deliberately aimed mainly at correcting a false impression that might arise from the (genuine) words of the politicians they aimed to correct, not at correcting the literal meaning of their claims. For example, they quote Hillary Clinton as saying “Between 88 and 92 people a day are killed by guns in America. It’s the leading cause of death for young black men, the second leading cause for young Hispanic men, the fourth leading cause for young white men. This epidemic of gun violence knows no boundaries, knows no limits, of any kind.” The correction given was: “In fact, according to the FBI, the number of gun homicides has fallen since the mid 1990s, declining by about 50% between 1994 and 2013.” Subjects were asked to agree or disagree on a five-point scale with “The number of gun homicides is currently at an all-time high”. Answering “disagree” to this question (that is, giving the answer that Wood and Porter take to be supported by the “correction”) is compatible with thinking that everything Clinton said was true, because her claims and the correction are logically compatible. Accepting the “correction” therefore does not require one to disagree with someone with whom partisans might identify. It may be that the backfire effect concerning judgments made without the opportunity for memory dissociations is limited to, or strongest with regard to, directly conflicting statements. Bolstering this interpretation of the results reported by Wood and Porter is the fact that they replicated the backfire effect for the original WMDs in Iraq case, and subsequently eliminated the backfire effect by giving respondents an option which allowed them to accept the correction without contradicting the literal meaning of President Bush’s words. Finally, it should be noted that Wood and Porter’s corrections did not eliminate reliance on false information. The corrections they provided still left the most partisan quite firmly convinced (though somewhat less than they would otherwise have been) that the false implication was in fact true. Thus, they did not demonstrate the “steadfast factual adherence” of the title of their paper.

[5] I owe this point to Jason D’Cruz. It should be noted that there is to my knowledge no data on whether a partisan attitude of the kind described is protective; given that the discoveries made by cognitive science are sometimes counterintuitive, we cannot be very confident that the reasonable presumption that it is protective is true.

[6] Plausibly, these phenomena arise because fictions parasitize (or exapt) mechanisms designed for behavioural control. That is, the creation and consumption of fictional narrative utilizes machinery that evolved for assessing counterfactuals in the service of decision-making. Cognitive scientists refer to our capacity to reconstruct the past and construct the future as mental time travel (see Suddendorf & Corballis 2008 for a review of supporting evidence). This machinery is adaptive, because it allows us to utilize stored knowledge to prepare for future contingencies (Suddendorf, Addis & Corballis 2011). It is this machinery, operating offline, that is used for the simulation of counterfactuals and the construction of fictions for entertainment purposes. Because this machinery is designed to prepare us to respond adaptively, it is closely linked to action tendencies.

[7]  I am grateful to an audience at the Groupe de Recherche Interuniversitaire sur la Normativité, Montreal for helpful comments. Jason D’Cruz provided extensive and extremely helpful comments on a draft of the paper.