Technology and Evil

Author Information: Brian Martin, University of Wollongong, bmartin@uow.edu.au.

Martin, Brian. “Technology and Evil.” Social Epistemology Review and Reply Collective 8, no. 2 (2019): 1-14.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-466

A Russian Mil Mi-28 attack helicopter.
Image by Dmitri Terekhov via Flickr / Creative Commons


Humans cause immense damage to each other and to the environment. Steven James Bartlett argues that humans have an inbuilt pathology that leads to violence and ecosystem destruction that can be called evil, in a clinical rather than a religious sense. Given that technologies are human constructions, it follows that technologies can embody the same pathologies as humans. An important implication of Bartlett’s ideas is that studies of technology should be normative in opposing destructive technologies.

Introduction

Humans, individually and collectively, do a lot of terrible things to each other and to the environment. Some obvious examples are murder, torture, war, genocide and massive environmental destruction. From the perspective of an ecologist from another solar system, humans are the world’s major pestilence, spreading everywhere, enslaving and experimenting on a few species for their own ends, causing extinctions of numerous other species and destroying the environment that supports them all.

These thoughts suggest that humans, as a species, have been causing some serious problems. Of course there are many individuals and groups trying to make the world a better place, for example campaigning against war and environmental degradation, and fostering harmony and sustainability. But is it possible that by focusing on what needs to be done and on the positives in human nature, the seriousness of the dark side of human behaviour is being neglected?

Here, I address these issues by looking at studies of human evil, with a focus on a book by Steven Bartlett. With this foundation, it is possible to look at technology with a new awareness of its deep problems. This will not provide easy solutions but may give a better appreciation of the task ahead.

Background

For decades, I have been studying war, ways to challenge war, and alternatives to military systems (e.g. Martin, 1984). My special interest has been in nonviolent action as a means for addressing social problems. Along the way, this led me to read about genocide and other forms of violence. Some writing in the area refers to evil, addressed from a secular, scientific and non-moralistic perspective.

Roy Baumeister (1997), a prominent psychologist, wrote a book titled Evil: Inside Human Violence and Cruelty, which I found highly insightful. Studying the psychology of perpetrators, ranging from murderers and terrorists to killers in genocide, Baumeister concluded that most commonly they feel justified in their actions and see themselves as victims. Often they think what they’ve done is not that important. Baumeister’s sophisticated analysis aims to counter the popular perception of evil-doers as malevolent or uncaring.

Baumeister is one of a number of psychologists willing to talk about good and evil. If the word evil feels uncomfortable, then substitute “violence and cruelty,” as in the subtitle of Baumeister’s book, and the meaning is much the same. It’s also possible to approach evil from the viewpoint of brain function, as in Simon Baron-Cohen’s (2011) The Science of Evil: On Empathy and the Origins of Cruelty. There are also studies that combine psychiatric and religious perspectives, such as M. Scott Peck’s (1988) People of the Lie: The Hope for Healing Human Evil.

Another part of my background is technology studies, including being involved in the nuclear power debate, studying technological vulnerability, communication technology, and technology and euthanasia, among other topics. I married my interests in nonviolence and in technology by studying how technology could be designed and used for nonviolent struggle (Martin, 2001).

It was with this background that I encountered Steven James Bartlett’s (2005) massive book The Pathology of Man: A Study of Human Evil. Many of the issues it addresses, for example genocide and war, were familiar to me, but his perspective offered new and disturbing insights. The Pathology of Man is more in-depth and far-reaching than other studies I had encountered, and is worth bringing to wider attention.

Here, I offer an abbreviated account of Bartlett’s analysis of human evil. Then I spell out ways of applying his ideas to technology and conclude with some possible implications.

Bartlett on Evil

Steven James Bartlett is a philosopher and psychologist who for decades studied problems in human thinking. The Pathology of Man was published in 2005 but received little attention. This may partly be due to the challenge of reading an erudite 200,000-word treatise but also partly due to people being resistant to Bartlett’s message, for the very reasons expounded in his book.

In reviewing the history of disease theories, Bartlett points out that in previous eras a wide range of conditions were considered to be diseases, ranging from “Negro consumption” to anti-Semitism. This observation is part of his assessment of various conceptions of disease, relying on standard views about what counts as disease, while emphasising that judgements made are always relative to a framework that is value-laden.

This is a sample portion of Bartlett’s carefully laid out chain of logic and evidence for making a case that the human species is pathological, that is, characterised by disease. In making this case, he is speaking not metaphorically but clinically. The fact that the human species has seldom been seen as pathological is due to humans adopting a framework that exempts themselves from this diagnosis, which would be embarrassing to accept, at least for those inclined to think of humans as the apotheosis of evolution.

Next stop: the concept of evil. Bartlett examines a wide range of perspectives, noting that most of them are religious in origin. In contrast, he prefers a more scientific view: “Human evil, in the restricted and specific sense in which I will use it, refers to apparently voluntary destructive behavior and attitudes that result in the general negation of health, happiness, and ultimately of life.” (p. 65) In referring to “general negation,” Bartlett is not thinking of a poor diet or personal nastiness but of bigger matters such as war, genocide and overpopulation.

Bartlett is especially interested in the psychology of evil, and canvasses the ideas of classic thinkers who have addressed this issue, including Sigmund Freud, Carl Jung, Karl Menninger, Erich Fromm and Scott Peck. This detailed survey has only a limited return: these leading thinkers have little to say about the origins of evil and what psychological needs it may serve.

So Bartlett turns to other angles, including Lewis Fry Richardson’s classic work quantifying evidence of human violence, and research on aggression by ethologists, notably Konrad Lorenz. Some insights come from this examination, including Richardson’s goal of examining human destructiveness without emotionality and Lorenz’s point that humans, unlike most other animals, have no inbuilt barriers to killing members of their own species.

Bartlett on the Psychology of Genocide

To stare the potential for human evil in the face, Bartlett undertakes a thorough assessment of evidence about genocide, seeking to find the psychological underpinning of systematic mass killings of other humans. He notes one important factor, a factor not widely discussed or even admitted: many humans gain pleasure from killing others. Two other relevant psychological processes are projection and splitting. Projection involves denying negative elements of oneself and attributing them to others, for example seeing others as dangerous, thereby providing a reason for attacking them: one’s own aggression is attributed to others.

Splitting involves dividing one’s own grandiose self-conception from the way others are thought of. “By belonging to the herd, the individual gains an inflated sense of power, emotional support, and connection. With the feeling of group-exaggerated power and puffed up personal importance comes a new awareness of one’s own identity, which is projected into the individual’s conception” of the individual’s favoured group (p. 157). As a member of a group, there are several factors that enable genocide: stereotyping, dehumanisation, euphemistic language and psychic numbing.

To provide a more vivid picture of the capacity for human evil, Bartlett examines the Holocaust, noting that while it was neither the only nor the most deadly genocide, it is one that, partly due to extensive documentation, provides abundant evidence of the psychology of mass killing.

Anti-Semitism was not the preserve of the Nazis, but existed for centuries in numerous parts of the world, and indeed continues today. The long history of persistent anti-Semitism is, according to Bartlett, evidence that humans need to feel prejudice and to persecute others. But at this point there is an uncomfortable finding: most people who are anti-Semitic are psychologically normal, suggesting the possibility that what is normal can be pathological. This key point recurs in Bartlett’s forensic examination.

Prejudice and persecution do not usually bring sadness and remorse to the victimizers, but rather a sense of strengthened identity, pleasure, self-satisfaction, superiority, and power. Prejudice and persecution are Siamese twins: Together they generate a heightened and invigorated belief in the victimizers’ supremacy. The fact that prejudice and persecution benefit bigots and persecutors is often overlooked or denied. (p. 167)

Bartlett examines evidence about the psychology of several groups involved in the Holocaust: Nazi leaders, Nazi doctors, bystanders, refusers and resisters. Nazi leaders and doctors were, for the most part, normal and well-adjusted men (nearly all were men). Most of the leaders were of above-average intelligence, some had very high IQs, and many were well educated and culturally sophisticated. Cognitively they were superior, but their moral intelligence was low.

Bystanders tend to do nothing due to conformity, lack of empathy and low moral sensibility. Most Germans were bystanders to Nazi atrocities, not participating but doing nothing to oppose them.

Next are refusers, those who declined to be involved in atrocities. Contrary to usual assumptions, in Nazi Germany there were few penalties for refusing to join killings; it was just a matter of asking for a different assignment. Despite this, of those men called up to join killing brigades, very few took advantage of this option. Refusers had to take some initiative, to think for themselves and resist the need to conform.

Finally, there were resisters, those who actively opposed the genocide, but even here Bartlett raises a concern, saying that in many cases resisters were driven more by anger at offenders than by empathy with victims. In any case, in terms of psychology, resisters were the odd ones out, being disengaged from the dominant ideas and values in their society and being able to be emotionally alone, without peer group support. Bartlett’s concern here meshes with research on why people join contemporary social movements: most first become involved via personal connections with current members, not because of moral outrage about the issue (Jasper, 1997).

The implication of Bartlett’s analysis of the Holocaust is that there is something wrong with humans who are psychologically normal (see also Bartlett, 2011, 2013). When those who actively resist genocide are unusual psychologically, this points to problems with the way most humans think and feel.

Another one of Bartlett’s conclusions is that most solutions that have been proposed to the problem of genocide — such as moral education, cultivating acceptance and respect, and reducing psychological projection — are vague, simplistic and impractical. They do not measure up to the challenge posed by the observed psychology of genocide.

Bartlett’s assessment of the Holocaust did not surprise me because, for one of my studies of tactics against injustice (Martin, 2007), I read a dozen books and many articles about the 1994 Rwandan genocide, in which between half a million and a million people were killed in the space of a few months. The physical differences between the Tutsi and Hutu are slight; the Hutu killers targeted both Tutsi and “moderate” Hutu. It is not widely known that Rwanda is the most Christian country in Africa, yet many of the killings occurred in churches where Tutsi had gone for protection. In many cases, people killed neighbours they had lived next to for years, or even family members. The Rwandan genocide had always sounded horrific; reading detailed accounts to obtain examples for my article, I discovered it was far worse than I had imagined (Martin, 2009).

After investigating evidence about genocide and its implications about human psychology, Bartlett turns to terrorism. Many of his assessments accord with critical terrorism studies, for example that there is no standard definition of terrorism, the fear of terrorism is disproportionate to the threat, and terrorism is “framework-relative” in the sense that calling someone a terrorist puts you in opposition to them.

Bartlett’s interest is in the psychology of terrorists. He is sceptical of the widespread assumption that there must be something wrong with them psychologically, and cites evidence that terrorists are psychologically normal. Interestingly, he notes that there are no studies comparing the psychologies of terrorists and soldiers, two groups that each use violence to serve a cause. He also notes a striking absence: in counterterrorism writing, no one has studied the sorts of people who refuse to be involved in cruelty and violence and who are resistant to appeals to in-group prejudice, which is usually called loyalty or patriotism. By assuming there is something wrong with terrorists, counterterrorism specialists are missing the possibility of learning how to deal with the problem.

Bartlett on War Psychology

Relatively few people are directly involved in genocide or terrorism; most encounter them only via media stories. It is another matter when it comes to war, because many people have lived through a time when their country was at war. In this century, just think of Afghanistan, Iraq and Syria, where numerous governments have sent troops or provided military assistance.

Bartlett says there is plenty of evidence that war evokes powerful emotions among both soldiers and civilians. For some, it is the time of life when they feel most alive, whereas peacetime can seem boring and meaningless. Although killing other humans is proscribed by most moral systems, war is treated as an exception. There are psychological preconditions for organised killing, including manufacturing differences, dehumanising the enemy, nationalism, group identity and various forms of projection. Bartlett says it is also important to look at psychological factors that prevent people from trying to end wars.

Even though relatively few people are involved in war as combat troops or even as part of the systems that support war-fighting, an even smaller number devote serious effort to trying to end wars. Governments collectively spend hundreds of billions of dollars on their militaries but only a minuscule amount on furthering the causes of peace. This applies as well to research: there is vastly more military-sponsored or military-inspired research than peace-related research. Bartlett concludes that “war is a pathology which the great majority of human beings do not want to cure” (p. 211).

Thinking back over the major wars in the past century, in most countries it has been far easier to support war than to oppose it. Enlisting in the military is seen as patriotic whereas refusing military service, or deserting the army, is seen as treasonous. For civilians, defeating the enemy is seen as a cause for rejoicing, whereas advocating an end to war — except via victory — is a minority position.

There have been thousands of war movies: people flock to see killing on the screen, and the bad guys nearly always lose, especially in Hollywood. In contrast, the number of major films about nonviolent struggles is tiny — what else besides the 1982 film Gandhi? — and seldom do they attract a wide audience. Bartlett sums up the implications of war for human psychology:

By legitimating the moral atrocity of mass murder, war, clothed as it is in the psychologically attractive trappings of patriotism, heroism, and the ultimately good cause, is one of the main components of human evil. War, because it causes incalculable harm, because it gives men and women justification to kill and injure one another without remorse, because it suspends conscience and neutralizes compassion, because it takes the form of psychological epidemics in which dehumanization, cruelty, and hatred are given unrestrained freedom, and because it is a source of profound human gratification and meaning—because of these things, war is not only a pathology, but is one of the most evident expressions of human evil. (p. 225)

The Obedient Parasite

Bartlett next turns to obedience studies, discussing the famous research by Stanley Milgram (1974). However, he notes that such studies shouldn’t even be needed: the evidence of human behaviour during war and genocide should be enough to show that most humans are obedient to authority, even when the authority is instructing them to harm others.

Another relevant emotion is hatred. Although hating is a widespread phenomenon — most recently evident in the phenomenon of online harassment (Citron, 2014) — Bartlett notes that psychologists and psychiatrists have given this emotion little attention. Hatred serves several functions, including providing a cause, overcoming the fear of death, and, in groups, helping build a sense of community.

Many people recognise that humans are destroying the ecological web that supports their own lives and those of numerous other species. Bartlett goes one step further, exploring the field of parasitology. Examining definitions and features of parasites, he concludes that, according to a broad definition, humans are parasites on the environment and other species, and are destroying the host at a record rate. He sees human parasitism as being reflected in social belief systems including the “cult of motherhood,” infatuation with children, and the belief that other species exist to serve humans, a longstanding attitude enshrined in some religions.

Reading The Pathology of Man, I was tempted to counter Bartlett’s arguments by pointing to the good things that so many humans have done and are doing, such as everyday politeness, altruism, caring for the disadvantaged, and the animal liberation movement. Bartlett could counter by noting it would be unwise to pay no attention to disease symptoms just because your body has many healthy parts. If there is a pathology inherent in the human species, it should not be ignored, but instead addressed face to face.

Remington 1858 Model Navy .36 Cap and Ball Revolver.
Image by Chuck Coker via Flickr / Creative Commons


Technologies of Political Control

Bartlett’s analysis of human evil, including that violence and cruelty are perpetrated mostly by people who are psychologically normal and that many humans obtain pleasure out of violence against other humans, can be applied to technology. The aim in doing this is not to demonise particular types or uses of technology but to explore technological systems from a different angle in the hope of providing insights that are less salient from other perspectives.

Consider “technologies of political control,” most commonly used by governments against their own people (Ackroyd et al., 1974; Wright, 1998). These technologies include tools of torture and execution including electroshock batons, thumb cuffs, restraint chairs, leg shackles, stun grenades and gallows. They include technologies used against crowds such as convulsants and infrasound weapons (Omega Foundation, 2000). They include specially designed surveillance equipment.

In this discussion, “technology” refers not just to artefacts but also to the social arrangements surrounding these artefacts, including design, manufacture, and contexts of use. To refer to “technologies of political control” is to invoke this wider context: an artefact on its own may seem innocuous but still be implicated in systems of repression. Repression here refers to force used against humans for the purposes of harm, punishment or social control.

Torture has a long history. It must be considered a prime example of human evil. Few species intentionally inflict pain and suffering on other members of their own species. Among humans, torture is now officially renounced by every government in the world, but it still takes place in many countries, for example in China, Egypt and Afghanistan, as documented by Amnesty International. Torture also takes place in many conventional prisons, for example via solitary confinement.

To support torture and repression, there is an associated industry. Scientists design new ways to inflict pain and suffering, using drugs, loud noises, disorienting lights, sensory deprivation and other means. The tools for delivering these methods are constructed in factories and the products marketed around the world, especially to buyers seeking means to control and harm others. Periodically, “security fairs” are held in which companies selling repression technologies tout their products to potential buyers.

The technology of repression does not have a high profile, but it is a significant industry, involving tens of billions of dollars in annual sales. It is a prime cause of human suffering. So what are people doing about it?

Those directly involved seem to have few moral objections. Scientists use their skills to design more sophisticated ways of interrogating, incarcerating and torturing people. Engineers design the manufacturing processes and numerous workers maintain production. Sales agents tout the technologies to purchasers. Governments facilitate this operation, making extraordinary efforts to get around attempts to control the repression trade. So here is an entire industry built around technologies that serve to control and harm defenceless humans, and it seems to be no problem to find people who are willing to participate and indeed to tenaciously defend the continuation of the industry.

In this, most of the world’s population are bystanders. Mass media pay little attention. Indeed, there are fictional dramas that legitimise torture and, more generally, the use of violence against the bad guys. Most people remain ignorant of the trade in repression technologies. For those who learn about it, few make any attempt to do something about it, for example by joining a campaign.

Finally there are a few resisters. There are groups like the Omega Research Foundation that collect information about the repression trade and organisations like Amnesty International and Campaign Against Arms Trade that campaign against it. Journalists have played an important role in exposing the trade (Gregory, 1995).

The production, trade and use of technologies of repression, especially torture technologies, provide a prime example of how technologies can be implicated in human evil. They illustrate quite a few of the features noted by Bartlett. There is no evidence that the scientists, engineers, production workers, sales agents and politician allies of the industry are anything other than psychologically normal. Indeed, it is an industry organised much like any other, except devoted to producing objects used to harm humans.

Nearly all of those involved in the industry are simply operating as cogs in a large enterprise. They have abdicated responsibility for causing harm, a reflection of humans’ tendency to obey authorities. As for members of the public, the psychological process of projection provides a reassuring message: torture is only used as a last resort against enemies such as terrorists. “We” are good and “they” are bad, so what is done to them is justified.

Weapons and Tobacco

Along with the technology of repression, weapons of war are prime candidates for being understood as implicated in evil. If war is an expression of the human potential for violence, then weapons are a part of that expression. Indeed, increasing the capacity of weapons to maim, kill and destroy has long been a prime aim of militaries. So-called conventional weapons include everything from bullets and bayonets to bombs and ballistic missiles, and then there are biological, chemical and nuclear weapons.

Studying weaponry is a way of learning about the willingness of humans to use their ingenuity to harm other humans. Dum-dum bullets were designed to expand on impact so as to cause more horrendous injuries inside a body. Brightly coloured land mines can be attractive to young children. Some of these weapons have been banned, while others take their place. In any case, it is reasonable to ask, what was going through the minds of those who conceived, designed, manufactured, sold and deployed such weapons?

The answer is straightforward, yet disturbing. Along the chain, individuals may have thought they were serving their country’s cause, helping defeat an enemy, or just doing their job and following orders. Indeed, it can be argued that scientific training and enculturation serve to develop scientists willing to work on assigned tasks without questioning their rationale (Schmidt, 2000).

Nuclear weapons, due to their capacity for mass destruction, have long been seen as especially bad, and there have been significant mass movements against these weapons (Wittner, 1993–2003). However, the opposition has not been all that successful: thousands of nuclear weapons remain in the arsenals of eight or so militaries, and most people seldom think about them. Nuclear weapons exemplify Bartlett’s contention that most people do not do much to oppose war — even a war that would devastate the earth.

Consider something a bit different: cigarettes. Smoking brings pleasure, or at least relief from craving, to hundreds of millions of people daily, at the expense of a massive death toll (Proctor, 2011). By current projections, hundreds of millions of people will die this century from smoking-related diseases.

Today, tobacco companies are stigmatised and smoking is becoming unfashionable — but only in some countries. Globally, there are ever more smokers and ever more victims of smoking-related illnesses. Cigarettes are part of a technological system of design, production, distribution, sales and use. Though the cigarette itself is less complex than many military weapons, the same questions can be asked of everyone involved in the tobacco industry: how can they continue when the evidence of harm is so overwhelming? How could industry leaders spend decades covering up their own evidence of harm while seeking to discredit scientists and public health officials whose efforts threatened their profits?

The answers draw on the same psychological processes involved in the perpetuation of violence and cruelty in more obvious cases such as genocide, including projection and obedience. The ideology of the capitalist system plays a role too, with the legitimating myths of the beneficial effects of markets and the virtue of satisfying consumer demand.

For examining the role of technology in evil, weapons and cigarettes are easy targets for condemnation. A more challenging case is the wide variety of technologies that contribute to greenhouse gas emissions and hence to climate change, with potentially catastrophic effects for future generations and for the biosphere. The technologies involved include motor vehicles (at least those with internal combustion engines), steel and aluminium production, home heating and cooling, and the consumption of consumer goods. The energy system is implicated, at least the part of it predicated on carbon-based fuels, and there are other contributors as well, such as fertilisers and the clearing of forests.

Most of these technologies were not designed to cause harm, and those involved as producers and consumers may not have thought of their culpability for contributing to future damage to the environment and human life. Nevertheless, some individuals have greater roles and responsibilities. For example, many executives in fossil fuel companies and politicians with the power to reset energy priorities have done everything possible to slow the shift to a sustainable energy economy.

Conceptualising the Technology of Evil

If technologies are implicated in evil, what is the best way to understand the connection? It could be said that an object designed and used for torture embodies evil. Embodiment seems appropriate if the primary purpose is for harm and the main use is for harm, but seldom is this sort of connection exclusive of other uses. A nuclear weapon, for example, might be used as an artwork, a museum exhibit, or a tool to thwart a giant asteroid hurtling towards earth.

Another option is to say that some technologies are “selectively useful” for harming others: they can potentially be useful for a variety of purposes but, for example, easier to use for torture than for brain surgery or keeping babies warm. To talk of selective usefulness instead of embodiment seems less essentialist, more open to multiple interpretations and uses.

Other terms are “abuse” and “misuse.” Think of a cloth covering a person’s face over which water is poured to simulate drowning, a method of torture called waterboarding. It seems peculiar to say that the wet cloth embodies evil, given that it is only the particular use that makes it a tool to cause harm to humans. “Abuse” and “misuse” have an ignominious history in the study of technology because they are often based on the assumption that technologies are inherently neutral. Nevertheless, these terms might be resurrected in speaking of the connection between technology and evil when referring to technologies that were not designed to cause harm and are seldom used for that purpose.

Consider next the role of technologies in contributing to climate change. For this, it is useful to note that most technologies have multiple uses and consequences. Oil production, for example, has various immediate environmental and health impacts. Oil, as a product, has multitudinous uses, such as heating houses, manufacturing plastics and fuelling military aircraft. The focus here is on a more general impact via the waste product carbon dioxide that contributes to global warming. In this role, it makes little sense to call oil evil in itself.

Instead, it is simply one player in a vast network of human activities that collectively are spoiling the environment and endangering future life on earth. The facilitators of evil in this case are the social and economic systems that maintain dependence on greenhouse gas sources and the psychological processes that enable groups and individuals to resist a shift to sustainable energy systems or to remain indifferent to the issue.

For climate change, and sustainability issues more generally, technologies are implicated as part of entrenched social institutions, practices and beliefs that have the potential to radically alter or destroy the conditions for human and non-human life. One way to speak of technologies in this circumstance is as partners. Another is to refer to them as actors or actants, along the lines of actor-network theory (Latour, 1987), though this gives insufficient salience to the psychological dimensions involved.

Another approach is to refer to technologies as extensions of humans. Marshall McLuhan (1964) famously described media as “extensions of man.” This description points to the way technologies expand human capabilities. Vehicles expand human capacities for movement, otherwise limited to walking and running. Information and communication technologies expand human senses of sight, hearing and speaking. Most relevantly here, weapons expand human capacities for violence, in particular killing and destruction. From this perspective, humans have developed technologies to extend a whole range of capacities, some of them immediately or indirectly harmful.

In social studies of technology, various frameworks have been used, including political economy, innovation, social shaping, cost-benefit analysis and actor-network theory. Each has advantages and disadvantages, but none of the commonly used frameworks emphasises moral evaluation or focuses on the way some technologies are designed or used for the purpose of harming humans and the environment.

Implications

The Pathology of Man is a deeply pessimistic and potentially disturbing book. Probing into the psychological foundations of violence and cruelty shows a side of human behaviour and thinking that is normally avoided. Most commentators prefer to look for signs of hope, and would finish a book such as this with suggestions for creating a better world. Bartlett, though, does not want to offer facile solutions.

Throughout the book, he notes that most people prefer not to examine the sources of human evil, and so he says that hope is actually part of the problem. By continually being hopeful and looking for happy endings, it becomes too easy to avoid looking at the diseased state of the human mind and the systems it has created.

Setting aside hope, nevertheless there are implications that can be derived from Bartlett’s analysis. Here I offer three possible messages regarding technology.

Firstly, if it makes sense to talk about human evil in a non-metaphorical sense, and to trace the origins of evil to features of human psychology, then technologies, as human creations, are necessarily implicated in evil. The implication is that a normative analysis is imperative. If evil is seen as something to be avoided or opposed, then those technologies most closely embodying evil are likewise to be avoided or opposed. This implies making judgements about technologies. In technology studies, this already occurs to some extent. However, common frameworks, such as political economy, innovation and actor-network theory, do not highlight moral evaluation.

Medical researchers do not hesitate to openly oppose disease, and in fact the overcoming of disease is an implicit foundation of research. Technology studies could more openly condemn certain technologies.

Secondly, if technology is implicated in evil, and if one of the psychological processes perpetuating evil is a lack of recognition of it and concern about it, there is a case for undertaking research that provides insights and tools for challenging the technology of evil. This has not been a theme in technology studies. Activists against torture technologies and military weaponry would be hard pressed to find useful studies or frameworks in the scholarship about technology.

One approach to the technology of evil is action research (McIntyre 2008; Touraine 1981), which involves combining learning with efforts towards social change. For example, research on the torture technology trade could involve trying various techniques to expose the trade, seeing which ones are most fruitful. This would provide insights about torture technologies not available via conventional research techniques.

Thirdly, education could usefully incorporate learning about the moral evaluation of technologies. Bartlett argues that one of the factors facilitating evil is the low moral development of most people, as revealed in the widespread complicity in or complacency about war preparation and wars, and about numerous other damaging activities.

One approach to challenging evil is to increase people’s moral capacities to recognise and act against evil. Technologies provide a convenient means to do this, because human-created objects abound in everyday life. It can be an intriguing and informative exercise to figure out how a given object relates to killing, hatred, psychological projection and the various other actions and ways of thinking involved in violence, cruelty and the destruction of the foundations of life.

No doubt there are many other ways to learn from the analysis of human evil. The most fundamental step is not to turn away but to face the possibility that there may be something deeply wrong with humans as a species, something that has made the species toxic to itself and other life forms. While it is valuable to focus on what is good about humans, to promote good it is also vital to fully grasp the size and depth of the dark side.

Acknowledgements

Thanks to Steven Bartlett, Lyn Carson, Kurtis Hagen, Kelly Moore and Steve Wright for valuable comments on drafts.

Contact details: bmartin@uow.edu.au

References

Ackroyd, Carol, Margolis, Karen, Rosenhead, Jonathan, & Shallice, Tim (1977). The technology of political control. London: Penguin.

Baron-Cohen, Simon (2011). The science of evil: On empathy and the origins of cruelty. New York: Basic Books.

Bartlett, Steven James (2005). The pathology of man: A study of human evil. Springfield, IL: Charles C. Thomas.

Bartlett, Steven James (2011). Normality does not equal mental health: the need to look elsewhere for standards of good psychological health. Santa Barbara, CA: Praeger.

Bartlett, Steven James (2013). The dilemma of abnormality. In Thomas G. Plante (Ed.), Abnormal psychology across the ages, volume 3 (pp. 1–20). Santa Barbara, CA: Praeger.

Baumeister, Roy F. (1997). Evil: Inside human violence and cruelty. New York: Freeman.

Citron, Danielle Keats (2014). Hate crimes in cyberspace. Cambridge, MA: Harvard University Press.

Gregory, Martyn (director and producer). (1995). The torture trail [television]. UK: TVF.

Jasper, James M. (1997). The art of moral protest: Culture, biography, and creativity in social movements. Chicago: University of Chicago Press.

Latour, Bruno (1987). Science in action: How to follow scientists and engineers through society. Milton Keynes: Open University Press.

Martin, Brian (1984). Uprooting war. London: Freedom Press.

Martin, Brian (2001). Technology for nonviolent struggle. London: War Resisters’ International.

Martin, Brian (2007). Justice ignited: The dynamics of backfire. Lanham, MD: Rowman & Littlefield.

Martin, Brian (2009). Managing outrage over genocide: case study Rwanda. Global Change, Peace & Security, 21(3), 275–290.

McIntyre, Alice (2008). Participatory action research. Thousand Oaks, CA: Sage.

McLuhan, Marshall (1964). Understanding media: The extensions of man. New York: New American Library.

Milgram, Stanley (1974). Obedience to authority. New York: Harper & Row.

Omega Foundation (2000). Crowd control technologies. Luxembourg: European Parliament.

Peck, M. Scott (1988). People of the lie: The hope for healing human evil. London: Rider.

Proctor, Robert N. (2011). Golden holocaust: Origins of the cigarette catastrophe and the case for abolition. Berkeley, CA: University of California Press.

Schmidt, Jeff (2000). Disciplined minds: A critical look at salaried professionals and the soul-battering system that shapes their lives. Lanham, MD: Rowman & Littlefield.

Touraine, Alain (1981). The voice and the eye: An analysis of social movements. Cambridge: Cambridge University Press.

Wittner, Lawrence S. (1993–2003). The struggle against the bomb, 3 volumes. Stanford, CA: Stanford University Press.

Wright, Steve (1998). An appraisal of technologies of political control. Luxembourg: European Parliament.

Author Information: Eric Kerr, National University of Singapore, eric.kerr@nus.edu.sg.

Kerr, Eric. “On Thinking With Catastrophic Times.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 46-49.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-45Q

Image by Jeff Krause via Flickr / Creative Commons


Reprinted with permission from the Singapore Review of Books. The original review can be found here.

• • • •

On Thinking With – Scientists, Sciences, and Isabelle Stengers is the transcription of a talk read by Jeremy Fernando at the Centre for Science & Innovation Studies at UC Davis in 2015. The text certainly has the character of a reading: through closely attending to Stengers’ similarly transcribed talk (2012), Fernando traverses far-reaching themes – testimony, the gift, naming, listening – drawing them into a world made strange again through Stengers’ idea of “thinking with” – as opposed to analyzing or evaluating – notions of scientific progress, justice, and responsibility.

All this will make this review rather different from convention. I’ll attempt a response, using the text as an opportunity to pause, regroup, and divert, which, I hope, will allow us to see some of the connections between the two scholars and the value of this book. I read this text as a philosopher within Science and Technology Studies (STS) and through these lenses I’ll aim to draw out some of the ideas elaborated in Fernando’s essay and in Stengers’ In Catastrophic Times.

Elusive Knowledge

Towards the end of the essay, Fernando muses on the elusive nature of knowledge: “[T]he moment the community of scientists knows – or thinks it knows – what Science is, the community itself dissolves” (p. 35). He consequently ties epistemological certainty to the stagnation, or even the collapse, of a scientific community.

In this sense, Fernando suggests that the scientific community should be thought of as a myth, but a necessary one. He implies that any scientific community is a “dream community… a dream in the sense of something unknown, something slightly beyond the boundaries, binds, of what is known.” (pp. 35-36) Further, he agrees with Stengers: “I vitally need such a dream, such a story which never happened.” So why? What is this dream that is needed?

Stengers suggests that we are now in a situation where there are “many manners of dying” (2015, p. 9). Any attempt on “our” part to resolve the growing crisis seems merely to entrench and legislate the same processes that produced the very problems we were trying to overcome. International agreements are framed within the problematic Capitalocene rather than challenging it. Problems arrive with the overwhelming sense that our current situation is permanent, that political change is inertial or even immovable, and that the only available remedy is more of the poison. Crucially, for Stengers, this sense is deliberately manufactured – an induced ignorance (ibid., p. 118).

Stengers’ concern, which Fernando endorses, is to reframe the manner in which problems are presented. To remove us from the false binary choice presented to us: as precaution or pro-action, as self-denial of consumer products or geoengineering, as deforestation for profit or financialization of forests. For his part, Fernando does not offer more solutions. Instead, he encourages us to sit in the mire of the problem, to revisit it, to rethink it, to re-view it. Not as an act of idle pontification but for what Stengers calls “paying attention” (ibid. p. 100).

Paying Attention to Catastrophic Times

In order to pay attention, Fernando begins with a parental metaphor: Gaia as mother, scientific authority as father. For him, there is an important distinction between power and authority. Whereas power can be found in all relations, authority “is mystical, divine, outside the realm of human consciousness – it is the order of the sovereign. One either has authority or one doesn’t” (p. 21).

Consequently, there is something unattainable about any claim to scientific expertise. The idea that authority depends on a mystical or theological grounding chimes with core epistemological commitments in STS, most forcefully advocated by David Bloor who argued that the absolutist about knowledge would require “epistemic grace”.

Alongside Fernando’s words, Burdock details gooey, veiny appendages emerging from pipes and valves, tumours and pustules evoking the diseased body. Science and engineering are productive of vulnerable bodies. Here we might want to return to Stengers’ treatment of the pharmakon, the remedy/poison duality.

For Stengers, following Nietzsche’s gay scientist (whom Fernando also evokes), skepticism and doubt are pharmakon (Nietzsche 1924, p. 159). She details how warnings as to the dangers of potential responses are presented as objections. STS scholars will note that this uncertainty can be activated by both your enemies and your friends, not least when it comes to the challenges of climate change. This is the realization that prompted Bruno Latour to issue what Steve Fuller has called a “mea culpa on behalf of STS” for embracing too much uncertainty (Latour 2004; Fuller 2018, p. 44).

Data and Gaia

Although there is little mention of any specific sciences, scientific instruments, theories or texts, Fernando instead focuses on what is perhaps the primary object of contemporary science – data – especially its relation to memory. It is perhaps not a coincidence that he repeatedly asks us to remember not to forget: e.g. “we should try not to forget that…” (p. 11, and similarly on pp. 17, 21, 22, and 37). He notes that testimony occurs through memory but that this is, generally speaking, unreliable and incomplete. His conclusion is Cartesian: perhaps the only thing we can know for sure is that we are testifying (p. 16).

Stengers picks up the question of memory in her dismissal of an interventionist Gaia (to paraphrase Nick Cave), denying that Gaia could remember, could be offended or could care who is responsible (2015, p. 46 and fn. 2). She criticizes James Lovelock, the author of the Gaia hypothesis, for speaking of Gaia’s “revenge”. While he begins his text with Stengers’ controversial allusion to Gaia, Fernando’s discussion of data also has a curious connection to a living, self-regulating (and consequently also possibly a vulnerable) globe.

Riffing on Stewart Brand’s infamous phrase, “information wants to be free,” Fernando writes, “[D]ata and sharing have always been in relation with each other, data has always already been open source. Which also means that data – sharing, transference – always entail an openness to the possibility of another; along with the potentiality for disruption, infection, viruses, distortion” (p.22). Coincidentally, along with being an internet pioneer, founding one of the oldest virtual (and certainly mythological) communities, Brand is an old friend of Lovelock.

Considering these words in relation to impending ecological disaster, I’m inclined to think that perhaps the central myth that we should try to escape is that we don’t easily forget. Bernard Stiegler has suggested that we are in a period of realignment in our relationship to memory in which external memory supports are the primary means by which we understand our temporality (2011, 2013).

Similarly, we might think that it is no coincidence that when Andy Clark and David Chalmers proposed their hypothesis of extended cognition, the idea that our cognitive and memorial processes extend into artefacts, they reached for the Alzheimer’s sufferer as “Patient Zero” (1998). In truth, we do forget, often. And this is despite, and sometimes even because of, our best efforts to record and archive and remember.

Fernando’s writing is, at root, a call to re-call. It regenerates other texts and seems to live with them such that they both thrive. The “tales” he calls for spiral out into new mutations like Burdock’s tentacular images. But to reduce Fernando’s scope to simply a call for other perspectives would be to sell it short. Read alongside In Catastrophic Times, the call to embrace uncertainty and to reckon with it becomes more urgent.

Fernando reminds us of our own forgetfulness and the unreliability of our testimony about ourselves and our communities. For those of us wrestling with the post-truth world, Fernando’s essay is both a palliative and, potentially, a route out of no-alternative thinking.

Contact details: eric.kerr@nus.edu.sg

References

Bloor, D. 2007. Epistemic grace: Antirelativism as theology in disguise. Common Knowledge 13: 250-280.

Clark, A. and D. Chalmers. 1998. The extended mind. Analysis 58: 7–19.

Fuller, S. 2018. Post-Truth: Knowledge as a Power Game. Anthem Press.

Latour, B. 2004. Why has critique run out of steam? From matters of fact to matters of concern. Critical Inquiry 30(2): 225-248.

Nietzsche, F. 1924. The Joyful Wisdom (trans. T. Common) New York: The MacMillan Company. Accessed 10 June 2018. https://ia600300.us.archive.org/9/items/completenietasch10nietuoft/completenietasch10nietuoft.pdf.

Stengers, I. 2012. “Cosmopolitics: Learning to Think with Sciences, Peoples and Natures.” Public lecture. Situating Science Knowledge Cluster. Saint Mary’s University, Halifax, Canada, 5 March 2012. Accessed 10 June 2018. http://www.youtube.com/watch?v=-ASGwo02rh8.

Stengers, I. 2015. In Catastrophic Times: Resisting the coming Barbarism. Open Humanities Press/Meson Press.

Stiegler, B. 2011. Technics and Time, 3: Cinematic Time and the Question of Malaise (trans. R. Beardsworth and G. Collins). Stanford: Stanford University Press.

Stiegler, B. 2013. For a New Critique of Political Economy (trans. D. Ross). Cambridge: Polity.

Author Information: Steve Fuller, University of Warwick, s.w.fuller@warwick.ac.uk.

Fuller, Steve. “Staying Human in the 21st Century Is Harder Than You Might Think.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 39-42.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44W

The Main Street Bridge in Columbus, Ohio, the largest major city near Ashland University.
Image by Bill Koontz via Flickr / Creative Commons


Let me start by saying that it’s a great honour to address you today.[1] It turns out that nearly forty years ago, I was the ‘salutatorian’ of the Class of 1979 at Columbia College in New York. That means I was the number two guy in terms of overall grade point average across all subjects. And that guy gives the introductory speech, whereas the number one guy gives the closing speech, the so-called ‘valedictory’ address, which literally means saying goodbye.

It seems that once again I am the ‘warm up act’ for a graduation ceremony in that once I finish speaking, you’ll actually get your degrees! And that’s exactly how it worked in the old days.

I am someone who thinks that if I have anything interesting to say, it will be to those who are more oriented to the future than to the past – or even the present. In any case, this is how I would wish you to interpret me.

There are many challenges to our sense of humanity today. I want to start with a long term challenge that you will increasingly face in the coming years. That has to do with privileging ‘humanity’ understood as a kind of upright ape who has consolidated its place on Earth by monopolizing control over the planet’s resources. This is what geologists are beginning to call the ‘Anthropocene’, and it probably began with the Industrial Revolution in the late eighteenth century – and it marks the first time a single species has dictated the terms of engagement on Earth.

This has led to considerable metaphysical soul-searching about the human condition. And put bluntly, much of this soul-searching has resulted in self-loathing for humanity as a species. We are to blame for the unprecedented levels of mass extinctions and climate change over the past two or more centuries.

All the while, our species has come to range over the entire planet in a manner that reminds some of these scientifically informed misanthropes of cockroaches. However, the difference between us and cockroaches is that cockroaches don’t seem to exhibit the strong sense of inequality among their members that we have historically insisted on among ourselves.

So given our evolutionary track record, in what sense are humans worth promoting, let alone all of us – as Thomas Jefferson said – ‘created equal’? Of course, it has long been recognized that there is an enormous spread in the capacity of human beings. Modern biological science has given this informal observation an empirical basis.

Originally it was presented as a demonstration of natural inequality, and the phrase ‘scientific racism’ remains a legacy of that line of thought. However, nowadays biologists prefer to speak of the diversity of life-forms, which together constitute ecologies, the Earth itself being the ultimate ecosystem.

But against this general current of thought, egalitarianism has been advanced by the Abrahamic religions – Judaism, Christianity and Islam – simply on the grounds that we are all children of the same God in some broadly ‘privileged’ sense.

In the modern era, this fundamental intuition was given focus by the classical idea of republicanism, namely, that a society should be constituted only by those who regard each other as equals. And what makes and keeps people equal are the standards by which they are judged – and this is determined by the people themselves. And that was what was meant by the res publica – the ‘public thing’, in Latin.

It’s worth recalling that prior to the US Constitution, republics had been small enclaves of the few who regarded each other as equals. Think about Athens, Rome, Venice and the Netherlands in their republican phases. Basically, they were places for rich migrants.

So What Happens If The Migrants Aren’t Rich?

In law, there are two general ways in which people can become citizens. One is called jus soli, and refers to the land in which you yourself were born, and the other is called jus sanguinis, and refers to the citizenship of your parents. If you look at a map of the world today, you’ll see that jus soli dominates the Western hemisphere and jus sanguinis dominates the Eastern hemisphere. And that’s because the Western hemisphere – this hemisphere of North and South America – has been seen as a natural place for migrants.

However, candidates for citizenship in a republic typically have to demonstrate their fitness to be treated as equal with regard to the res publica. And then once accepted, they would be obliged to participate in public life. Providing evidence of wealth was historically crucial because it showed both management skills and a desire to pool one’s resources with an alien society. The duty to vote in elections – in which each vote counts as one — is simply a remnant of what had been a much stronger civic expectation to engage in society.

Many philosophers have believed that republicanism cannot be scaled up because they thought it was unreasonable to expect that people with quite diverse backgrounds and interests could treat each other as ‘equals’ in some politically sustainable sense. It’s quite clear that even the American Founding Fathers had their doubts, since they counted slaves as only 3/5 of a person for purposes of Congressional representation.

Notice that I haven’t yet mentioned ‘democracy’. That’s because democracy has historically meant ‘majority rule’, on whatever terms it’s established. For example, 51% could license the execution of the remaining 49% in a democracy. Indeed, people may start equal in a democracy but that equality could soon evaporate after the first collective decision is taken. Think Animal Farm and Lord of the Flies, two classic mid-20th century English novels.

Here one can begin to appreciate the abiding importance of the Abrahamic religions in upholding a metaphysical conception of human equality that cuts against what had been traditionally seen as the eventual descent of democracy into ‘mob rule’.

That metaphysical idea – the fundamental equality of all humans — was first made incarnate in the practice of debt forgiveness among the Jews on each sabbatical year. To cut a long story short, since I don’t want to bore you with religious history, the fundamental equality of people was ritualistically demonstrated by the redistribution of wealth from the ‘winners’ to the ‘losers’ in society, which in turn provided an opportunity for everyone to be ‘born again’: The rich as somewhat poorer and the poor as somewhat richer. Thus, society is periodically remade as a level playing field.

Until the losers are regarded as always the equals of the winners of society, democracy is not an especially egalitarian political movement. This helps to explain why such great defenders of liberalism as John Stuart Mill regarded democracy with considerable suspicion. He believed that given the chance, the great unwashed might permanently silence the enlightened few, who throughout history have often been on the losing side of many of society’s great arguments – especially on matters concerning the future.

In What Sense, Then, Are All People ‘Created Equal’?

I would like to propose that our equality is ultimately about possessing a wide degree of freedom. And I mean a freedom that gives you the right to be wrong and the right to fail. This is only possible if you’re allowed to express yourself in the first place — and be allowed a second chance. This is to do with the range of opportunities available to you.

It’s easy to see that someone with a track record of managing their own wealth successfully would already be in the business of allowing themselves second chances – say, when an investment sours – and so would be fit for republican citizenship. However, the ancient Jewish practice of debt atonement was the original policy to allow everyone to acquire that enviable status. It was always in the back of the minds of those who designed the welfare state.

Every human is entitled to be free in how they dispose of their lives, regardless of their likelihood of success. Freedom is the capacity to take risks, and universities are for the development of that capacity. There is nothing natural about how people come to want what they want. It is all a matter of training, and the only question is where and how it happens. And you have come to Ashland for that.

If you graduate from Ashland with a clearer sense of purpose than when you entered, then this university will have done its job and you will be able to go forward as an exemplary human being. I say this as a matter of principle – regardless of what you take your purpose to be, and even if your sense of clarity arises from revolting against what you have encountered here.

The bottom line is that you can’t have a sense of purpose unless you have faced serious alternatives – that is to say, ‘opportunity costs’, as the economists like to put it.

You’re not free unless you have had the opportunity to reject alternatives presented to you. And in that respect, the value of your education amounts to increasing your capacity for rejection – you can afford to let go. And that means more than simply saying no because of what you have been taught, but rather saying yes because you can identify with a certain way of being in the world.

We live in a time when those of you before me can self-identify in a wider range of ways than ever before. When I was your age, all we had was class and national mobility at our disposal, but now you also have gender and even race mobility added on to it – at both a social and a biological level, in case one is worried by pedigree.

I am by no means suggesting that you need to think about any of these sorts of migrations, but they are there for the asking, and if you have been trained properly here, you will at least have heard of them and have adopted a reasoned response to them.

Whatever else one can say about humanity in the future, it is bound to be a moveable feast. And you will be among the movers and shakers!

Contact details: S.W.Fuller@warwick.ac.uk

[1] What follows is the commencement address to the Winter 2018 graduating class of Ashland University, Ohio. It was delivered on 15 December.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsasswe@uccs.edu.

Sassower, Raphael. “Post-Truths and Inconvenient Facts.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 47-60.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40g

Can one truly refuse to believe facts?
Image by Oxfam International via Flickr / Creative Commons


If nothing else, Steve Fuller has his finger on the pulse of popular culture and the academics who engage in its twists and turns. Starting with Brexit and continuing into the Trump-era abyss, “post-truth” was dubbed by the OED as its word of the year in 2016. Fuller has mustered his collected publications to recast the debate over post-truth and frame it within STS in general and his own contributions to social epistemology in particular.

This could have been a public mea culpa of sorts: we, the community of sociologists (and some straggling philosophers and anthropologists and perhaps some poststructuralists) may seem to someone who isn’t reading our critiques carefully to be partially responsible for legitimating the dismissal of empirical data, evidence-based statements, and the means by which scientific claims can be deemed not only credible but true. Instead, we are dazzled by a range of topics (historically anchored) that explain how we got to Brexit and Trump—yet Fuller’s analyses of them don’t ring alarm bells. There is almost a hidden glee that indeed the privileged scientific establishment, insular scientific discourse, and some of its experts who pontificate authoritative consensus claims are all bound to be undone by the rebellion of mavericks and iconoclasts that include intelligent design promoters and neoliberal freedom fighters.

In what follows, I do not intend to summarize the book, as it is short and entertaining enough for anyone to read on their own. Instead, I wish to outline three interrelated points that one might argue need not be argued but, apparently, do: 1) certain critiques of science have contributed to the Trumpist mindset; 2) the politics of Trumpism is too dangerous to be sanguine about; 3) the post-truth condition is troublesome and insidious. Though Fuller deals with some of these issues, I hope to add some constructive clarification to them.

Part One: Critiques of Science

As Theodor Adorno reminds us, critique is essential not only for philosophy, but also for democracy. He is aware that the “critic becomes a divisive influence, with a totalitarian phrase, a subversive” (1998/1963, 283) insofar as the status quo is being challenged and sacred political institutions might have to change. The price of critique, then, can be high, and therefore critique should be managed carefully and only cautiously deployed. Should we refrain from critique, then? Not at all, continues Adorno.

But if you think that a broad, useful distinction can be offered among different critiques, think again: “[In] the division between responsible critique, namely, that practiced by those who bear public responsibility, and irresponsible critique, namely, that practiced by those who cannot be held accountable for the consequences, critique is already neutralized.” (Ibid. 285) Adorno’s worry is not only that one forgets that “the truth content of critique alone should be that authority [that decides if it’s responsible],” but that when such a criterion is “unilaterally invoked,” critique itself can lose its power and be at the service “of those who oppose the critical spirit of a democratic society.” (Ibid)

In a political setting, the charge of irresponsible critique shuts the conversation down and ensures political hegemony without disruptions. Modifying Adorno’s distinction between (politically) responsible and irresponsible critiques, responsible scientific critiques are constructive insofar as they attempt to improve methods of inquiry, data collection and analysis, and contribute to the accumulated knowledge of a community; irresponsible scientific critiques are those whose goal is to undermine the very quest for objective knowledge and the means by which such knowledge can be ascertained. Questions about the legitimacy of scientific authority are related to but not of exclusive importance for these critiques.

Have those of us committed to the critique of science missed the mark of the distinction between responsible and irresponsible critiques? Have we become so subversive and perhaps self-righteous that science itself has been threatened? Though Fuller is primarily concerned with the hegemony of the sociology of science studies and the movement he has championed under the banner of “social epistemology” since the 1980s, he does acknowledge the Popperians and their critique of scientific progress and even admires the Popperian contribution to the scientific enterprise.

But he is reluctant to recognize the contributions of Marxists, poststructuralists, and postmodernists who have been critically engaging the power of science since the 19th century. Among them, we find Jean-François Lyotard who, in The Postmodern Condition (1984/1979), follows Marxists and neo-Marxists who have regularly lumped science and scientific discourse together with capitalism and power. This critical trajectory has been well rehearsed, so suffice it here to say that the sociology of scientific knowledge (SSK), social epistemology (SE), and the Edinburgh “Strong Programme” are part of a long and rich critical tradition (whose origins are Marxist). Adorno’s Frankfurt School is part of this tradition as well; and since science had come to dominate Western culture by the 20th century (in the place of religion, whose power as the arbiter of truth had by then waned), it was science’s privileged power and interlocking financial benefits that drew the ire of critics.

Were these critics “responsible” in Adorno’s political sense? Can they be held accountable for offering (scientific and not political) critiques that improve the scientific process of adjudication between criteria of empirical validity and logical consistency? Not always. Did they realize that their success could throw the baby out with the bathwater? Not always. While Fuller grants Karl Popper the upper hand (as compared to Thomas Kuhn) when indirectly addressing such questions, we must keep an eye on Fuller’s “baby.” It’s easy to overlook the slippage from the political to the scientific and vice versa: Popper’s claim that we never know the Truth doesn’t mean that his (and our) quest for discovering the Truth as such is given up; it is only made more difficult, since whatever is scientifically apprehended as truth remains putative.

Limits to Skepticism

What is precious about the baby—science in general, and scientific discourse and its community in more particular ways—is that it offered safeguards against frivolous skepticism. Robert Merton (1973/1942) famously outlined the four features of the scientific ethos, principles that characterized the ideal workings of the scientific community: universalism, communism (communalism, as per the Cold War terror), disinterestedness, and organized skepticism. It is the last principle that is relevant here, since it unequivocally demands an institutionalized mindset of provisional acceptance: any hypothesis or theory articulated by any community member is to be scrutinized before it is accepted.

One detects the slippery slope that would move one from being on guard when engaging with any proposal to being so skeptical as never to accept any proposal, no matter how well documented or empirically supported. Al Gore, in his An Inconvenient Truth (2006), sounded the alarm about climate change. A dozen years later we are still plagued by climate-change deniers who refuse to look at the evidence, suggesting instead that the standards of science themselves—from the collection of data at the North Pole to computer simulations—have not been sufficiently fulfilled (“questions remain”) to accept human responsibility for the increase of the earth’s temperature. Incidentally, here is Fuller’s explanation of his own apparent doubt about climate change:

Consider someone like myself who was born in the midst of the Cold War. In my lifetime, scientific predictions surrounding global climate change has [sic.] veered from a deep frozen to an overheated version of the apocalypse, based on a combination of improved data, models and, not least, a geopolitical paradigm shift that has come to downplay the likelihood of a total nuclear war. Why, then, should I not expect a significant, if not comparable, alteration of collective scientific judgement in the rest of my lifetime? (86)

Expecting changes in the model does not entail a) that no improved model can be offered; b) that methodological changes in themselves are a bad thing (they might be, rather, improvements); or c) that one should not take action at all based on the current model because in the future the model might change.

The Royal Society of London (1660) set the benchmark of scientific credibility low when it accepted as scientific evidence any report by two independent witnesses. As the years went by, testability (“confirmation,” for the Vienna Circle, “falsification,” for Popper) and repeatability were added as requirements for a report to be considered scientific, and by now, various other conditions have been proposed. Skepticism, organized or personal, remains at the very heart of the scientific march towards certainty (or at least high probability), but when used perniciously, it has derailed reasonable attempts to use science as a means by which to protect, for example, public health.

Both Michael Bowker (2003) and Robert Proctor (1995) chronicle cases where asbestos and cigarette lobbyists and lawyers alike were able to sow enough doubt, in the name of attenuated scientific data collection, to ward off regulators, legislators, and the courts for decades. Instead of finding sufficient empirical evidence to attribute the failing health (and death) of workers and consumers to asbestos and nicotine, “organized skepticism” was weaponized to fight the sick and protect the interests of large corporations and their insurers.

Instead of buttressing scientific claims (that have passed the tests—in refereed professional conferences and publications, for example—of most institutional scientific skeptics), organized skepticism has been manipulated to ensure that no claim is ever scientific enough or has the legitimacy of the scientific community. In other words, what should have remained the reasonable cautionary tale of a disinterested and communal activity (that could then be deemed universally credible) has turned into a circus of fire-blowing clowns ready to burn down the tent. The public remains confused, not realizing that just because the stakes have risen over the decades does not mean there are no standards that can ever be met. Despite lobbyists’ and lawyers’ best efforts at derailment, courts have eventually found cigarette companies and asbestos manufacturers guilty of exposing workers and consumers to deadly hazards.

Limits to Belief

If we add to this logic of doubt, which has been responsible for discrediting science and the conditions for proposing credible claims, a bit of U.S. cultural history, we may enjoy a more comprehensive picture of the unintended consequences of certain critiques of science. Citing Kurt Andersen (2017), Robert Darnton suggests that the Enlightenment’s “rational individualism interacted with the older Puritan faith in the individual’s inner knowledge of the ways of Providence, and the result was a peculiarly American conviction about everyone’s unmediated access to reality, whether in the natural world or the spiritual world. If we believe it, it must be true.” (2018, 68)

This way of thinking—unmediated experiences and beliefs, unconfirmed observations, and disregard of others’ experiences and beliefs—continues what Richard Hofstadter (1962) dubbed “anti-intellectualism.” For Americans, this predates the republic and is characterized by a hostility towards the life of the mind (which, at the time, meant chiefly religious texts), critical thinking (self-reflection and the rules of logic), and even literacy. The heart (our emotions) can more honestly lead us to the Promised Land, whether it is heaven on earth in the Americas or the Christian afterlife; any textual interference or reflective pondering is necessarily an impediment, one to be suspicious of and avoided.

This lethal combination of the life of the heart and righteous individualism brings about general ignorance and what psychologists call “confirmation bias” (the tendency to endorse what we already believe to be true, regardless of countervailing evidence). The critique of science, along this trajectory, can be but one of many so-called critiques of anything said or proven by anyone whose ideology we do not endorse. But is this even critique?

Adorno would find this a charade, a pretense that poses as a critique but in reality is a simple dismissal without intellectual engagement, a dogmatic refusal to listen and observe. He definitely would be horrified by Stephen Colbert’s oft-quoted quip on “truthiness” as “the conviction that what you feel to be true must be true.” Even those who resurrect Daniel Patrick Moynihan’s phrase, “You are entitled to your own opinion, but not to your own facts,” quietly admit that his admonishment is ignored by media more popular than informed.

On Responsible Critique

But surely there is merit to responsible critiques of science. Weren’t many of these critiques meant to dethrone the unparalleled authority claimed in the name of science, as Fuller admits all along? Wasn’t Lyotard (and Marx before him), for example, correct in pointing out the conflation of power and money in the scientific vortex that could legitimate whatever profit-maximizers desire? In other words, should scientific discourse be put on par with other discourses? Whose credibility ought to be challenged, and whose truth claims deserve scrutiny? Can we privilege or distinguish science if it is true, as Monya Baker has reported, that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments” (2016, 1)?

Fuller remains silent about these important and responsible questions about the problematics (methodologically and financially) of reproducing scientific experiments. Baker’s report cites Nature‘s survey of 1,576 researchers and reveals “sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Ibid.) So, if science relies on reproducibility as a cornerstone of its legitimacy (and superiority over other discourses), and if the results are so dismal, should it not be discredited?

One answer, given by Hans E. Plesser, suggests that there is a confusion between the notions of repeatability (“same team, same experimental setup”), replicability (“different team, same experimental setup”), and reproducibility (“different team, different experimental setup”). If understood in these terms, it stands to reason that one may not get the same results all the time and that this fact alone does not discredit the scientific enterprise as a whole. Nuanced distinctions take us down a scientific rabbit-hole most post-truth advocates refuse to follow. These nuances are lost on a public that demands to know the “bottom line” in brief sound bites: Is science scientific enough, or is it bunk? When can we trust it?

Trump excels at this kind of rhetorical device: repeat a falsehood often enough and people will believe it; and because individual critical faculties are not a prerequisite for citizenship, post-truth means no truth, or whatever the president says is true. Adorno’s distinction between responsible and irresponsible political critics comes into play here; but he innocently failed to anticipate the Trumpian move to conflate the political and the scientific and to pretend as if there is no distinction—methodologically and institutionally—between political and scientific discourses.

With this cultural backdrop, many critiques of science have undermined its authority and thereby lent credence to any dismissal of science (legitimately by insiders and perhaps illegitimately at times by outsiders). Sociologists and postmodernists alike forgot to put warning signs on their academic and intellectual texts: Beware of hasty generalizations! Watch out for wolves in sheep’s clothing! Don’t throw the baby out with the bathwater!

One would think such advisories unnecessary. Yet without such safeguards, internal disputes and critical investigations appear to have unintentionally discredited the entire scientific enterprise in the eyes of post-truth promoters, the Trumpists whose neoliberal spectacles filter in dollar signs and filter out pollution on the horizon. The discrediting of science has become a welcome distraction that opens the way to a radical free-market mentality, spanning from the exploitation of free speech to resource extraction to the debasement of political institutions, from courts of law to unfettered globalization. In this sense, internal (responsible) critiques of the scientific community and its internal politics unfortunately license external (irresponsible) critiques of science, the kind that obscure the original intent of responsible critiques. Post-truth claims made at the behest of corporate interests sanction a free-for-all where the concentrated power of the few silences the concerns of the many.

Indigenous-allied protestors block the entrance to an oil facility related to the Kinder-Morgan oil pipeline in Alberta.
Image by Peg Hunter via Flickr / Creative Commons

 

Part Two: The Politics of Post-Truth

Fuller begins his book about the post-truth condition that permeates the British and American landscapes with a look at our ancient Greek predecessors. According to him, “Philosophers claim to be seekers of the truth but the matter is not quite so straightforward. Another way to see philosophers is as the ultimate experts in a post-truth world” (19). This means that those historically entrusted to be the guardians of truth in fact “see ‘truth’ for what it is: the name of a brand ever in need of a product which everyone is compelled to buy. This helps to explain why philosophers are most confident appealing to ‘The Truth’ when they are trying to persuade non-philosophers, be they in courtrooms or classrooms.” (Ibid.)

Instead of being the seekers of the truth, thinkers who care not about what but how we think, philosophers are ridiculed by Fuller (himself a philosopher turned sociologist turned popularizer and public relations expert) as marketing hacks in a public relations company that promotes brands. Their serious dedication to finding the criteria by which truth is ascertained is used against them: “[I]t is not simply that philosophers disagree on which propositions are ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’.” (Ibid.)

Some would argue that the criteria by which propositions are judged to be true or false are worthy of debate, rather than the cavalier dismissal of Trumpists. With criteria in place (even if only by convention), at least we know what we are arguing about, as these criteria (even if contested) offer a starting point for critical scrutiny. And this, I maintain, is a task worth performing, especially in the age of pluralism when multiple perspectives constitute our public stage.

In addition to debasing philosophers, it seems that Fuller reserves a special place in purgatory for Socrates (and Plato) for their negative labeling of the rhetorical expertise of the sophists—“the local post-truth merchants in fourth century BC Athens.” (21) It becomes obvious that Fuller is “on their side” and that the presumed debate over truth and its practices is in fact nothing but a question of “whether its access should be free or restricted.” (Ibid.) In this neoliberal reading, it is all about money: are sophists evil because they charge for their expertise? Is Socrates a martyr and saint because he refused payment for his teaching?

Fuller admits, “Indeed, I would have us see both Plato and the Sophists as post-truth merchants, concerned more with the mix of chance and skill in the construction of truth than with the truth as such.” (Ibid.) One wonders not only if Plato receives fair treatment (reminiscent of Popper’s denigration of Plato as supporting totalitarian regimes, while sparing Socrates as a promoter of democracy), but whether calling all parties to a dispute “post-truth merchants” obliterates relevant differences. In other words, have we indeed lost the desire to find the truth, even if it can never be the whole truth and nothing but the truth?

Political Indifference to Truth

One wonders how far this goes: political discourse without any claim to truth conditions would become nothing but a marketing campaign where money and power dictate the acceptance of the message. Perhaps the intended message here is that contemporary cynicism towards political discourse has its roots in ancient Greece. Regardless, one should worry that such cynicism indirectly sanctions fascism.

Can the poor and marginalized in our society afford this kind of cynicism? For them, unlike their privileged counterparts in the political arena, claims about discrimination and exploitation, about unfair treatment and barriers to voting are true and evidence based; they are not rhetorical flourishes by clever interlocutors.

Yet Fuller would have none of this. For him, political disputes are games:

[B]oth the Sophists and Plato saw politics as a game, which is to say, a field of play involving some measure of both chance and skill. However, the Sophists saw politics primarily as a game of chance whereas Plato saw it as a game of skill. Thus, the sophistically trained client deploys skill in [the] aid of maximizing chance occurrences, which may then be converted into opportunities, while the philosopher-king uses much the same skills to minimize or counteract the workings of chance. (23)

Fuller could be channeling here twentieth-century game theory and its application in the political arena, or the notion offered by Lyotard when describing the minimal contribution we can make to scientific knowledge (where we cannot change the rules of the game but perhaps find a novel “move” to make). Indeed, if politics is deemed a game of chance, then anything goes, and it really should not matter if an incompetent candidate like Trump ends up winning the American presidency.

But is it really a question of skill and chance? Or, as some political philosophers would argue, is it not a question of the best means by which to bring about the best results for the general wellbeing of a community? The point of suggesting the figure of a philosopher-king, to be sure, was not his rhetorical skill but rather his deep commitment to rule justly, to think critically about policies, and to treat constituents with respect and fairness. Plato’s Republic, however criticized, was supposed to be about justice, not expediency; it is an exploration of the rule of law and wisdom, not a manual of manipulation. If the recent presidential election in the US taught us anything, it’s that we should be wary of political gamesmanship and focus on experience and knowledge, vision and wisdom.

Out-Gaming Expertise Itself

Fuller would have none of this, either. It seems that there is virtue in being a “post-truther,” someone who can easily switch between knowledge games, unlike the “truther” whose aim is to “strengthen the distinction by making it harder to switch between knowledge games.” (34) In the post-truth realm, then, knowledge claims are lumped into games that can be played at will, that can be substituted when convenient, without a hint of the danger such capricious game-switching might engender.

It’s one thing to challenge a scientific hypothesis about astronomy because the evidence is still unclear (as Stephen Hawking did in regard to black holes) and quite another to compare it to astrology (and give an equal hearing to horoscope and Tarot card readers as to physicists). Though we are far from the Demarcation Problem (between science and pseudo-science) of the last century, this does not mean that there is no difference at all between different discourses and their empirical bases (or that the problem itself isn’t worthy of reconsideration in the age of Fuller and Trump).

On the contrary, it’s because we assume difference between discourses (gray as they may be) that we can move on to figure out on what basis our claims can and should rest. The danger, as we see in the political logic of the Trump administration, is that friends become foes (European Union) and foes are admired (North Korea and Russia). Game-switching in this context can lead to a nuclear war.

In Fuller’s hands, though, something else is at work. Speaking of contemporary political circumstances in the UK and the US, he says: “After all, the people who tend to be demonized as ‘post-truth’ – from Brexiteers to Trumpists – have largely managed to outflank the experts at their own game, even if they have yet to succeed in dominating the entire field of play.” (39) Fuller’s celebratory tone here may carry either a slight warning, in his use of “yet” before the success “in dominating the entire field of play,” or a prediction that this is indeed what is about to happen soon enough.

The neoliberal bottom-line surfaces in this assessment: he who wins must be right, the rich must be smart, and more perniciously, the appeal to truth is beside the point. More specifically, Fuller continues:

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins. They are talking about worlds that could have been and still could be—the stuff of modal power. (Ibid.)

By now one should be terrified. This is a strong endorsement of lying as a matter of course, as a way to distract from the details (and empirical bases) of one “knowledge game” (because it may not be to one’s ideological liking) in favor of another that might be deemed more suitable (for financial or other purposes).

The political stakes here are too high to ignore, especially because there are good reasons why “certain parties are advantaged over others” (say, climate scientists “relative to” climate deniers who have no scientific background or expertise). One wonders what it means to talk about “alternative facts” and “alternative science” in this context: is it a means of obfuscation? Is it yet another license granted by the “social constructivist position” not to acknowledge the legal liability of cigarette companies for the addictive power of nicotine? Or the pollution of water sources in Flint, Michigan?

What Is the Mark of an Open Society?

If we corral the broader political logic at hand to the governance of the scientific community, as Fuller wishes us to do, then we hear the following:

In the past, under the inspiration of Karl Popper, I have argued that fundamental to the governance of science as an ‘open society’ is the right to be wrong (Fuller 2000a: chap. 1). This is an extension of the classical republican ideal that one is truly free to speak their mind only if they can speak with impunity. In the Athenian and the Roman republics, this was made possible by the speakers–that is, the citizens–possessing independent means which allowed them to continue with their private lives even if they are voted down in a public meeting. The underlying intuition of this social arrangement, which is the epistemological basis of Mill’s On Liberty, is that people who are free to speak their minds as individuals are most likely to reach the truth collectively. The entangled histories of politics, economics and knowledge reveal the difficulties in trying to implement this ideal. Nevertheless, in a post-truth world, this general line of thought is not merely endorsed but intensified. (109)

To be clear, Fuller not only asks for the “right to be wrong,” but also for the legitimacy of the claim that “people who are free to speak their minds as individuals are most likely to reach the truth collectively.” The first plea is reasonable enough, as humans are fallible (yes, Popper here), and the history of ideas has proven that killing heretics is counterproductive (and immoral). If the Brexit/Trump post-truth age would only usher in a greater encouragement of speculation or conjectures (Popper again), then Fuller’s book would be well-placed in the pantheon of intellectual pluralism; but if this endorsement obliterates the distinction between the silly and the informed conjecture, then we are in trouble and the ensuing cacophony will turn us all deaf.

The second claim is at best supported by the likes of James Surowiecki (2004) who has argued that no matter how uninformed a crowd of people is, collectively it can guess the correct weight of a cow on stage (his TED talk). As folk wisdom, this is charming; as public policy, this is dangerous. Would you like a random group of people deciding how to store nuclear waste, and where? Would you subject yourself to the judgment of just any collection of people to decide on taking out your appendix or performing triple-bypass surgery?

When we turn to Trump, his supporters certainly like that he speaks his mind, just as Fuller says individuals should be granted the right to speak their minds (even if in error). But speaking one’s mind can also be a proxy for saying whatever, without filters, without critical thinking, or without thinking at all (let alone consulting experts whose very existence seems to upset Fuller). Since when did “speaking your mind” turn into scientific discourse? It’s one thing to encourage dissent and offer reasoned doubt and explore second opinions (as health care professionals and insurers expect), but it’s quite another to share your feelings and demand that they count as scientific authority.

Finally, even if we endorse the view that we “collectively” reach the truth, should we not ask: by what criteria? according to what procedure? under what guidelines? Herd mentality, as Nietzsche already warned us, is problematic at best and immoral at worst. Trump rallies harken back to the fascist ones we recall from Europe prior to and during WWII. Few today would entrust the collective judgment of those enthusiasts of the Thirties to carry the day.

Unlike Fuller’s sanguine posture, I shudder at the possibility that “in a post-truth world, this general line of thought is not merely endorsed but intensified.” This is neither because I worship experts and scorn folk knowledge nor because I have low regard for individuals and their (potentially informative) opinions. Just as we warn our students that simply having an opinion is not enough, that they need to substantiate it, offer data or logical evidence for it, and even know its origins and who promoted it before they made it their own, so I worry about uninformed (even if well-meaning) individuals (and presidents) whose gut will dictate public policy.

This way of unreasonably empowering individuals is dangerous for their own well-being (no paternalism here, just common sense) as well as for the community at large (too many untrained cooks will definitely spoil the broth). For those who doubt my concern, Trump offers ample evidence: trade wars with allies and foes that cost domestic jobs (while promising to bring jobs home), nuclear-war threats that resemble a game of chicken (as if no president before him ever faced such an option), and public policy procedures thrown into complete disarray, from immigration regulations to the relaxation of emission controls (ignoring the history of these policies and their failures).

Drought and suffering in Arbajahan, Kenya in 2006.
Photo by Brendan Cox and Oxfam International via Flickr / Creative Commons

 

Part Three: Post-Truth Revisited

There is something appealing, even seductive, in the provocation to doubt the truth as rendered by the (scientific) establishment, even as we worry about sowing the seeds of falsehood in the political domain. The history of science is the story of authoritative theories debunked, cherished ideas proven wrong, and claims of certainty falsified. Why not, then, jump on the “post-truth” wagon? Would we not unleash the collective imagination to improve our knowledge and the future of humanity?

One of the lessons of postmodernism (at least as told by Lyotard) is that “post-“ does not mean “after,” but rather, “concurrently,” as another way of thinking all along: just because something is labeled “post-“, as in the case of postsecularism, it doesn’t mean that one way of thinking or practicing has replaced another; it has only displaced it, and both alternatives are still there in broad daylight. Under the rubric of postsecularism, for example, we find religious practices thriving (80% of Americans believe in God, according to a 2018 Pew Research survey), while the number of unaffiliated, atheists, and agnostics is on the rise. Religionists and secularists live side by side, as they always have, more or less agonistically.

In the case of “post-truth,” it seems that one must choose one orientation or the other, at least according to Fuller, who claims to prefer the “post-truth world” to the allegedly hierarchical and submissive world of “truth,” where the dominant establishment shoves its truths down the throats of ignorant and repressed individuals. If post-truth meant, like postsecularism, the realization that truth and provisional or putative truth coexist and are continuously being re-examined, then no conflict would be at play. If Trump’s claims were juxtaposed to those of experts in their respective domains, we would have a lively, and hopefully intelligent, debate. False claims would be debunked, reasonable doubts could be raised, and legitimate concerns might be addressed. But Trump doesn’t consult anyone except his (post-truth) gut, and that is troublesome.

A Problematic Science and Technology Studies

Fuller admits that “STS can be fairly credited with having both routinized in its own research practice and set loose on the general public–if not outright invented—at least four common post-truth tropes”:

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties. (43)

In that sense, then, Fuller agrees that the positive lessons STS wished to impart to the practice of the scientific community may have inadvertently found their way into a post-truth world that may abuse or exploit them in unintended ways. That is, something like “consensus” is challenged by STS because of how the scientific community pretends to reach it, knowing as it does that no such thing can ever fully be reached, and that when it is reached it may have been reached for the wrong reasons (leadership pressure, pharmaceutical funding of conferences and journals). But this can also go too far.

Just because consensus is difficult to reach (consensus does not mean unanimity) and is susceptible to corruption or bias does not mean that anything goes. Some experimental results are more acceptable than others, and some data are more informative than others; the struggle for agreement may take its political toll on the scientific community, but this need not result in silly ideas about cigarettes being good for our health or obesity being something to encourage from early childhood.

It seems important to focus on Fuller’s conclusion because it encapsulates my concern with his version of post-truth, a condition he endorses not only in the epistemological plight of humanity but as an elixir with which to cure humanity’s ills:

While some have decried recent post-truth campaigns that resulted in victory for Brexit and Trump as ‘anti-intellectual’ populism, they are better seen as the growth pains of a maturing democratic intelligence, to which the experts will need to adjust over time. Emphasis in this book has been given to the prospect that the lines of intellectual descent that have characterised disciplinary knowledge formation in the academy might come to be seen as the last stand of a political economy based on rent-seeking. (130)

Here, we are not only afforded a moralizing sermon about (and, it must be said, from) the privileged academic position, from whose heights all other positions are dismissed as anti-intellectual populism; we are also entreated to consider the rantings of the know-nothings of the post-truth world as the “growth pains of a maturing democratic intelligence.” Only an apologist would characterize the Trump administration as mature, democratic, or intelligent. Where’s the evidence? What would possibly warrant such generosity?

It’s one thing to challenge “disciplinary knowledge formation” within the academy, and there are no doubt cases deserving reconsideration as to the conditions under which experts should be paid and by whom (“rent-seeking”); but how can these questions about higher education and the troubled relations between the university system and the state (and with the military-industrial complex) give cover to the Trump administration? Here is Fuller’s justification:

One need not pronounce on the specific fates of, say, Brexit or Trump to see that the post-truth condition is here to stay. The post-truth disrespect for established authority is ultimately offset by its conceptual openness to previously ignored people and their ideas. They are encouraged to come to the fore and prove themselves on this expanded field of play. (Ibid.)

This, too, is a logical stretch: is disrespect for the authority of the establishment the same as, or does it logically lead to, the “conceptual” openness to previously “ignored people and their ideas”? This is not a claim on behalf of the disenfranchised. Perhaps their ideas were simply bad or outright racist or misogynist (as we see with Trump). Perhaps they were ignored because there was hope that they would change for the better, become more enlightened, not act on their white supremacist prejudices. Should we have “encouraged” explicit anti-Semitism while we were at it?

Limits to Tolerance

We tolerate ignorance because we believe in education and hope to overcome some of it; we tolerate falsehood in the name of eventual correction. But we should never tolerate offensive ideas and beliefs that are harmful to others. Once again, it is one thing to argue about black holes, and quite another to argue about whether black lives matter. It seems reasonable, as Fuller concludes, to say that “In a post-truth utopia, both truth and error are democratised.” It is also reasonable to say that “You will neither be allowed to rest on your laurels nor rest in peace. You will always be forced to have another chance.”

But the conclusion that “Perhaps this is why some people still prefer to play the game of truth, no matter who sets the rules” (130) does not follow. Those who “play the game of truth” are always vigilant about falsehoods and post-truth claims, and to say that they are simply dupes of those in power is both incorrect and dismissive. On the contrary: Socrates was searching for the truth and fought with the sophists, as Popper fought with the logical positivists and the Kuhnians, and as scientists today are searching for the truth and continue to fight superstitions and debunked pseudoscience about vaccination causing autism in young kids.

If post-truth is like postsecularism, scientific and political discourses can inform each other. When power-plays by ignoramus leaders like Trump are obvious, they could shed light on less obvious cases of big pharma leaders or those in charge of the EPA today. In these contexts, inconvenient facts and truths should prevail and the gamesmanship of post-truthers should be exposed for what motivates it.

Contact details: rsassowe@uccs.edu

* Special thanks to Dr. Denise Davis of Brown University, whose contribution to my critical thinking about this topic has been profound.


Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question; it is unlikely to help, and likely to actively harm expert credibility, in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005: 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds), involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than those monitored only by their results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather ‘denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency’ (Epstein 1996, 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to judge whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort, with the skills and resources to do this sort of monitoring, developing competence but with interests more closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to more deeply engage with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review, 111, no. 2 (2017) pp. 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018) 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005), 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author information: Jeroen Van Bouwel, Ghent University, Belgium, jeroen.vanbouwel@ugent.be; Michiel Van Oudheusden, SCK-CEN Belgian Nuclear Research Centre and University of Leuven, Belgium.

Van Bouwel, Jeroen, and Michiel Van Oudheusden. “Beyond Consensus? A Reply to Alan Irwin.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 48-53.

The pdf of the article includes specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Pq

Image from Alex Brown via Flickr

We are grateful to Alan Irwin for his constructive response, “Agreeing to Differ?” to our paper and, notwithstanding differences between his view and ours, we agree with many of his comments. In this short rejoinder, we zoom in on the three main issues Irwin raises. We also use this opportunity to highlight and further develop some of our ideas.

The three issues Irwin brings up are:

  1. How to understand consensus? Rather than, or along with, a thin ‘Anglo-Saxon’ sense of consensus as mutual agreement, one could adopt a thick conception of consensus, implying “faith in the common good and commitment to building a shared culture” (Irwin 2017). The thick sense (as enacted in Danish culture) suggests that disagreement is an integral part of consensus. Therefore, we would do well to pay more attention to conflict handling and disagreement within consensus-oriented discourse.

  2. Why are so many public participation activities consensus-driven? We should question the institutional and political contexts within which consensus-seeking arises and how these contexts urge us to turn away from conflict and disagreement. And, why do public participation activities persist at all, given all the criticism they receive from various sides?

  3. Should we not value the art of closure, of finding ways to make agreements, particularly in view of the dire state of world politics today?

These are legitimate questions and concerns, and Irwin is right to point them out. However, we believe some of the concepts discussed in our paper are helpful in addressing them. Let us start with the first issue Irwin raises, which we will link to the concept of meta-consensus.

Meta-Consensus

It is indeed helpful to draw a distinction between the thinner Anglo-Saxon sense of consensus and the thicker sense of consensus as faith in the common good, as Irwin suggests. In the latter sense, disagreement and dissensus can be seen as part of the consensus. We fully agree with Irwin that consensus and dissensus should be thought together rather than presented only in terms of contradiction and opposition. This is why we analytically distinguish a (simple) consensus from a meta-consensus.

As we sketch in our article, at the simple level, we might encounter disagreement and value pluralism, whereas at the meta-level, the meta-consensus provides a common ground for participation by explicitly or implicitly laying out the rules of engagement, the collective ways to handle conflict, and how to close or disclose discussion. The meta-consensus also impinges on the scope of issues that is opened to discussion, who may or may not participate, the stopping rules, the structure of interaction, and the rationales and procedures that guide participation in general.

We have sought to put this meta-consensus center stage by comparing and contrasting how it is enacted in, or through, two participation cases (participatory TA and the NIH consensus conference). In this way, we seek to give due attention to the common ground that enables and constrains consensus and dissensus formation, and to different institutional designs impinging on participation, without insisting on the necessity of a simple consensus or the need for closure.

Drawing attention to the meta-consensus that governs participation may help to facilitate more reflexive modes of engagement that can be opened to joint discussion rather than imposed on participants. It should also help participants to better understand when and why they are in disagreement and determine courses of action when this is the case. As such, it may contribute to “building a shared culture” by facilitating and by establishing a shared adhesion to the principles of inclusion, mutual listening, and respect (cf. Horst and Irwin 2010; Irwin 2017). However, we believe it is equally important to emphasize that there is always the possibility of dissensus, irreconciliation, and further conflict.

As we see it, entertaining this possibility is an important prerequisite or condition for genuine participation, as it creates an open and contested space in which participants can think, and engage, as adversaries. Thus, we concur with Irwin that consensus and dissensus both have a place in public participatory exercises (and in the public sphere more generally). However, when we are faced with a choice between them (as with fundamental disputes, such as those over abortion or human enhancement), we must carefully consider how, whether, and why we seek (dis)agreement. This is not to argue against consensus-seeking, but to insist on the importance of constructing and sustaining an agonistic, contestable order within participation.

Different Democratic Models of Participation

Irwin appropriately proposes to reflect more on the institutional and political contexts in which participation is organized. The question why we aim for consensus in public participation activities, as well as the broader question of why public participation activities persist at all, do indeed deserve more attention. We have not addressed these questions in our paper, but we do think being more explicit about the aims of participation is an integral part of the approach that we are advocating. In order to discuss and choose among the different democratic models of participation (aggregative, deliberative, participatory, and agonistic), it is imperative that we understand their political, economic, and social purposes or roles and make these explicit.

Similarly, we may ask how the models serve different aims within specific institutional and political contexts. Here, the notion of political culture springs to mind: in our region (Flanders) and country (Belgium), conflicts and divisions between groups are often managed through social concertation between trade unions, employers’ organizations, and governments. This collective bargaining approach both challenges and complements more participatory modes of decision making (Van Oudheusden et al. 2015). As mentioned earlier, we do not consider this issue in our paper, but it is well worth further reflection and consideration.

Irwin also wonders whether policy makers might think our concepts and models of participation miss the point as many of them see it. It is an interesting question (we wonder whether Irwin has any particular cases in mind), but one thing we can do is insist that there is no one-size-fits-all approach to participation. Different options are available, as each participation model has strengths and weaknesses. It seems important to us to attend to these strengths and weaknesses, as the models designate roles and responsibilities (e.g. by specifying who is included in participation and how), foresee how the collective should interact, and indicate what kinds of results may ensue from participatory practice. By juxtaposing them, we get a better picture of how problems, contexts, and challenges are framed and handled differently within each participatory setting. As making trade-offs between approaches is at the heart of policymaking, we invite policy makers (and decision makers more broadly) to explore these settings with us, and carefully consider how they embed multiple social and techno-scientific values and orientations.

Disclosure

As Irwin rightly notes in his reply, we do not propose one final alternative to existing practice but entertain the possibility of mobilizing more than one model of democracy in participation. This implies that we also allow for a consensual approach when it is warranted. However, in developing ideals that contrast with consensus, we open onto disclosure and a more agonistic appraisal of participation, thereby abandoning the ideal, and appeal, of final closure. In response to this move, Irwin wonders whether we should not value the art of closure, especially in these times. While we agree on the dire state of world politics, we are not convinced that replacing closure with disclosure would aggravate the present situation. Perhaps the contrary is true. What if the quest for consensus brought us to this situation in the first place?

As the political theorist Chantal Mouffe argues, in a world of consensual politics (also characterized as neoliberal, de-politicized or post-political, or in Mouffe’s words as a “politics of the center”), many voters turn to populists to voice their dissatisfaction (Mouffe 2005: 228). Populists build on this dissatisfaction, publicly presenting themselves as the only real alternative to the status quo. Thus, consensual politics contributes to hardening the opposition between those who are in (the establishment) and those who are out (the outsiders). In this antagonistic relation, the insiders carry the blame for the present state of affairs.

This tension is exacerbated through the blurring of the boundaries between the political left and right, as conflicts can no longer be expressed through the traditional democratic channels hitherto provided by party politics. Thus, well-intentioned attempts by “Third Way” thinkers, among others, to transcend left/right oppositions eventually give rise to antagonism, with populists (and other outsiders) denouncing the search for common ground. Instead, these outsiders seek to conquer more ground, to annex or colonize, typically at the expense of others.

Whether one agrees with Mouffe’s analysis of recent political developments or not, it is instructive to consider her vision of radical, agonistic (rather than antagonistic) politics. Unlike antagonists, agonistic pluralists do seek some form of common ground, albeit a contested one that is negotiated politically. In this way, agonists “domesticate” antagonism, so that opposing parties confront each other as adversaries who respect the right to differ, rather than as enemies who seek to obliterate one another. Thus, an agonistic democracy enables a confrontation among adversaries – for instance, among liberal-conservative, social-democratic, neo-liberal and radical-democratic factions. A common ground (or meta-consensus) is established between these adversaries through party identification around clearly differentiated positions and by offering citizens a choice between political alternatives.

To reiterate, antagonistic democracy is characterized by the lack of a shared contested symbolic space (in other words, a meta-consensus) and the lack of agonistic channels through which grievances can be legitimately expressed. This lack emerges when there is too much consensus and consensus-seeking, as is arguably now the case in many (but not all) Western democracies. We therefore need to be explicit about the many aspects and different possible democratic models of participation. Rather than emphasize the need for more consensus and for closure, we would do well to engage with the notions of dissensus and disclosure.

This, in our mind, seems a more fruitful avenue for sorting out various political problems in the long run than clinging to the ideal of consensus and consensus-seeking. Disclosure keeps the channels open. It is a form of opening joint discussion on the various models of participation, not with the aim of inciting endless debate but of making the most of them by reflectively probing their strengths and weaknesses in specific situations and contexts. Rather than aiming for closure beyond plurality, it urges us to articulate what is at stake, for whom, and why, and what types of learning emerge in and through participation. It should also increase our understanding of what “game” – participatory model – we are enacting.

Beyond Consensus?

At the end of his response, Irwin raises the very pertinent question as to whether we need more disclosure now around climate change. For Irwin, “certain consensual ideals seem more important” (Irwin 2017). There are many aspects to Irwin’s big question, but let us pick out a couple and start sketching an answer.

First, calling for a consensual approach or a consensus regarding climate change risks backfiring. A demand for consensus in science may lead to more doubt-mongering (cf. Oreskes and Conway 2010), not so much because of disagreement among scientists as because of external pressure from lobby or pressure groups that gain from manufacturing controversy (e.g. industry players and environmental NGOs). A lack of scientific consensus (within a framework that emphasizes the importance of achieving one) might, in turn, be used by politicians to undercut or criticize science, or policies based on scientific evidence and consensus. Even the slightest doubt about a claimed consensus may erode public trust in climate science and scientists, as was the case with Climategate in 2009.

Second, the demand for consensus in science might also set unrealistically high expectations for scientists (neglecting constraints on all sides, such as lack of time, scientific pluralism, and so on) and suggest that dissent in science is a marker of science failing to deliver.

Third, granting too much importance to scientific consensus risks silencing legitimate dissent (e.g. controversial alternative theories), whereas dissent and controversies also drive science and innovation. (There are, as we all know, many important scientific discoveries, paradigms, and theories that were for a long time ignored or suppressed because they went against the prevailing consensus.)

We are thus led to say that seeking a consensus on climate change does not result in effective policies and policymaking. Taking to heart Irwin’s plea “to imagine the kinds of closure which might be fruitfully established,” we think it is important to ask whether closure here necessarily unfolds through consensus-seeking, and if so, how consensus is best understood. Finding ways to break the antagonism invoked by a (depoliticized) scientific consensus on climate change may ultimately be more fruitful for forging durable, long-term solutions among particular groups of actors, something that might be done by publicly disclosing the divergent agendas, stakes, and power mechanisms at play in “climate change.” (A scientific consensus does not tell us what to do about climate change anyway.)

Seen in this way, and again drawing on Mouffe, an agonistic constellation might have to be put in place, where disclosure challenges, or even breaks, the sterile opposition between outsiders and insiders. This is because disclosure requires that insiders clearly distinguish and differentiate their policies from one another, which urges them to develop real alternatives to existing problems. Ideally, these alternatives would embed a diversity of values around climate change and engender solutions that make use of the best available science without threatening a group’s core values (cf. Bolsen, Druckman & Cook 2015).

To give some quick examples: a first alternative could center on reducing emissions of carbon dioxide and other greenhouse gases by adjusting consumption patterns; a second on private, enterprise-driven geo-engineering to mitigate global warming and its effects (e.g. technology to deflect heat away from the earth’s surface); a third on making cash from carbon through emissions-trading systems; a fourth on moving to Mars; and so on.

Whichever political options are decided on, we again emphasize the importance of questioning the rationales and processes of consensus-seeking, which to our mind, are too often taken for granted. Creating a more agonistic setting might change the current stalemate around climate change (and related wicked problems), by re-imagining the relationships between insider and outsider groups, by insisting that different alternatives are articulated and heard, and by publicly disclosing the divergent agendas, stakes, and power mechanisms in the construction of problems and their solutions.

Conclusion

Thanks in large part to Alan Irwin’s thoughtful and carefully written response to our article, we are led to reflect on, and develop, the concepts of meta-consensus, disclosure, and democratic models of participation. We are also led to question the ideals of consensus and dissensus, as well as the processes that drive and sustain them, and to find meaningful and productive ways to disclose our similarities and differences. By highlighting different models of democracy and how these models are enacted in participation, we want to encourage reflection upon the different implications of participatory consensus-seeking. We hope our article and our conversation with Irwin facilitate further reflection of this kind, to the benefit of participation scholars, practitioners, and decision makers.

References

Bolsen, Toby, James Druckman, and Fay Lomax Cook. “Citizens’, Scientists’ and Policy Advisors’ Beliefs about Global Warming.” Annals of the AAPSS 658 (2015): 271-295.

Horst, Maja, and Alan Irwin. “Nations at Ease with Radical Knowledge: On Consensus, Consensusing and False Consensusness.” Social Studies of Science 40, no. 1 (2010): 105-126.

Irwin, Alan. “Agreeing to Differ? A Response to Van Bouwel and Van Oudheusden.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 11-14.

Mouffe, Chantal. “The Limits of John Rawls’ Pluralism.” Politics, Philosophy and Economics 4, no. 2 (2005): 221-231.

Oreskes, Naomi, and Erik Conway. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. London: Bloomsbury Press, 2010.

Van Bouwel, Jeroen, and Michiel Van Oudheusden. “Participation Beyond Consensus? Technology Assessments, Consensus Conferences and Democratic Modulation.” Social Epistemology 31, no. 6 (2017): 497-513.

Van Oudheusden, Michiel, Nathan Charlier, Benedikt Rosskamp, and Pierre Delvenne. “Broadening, Deepening, and Governing Innovation: Flemish Technology Assessment in Historical and Socio-Political Perspective.” Research Policy 44, no. 10 (2015): 1877-1886.

“Knowledge of Climates and Climates of Knowledge”, Amanda Machin, Zeppelin University


Image credit: Chris Cheung (Ping Foo), via flickr

The changing climate has attracted attention from numerous fields and disciplines. Part of its intrigue lies in the impossibility of boxing it into one area of knowledge and treating it with conventional methods. The entangled complex of issues that comprises climate change has disrupted and to some extent transfigured traditional linear conceptions of the connection between science and society. Queries regarding what expertise consists of, how it is communicated and the ways in which it might be incorporated into democratic processes have found no easy answers. What these questions have done is to undermine the simplistic assumption that scientists can straightforwardly impart instructions regarding not only what should be done, but also regarding what can be done to mitigate and alleviate massive environmental upheaval.

Author Information: Adam Riggio, McMaster University, adamriggio@gmail.com; Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Shortlink: http://wp.me/p1Bfg0-22f

Editor’s Note:

Adam Riggio

I’d like to talk with you about two things. One is to ask you a practical political question, and the second is to have a wider discussion about how philosophy of science and scientific practice influence each other. I’ll start with the practical political question first, because one of the first lessons in writing for the web is to headline your most sensationalistic point.