
Author Information: Brian Martin, University of Wollongong, bmartin@uow.edu.au.

Martin, Brian. “Bad Social Science.” Social Epistemology Review and Reply Collective 8, no. 3 (2019): 6-16.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-47a


People untrained in social science frameworks and methods often make assumptions, observations or conclusions about the social world.[1] For example, they might say, “President Trump is a psychopath,” thereby making a judgement about Trump’s mental state. The point here is not whether this judgement is right or wrong, but whether it is based on a careful study of Trump’s thoughts and behaviour drawing on relevant expertise.

In most cases, the claim “President Trump is a psychopath” is bad psychology, in the sense that it is a conclusion reached without the application of skills in psychological diagnosis expected among professional psychologists and psychiatrists.[2] Even a non-psychologist can recognise cruder forms of bad psychology: they lack the application of standard tools in the field, such as comparison of criteria for psychopathy with Trump’s thought and behaviour.

“Bad social science” here refers to claims about society and social relationships that fall very far short of what social scientists consider good scholarship. This might be due to using false or misleading evidence, making faulty arguments, drawing unsupported conclusions or various other severe methodological, empirical or theoretical deficiencies.

In all sorts of public commentary and private conversations, examples of bad social science are legion. Instances are so common that it may seem pointless to take note of problems with ill-informed claims. However, there is value in a more systematic examination of different sorts of everyday bad social science. Such an examination can point to what is important in doing good social science and to weaknesses in assumptions, evidence and argumentation. It can also provide insights into how to defend and promote high-quality social analysis.

Here, I illustrate several facets of bad social science found in a specific public scientific controversy: the Australian vaccination debate. It is a public debate in which many partisans make claims about social dynamics, so there is ample material for analysis. In addition, because the debate is highly polarised, involves strong emotions and is extremely rancorous, it is to be expected that many deviations from calm, rational, polite discourse would be on display.

Another reason for selecting this topic is that I have been studying the debate for many years, and indeed have been drawn into it as a “captive of controversy.”[3] Several of the types of bad social science are found on both sides of the debate. Here, I focus mainly on pro-vaccination campaigners, for reasons that will become clear.

In the following sections, I address several facets of bad social science: ad hominem attacks, not defining terms, use of limited and dubious evidence, misrepresentation, lack of reference to alternative viewpoints, lack of quality control, and drawing of unjustified conclusions. In each case, I provide examples from the Australian public vaccination debate, drawing on my experience. In a sense, selecting these topics represents an informal application of grounded theory: each of the shortcomings became evident to me through encountering numerous instances. After this, I note that there is a greater risk of deficient argumentation when defending orthodoxy.

With this background, I outline how studying bad social science can be of benefit in three ways: as a pointer to particular areas in which it is important to maintain high standards, as a toolkit for responding to attacks on social science, and as a reminder of the need to improve public understanding of social science approaches.

Ad Hominem

In the Australian vaccination debate, many partisans make adverse comments about opponents as a means of discrediting them. Social scientists recognise that ad hominem argumentation, namely attacking the person rather than dealing with what they say, is illegitimate for the purposes of making a case.

In the mid 1990s, Meryl Dorey founded the Australian Vaccination Network (AVN), which became the leading citizens’ group critical of government vaccination policy.[4] In 2009, a pro-vaccination citizens’ group called Stop the Australian Vaccination Network (SAVN) was set up with the stated aim of discrediting and shutting down the AVN.[5] SAVNers referred to Dorey with a wide range of epithets, for example “cunt.”[6]

What is interesting here is that some ad hominem attacks contain an implicit social analysis. One of them is “liar.” SAVNer Ken McLeod accused Dorey of being a liar, giving various examples.[7] However, some of these examples show only that Dorey persisted in making claims that SAVNers believed had been refuted.[8] This does not necessarily constitute lying, if lying is defined, as it often is by researchers in the area, as consciously intending to deceive.[9] To the extent that McLeod failed to relate his claims to research in the field, his application of the label “liar” constitutes bad social science.

Another term applied to vaccine critics is “babykiller.” In the Australian context, this word contains an implied social analysis, based on these premises: public questioning of vaccination policy causes some parents not to have their children vaccinated, leading to reduced vaccination rates and thence to more children dying of infectious diseases.

“Babykiller” also contains a moral judgement, namely that public critics of vaccination are culpable for the deaths of children from vaccination-preventable diseases. Few of those applying the term “babykiller” provide evidence to back up the implicit social analysis and judgement, so the label in these instances represents bad social science.

There are numerous other examples of ad hominem in the vaccination debate, on both sides. Some are primarily abuse, such as “cunt.” Others, though, contain an associated or implied social analysis, so judging their quality requires assessing whether that analysis conforms to conventions within social science.

Undefined Terms

In social science, it is normal to define key concepts, either by explicit definitions or descriptive accounts. The point is to provide clarity when the concept is used.

One of the terms used by vaccination supporters in the Australian debate is “anti-vaxxer.” Despite the ubiquity of this term in social and mass media, I have never seen it defined. This is significant because of the considerable ambiguity involved. “Anti-vaxxer” might refer to parents who refuse all vaccines for their children and themselves, parents who have their children receive some but not all recommended vaccines, parents who express reservations about vaccination, and/or campaigners who criticise vaccination policy.

The way “anti-vaxxer” is applied in practice tends to conflate these different meanings, with the implication that any criticism of vaccination puts you in the camp of those who refuse all vaccines. The label “anti-vaxxer” has been applied to me even though I do not have a strong view about vaccination.[10]

Because of the lack of a definition or clear meaning, the term “anti-vaxxer” is a form of ad hominem and also represents bad social science. Tellingly, few social scientists studying the vaccination issue use the term descriptively.

In their publications, social scientists may not define all the terms they use because their meanings are commonly accepted in the field. Nearly always, though, some researchers pay close attention to any widely used concept.[11] When such a concept remains ill-defined, this may be a sign of bad social science — especially when it is used as a pejorative label.

Limited and Dubious Evidence

Social scientists normally seek to provide strong evidence for their claims and restrict their claims to what the evidence can support. In public debates, this caution is often disregarded.

After SAVN was formed in 2009, one of its initial claims was that the AVN believed in a global conspiracy to implant mind-control chips via vaccinations. The key piece of evidence SAVNers provided to support this claim was that Meryl Dorey had given a link to the website of David Icke, who was known to have some weird beliefs, such as that the world is ruled by shape-shifting reptilian humanoids.

The weakness of this evidence should be apparent. Just because Icke has some weird beliefs does not mean every document on his website involves adherence to weird beliefs, and just because Dorey provided a link to a document does not prove she believes in everything in the document, much less subscribes to the beliefs of the owner of the website. Furthermore, Dorey denied believing in a mind-control global conspiracy.

Finally, even if Dorey had believed in this conspiracy, this does not mean other members of the AVN, or the AVN as an organisation, believed in the conspiracy. Although the evidence was exceedingly weak, several SAVNers, after I confronted them on the matter, initially refused to back down from their claims.[12]

Misrepresentation

When studying an issue, scholars assume that evidence, sources and other material should be represented fairly. For example, a quotation from an author should fairly present the author’s views, and not be taken out of context to show something different from what the author intended.

Quite a few campaigners in the Australian vaccination debate use a different approach, which might be called “gotcha.” Quotes are used to expose writers as incompetent, misguided or deluded. The views of authors are misrepresented as a means of discrediting and dismissing them.

Judy Wilyman did her PhD under my supervision and was the subject of attack for years before she graduated. On 13 January 2016, just two days after her thesis was posted online, it was the subject of a front-page story in the daily newspaper The Australian. The journalist, despite having been alerted to a convenient summary of the thesis, did not mention any of its key ideas, instead claiming that it involved a conspiracy theory. Quotes from the thesis, taken out of context, were paraded as evidence of inadequacy.

This journalistic misrepresentation of Judy’s thesis was remarkably influential. It led to a cascade of hostile commentary, with hundreds of online comments on the numerous stories in The Australian, an online petition signed by thousands of people, and calls by scientists for Judy’s PhD to be revoked. In all the furore, not a single critic of her thesis posted a fair-minded summary of its contents.[13]

Alternative Viewpoints?

In high-quality social science, it is common to defend a viewpoint, but considered appropriate to examine other perspectives. Indeed, when presenting a critique, it is usual to begin with a summary of the work to be criticised.

In the Australian vaccination debate, partisans do not even attempt to present the opposing side’s case: I have never seen any campaigner provide a summary of the evidence and arguments supporting the opposition’s position. Vaccination critics present evidence and arguments that cast doubt on the government’s vaccination policy, but never try to summarise the evidence and arguments supporting it. Likewise, backers of the government’s policy never try to summarise the case against it.

There are also some intermediate viewpoints, divergent from the entrenched positions in the public debate. For example, there are some commentators who support some vaccines but not all the government-recommended ones, or who support single vaccines rather than multiple vaccines. These non-standard positions are hardly ever discussed in public by pro-vaccination campaigners.[14] More commonly, they are implicitly subsumed by the label “anti-vaxxer.”

To find summaries of arguments and evidence on both sides, it is necessary to turn to work by social scientists, and then only the few of them studying the debate without arguing for one side or the other.[15]

Quality Control

When making a claim, it makes sense to check it. Social scientists commonly do this by checking sources and/or by relying on peer review. For contemporary issues, it’s often possible to check with the person who made the claim.

In the Australian vaccination debate, there seems to be little attempt to check claims, especially when they are derogatory claims about opponents. I can speak from personal experience. Quite a number of SAVNers have made comments about my work, for example in blogs. On not a single occasion has any one of them checked with me in advance of publication.

After SAVN was formed and I started writing about free speech in the Australian vaccination debate, I sent drafts of some of my papers to SAVNers for comment. Rather than using this opportunity to send me corrections and comments, the response was to attack me, including by making complaints to my university.[16] Interestingly, the only SAVNer to have been helpful in commenting on drafts is another academic.

Another example concerns Andrew Wakefield, a gastroenterologist who was lead author of a paper in The Lancet suggesting that a possible link between the MMR triple vaccine (measles, mumps and rubella) and autism should be investigated. The paper led to a storm of media attention.

Australian pro-vaccination campaigners, and quite a few media reports, refer to Wakefield’s alleged wrongdoings, treating them as discrediting any criticism of vaccination. Incorrect statements about Wakefield are commonplace, for example that he lost his medical licence due to scientific fraud. It is a simple matter to check the facts, but apparently few do this. Even fewer take the trouble to look into the claims and counterclaims about Wakefield and qualify their statements accordingly.[17]

Drawing Conclusions

Social scientists are trained to be cautious in drawing conclusions, ensuring that they do not go beyond what can be justified from data and arguments. In addition, it is standard to include a discussion of limitations. This sort of caution is often absent in public debates.

SAVNers have claimed great success in their campaign against the AVN, giving evidence that, for example, their efforts have prevented AVN talks from being held and reduced media coverage of vaccine critics. However, although AVN operations have undoubtedly been hampered, this does not necessarily show that vaccination rates have increased or, more importantly, that public health has benefited.[18]

Defending Orthodoxy

Many social scientists undertake research in controversial areas. Some support the dominant views, some support an unorthodox position and quite a few try not to take a stand. There is no inherent problem in supporting the orthodox position, but doing so brings greater risks to the quality of research.

Many SAVNers assume that vaccination is a scientific issue and that only people with scientific credentials, for example degrees or publications in virology or epidemiology, have any credibility. This was apparent in an article by philosopher Patrick Stokes entitled “No, you’re not entitled to your opinion” that received high praise from SAVNers.[19] It was also apparent in the attack on Judy Wilyman, whose PhD was criticised because it was not in a scientific field, and because she analysed scientific claims without being a scientist. The claim that only scientists can validly criticise vaccination is easily countered.[20] The problem for SAVNers is that they are less likely to question assumptions precisely because they support the dominant viewpoint.

There is a fascinating aspect to campaigners supporting orthodoxy: they themselves frequently make claims about vaccination although they are not scientists with relevant qualifications. They do not apply their own strictures about necessary expertise to themselves. This can be explained as deriving from “honour by association,” a process parallel to guilt by association but less noticed because it is so common. In honour by association, a person gains or assumes greater credibility by being associated with a prestigious person, group or view.

Someone without special expertise who asserts a claim that supports orthodoxy implicitly takes on the mantle of the experts on the side of orthodoxy. It is only those who challenge orthodoxy who are expected to have relevant credentials. There is nothing inherently wrong with supporting the orthodox view, but it does mean there is less pressure to examine assumptions.

My initial example of bad social science was calling Donald Trump a psychopath. Suppose you said Trump has narcissistic personality disorder. This might not seem to be bad social science because it accords with the views of many psychologists. However, agreeing with orthodoxy, without accompanying deployment of expertise, does not constitute good social science any more than disagreeing with orthodoxy.

Lessons

It is all too easy to identify examples of bad social science in popular commentary. They are commonplace in political campaigning and in everyday conversations.

Being attuned to common violations of good practice has three potential benefits: as a useful reminder to maintain high standards; as a toolkit for responding to attacks on social science; and as a guide to encouraging greater public awareness of social scientific thinking and methods.

Bad Social Science as a Reminder to Maintain High Standards

Most of the kinds of bad social science prevalent in the Australian vaccination debate seldom receive extended attention in the social science literature. For example, the widely used and cited textbook Social Research Methods does not even mention ad hominem, presumably because avoiding it is so basic that it need not be discussed.

The book does, however, describe five common errors in everyday thinking that social scientists should avoid: overgeneralisation, selective observation, premature closure, the halo effect and false consensus.[21] Some of these overlap with the shortcomings I’ve observed in the Australian vaccination debate. For example, the halo effect, in which prestigious sources are given more credibility, has affinities with honour by association.

The textbook The Craft of Research likewise does not mention ad hominem. In a final brief section on the ethics of research, there are a couple of points that can be applied to the vaccination debate. For example, ethical researchers “do not caricature or distort opposing views.” Another recommendation is that “When you acknowledge your readers’ alternative views, including their strongest objections and reservations,” you move towards more reliable knowledge and honour readers’ dignity.[22] Compared with the careful exposition of research methods in this and other texts, the shortcomings in public debates are seemingly so basic and obvious as to not warrant extended discussion.

No doubt many social scientists could point to the work of others in the field — or even their own — as failing to meet the highest standards. Looking at examples of bad social science can provide a reminder of what to avoid. For example, being aware of ad hominem argumentation can help in avoiding subtle denigration of authors and instead focusing entirely on their evidence and arguments. Being reminded of confirmation bias can encourage exploration of a greater diversity of viewpoints.

Malcolm Wright and Scott Armstrong examined 50 articles that cited a method in survey-based research that Armstrong had developed years earlier. They discovered that only one of the 50 studies had reported the method correctly. They recommend that researchers send drafts of their work to authors of cited studies — especially those on which the research depends most heavily — to ensure accuracy.[23] This is not a common practice in any field of scholarship but is worth considering in the interests of improving quality.

Bad Social Science as a Toolkit for Responding to Attacks

Alan Sokal wrote an intentionally incoherent article that was published in 1996 in the cultural studies journal Social Text. Numerous commentators lauded Sokal for carrying out an audacious prank that revealed the truth about cultural studies, namely that it was bunk. These commentators had not carried out relevant studies themselves, nor were most of them familiar with the field of cultural studies, including its frameworks, objects of study, methods of analysis, conclusions and exemplary pieces of scholarship.

To the extent that these commentators were uninformed about cultural studies yet willing to praise Sokal for his hoax, they were involved in a sort of bad social science. Perhaps they supported Sokal’s hoax because it agreed with their preconceived ideas, though investigation would be needed to assess this hypothesis.

Most responses to the hoax took a defensive line, for example arguing that Sokal’s conclusions were not justified. Only a few argued that interpreting the hoax as showing the vacuity of cultural studies was itself poor social science.[24] Sokal himself said it was inappropriate to draw general conclusions about cultural studies from the hoax,[25] so ironically it would have been possible to respond to attackers by quoting Sokal.

When social scientists come under attack, it can be useful to examine the evidence and methods used or cited by the attackers, and to point out, as is often the case, that they fail to measure up to standards in the field.

Encouraging Greater Public Awareness of Social Science Thinking and Methods

It is easy to communicate with like-minded scholars and commiserate about the ignorance of those who misunderstand or wilfully misrepresent social science. More challenging is to pay close attention to the characteristic ways in which people make assumptions and reason about the social world and how these ways often fall far short of the standards expected in scholarly circles.

By identifying common forms of bad social science, it may be possible to better design interventions into public discourse to encourage more rigorous thinking about evidence and argument, especially to counter spurious and ill-founded claims by partisans in public debates.

Conclusion

Social scientists, in looking at research contributions, usually focus on what is high quality: the deepest insights, the tightest arguments, the most comprehensive data, the most sophisticated analysis and the most elegant writing. This makes sense: top quality contributions offer worthwhile models to learn from and emulate.

Nevertheless, there is also a role for learning from poor quality contributions. It is instructive to look at public debates involving social issues in which people make judgements about the same sorts of matters that are investigated by social scientists, everything from criminal justice to social mores. Contributions to public debates can starkly show flaws in reasoning and the use of evidence. These flaws provide a useful reminder of things to avoid.

Observation of the Australian vaccination debate reveals several types of bad social science, including ad hominem attacks, failing to define terms, relying on dubious sources, failing to provide context, and not checking claims. The risk of succumbing to these shortcomings seems to be magnified when the orthodox viewpoint is being supported, because it is assumed to be correct and there is less likelihood of being held accountable by opponents.

There is something additional that social scientists can learn by studying contributions to public debates that have serious empirical and theoretical shortcomings. There are likely to be characteristic failures that occur repeatedly. These offer supplementary guidance for what to avoid. They also provide insight into what sort of training, for aspiring social scientists, is useful for moving from unreflective arguments to careful research.

There is also a challenge that few scholars have tackled. Given the prevalence of bad social science in many public debates, is it possible to intervene in these debates in a way that fosters greater appreciation for what is involved in good quality scholarship, and encourages campaigners to aspire to make sounder contributions?

Contact details: bmartin@uow.edu.au

References

Blume, Stuart. Immunization: How Vaccines Became Controversial. London: Reaktion Books, 2017.

Booth, Wayne C., Gregory G. Colomb, Joseph M. Williams, Joseph Bizup and William T. FitzGerald. The Craft of Research, fourth edition. Chicago: University of Chicago Press, 2016.

Collier, David, Fernando Daniel Hidalgo and Andra Olivia Maciuceanu. “Essentially contested concepts: debates and applications.” Journal of Political Ideologies, Vol. 11, No. 3, October 2006, pp. 211–246.

Ekman, Paul. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. New York: Norton, 1985.

Hilgartner, Stephen, “The Sokal affair in context,” Science, Technology, & Human Values, 22(4), Autumn 1997, pp. 506–522.

Lee, Bandy X. The Dangerous Case of Donald Trump: 27 Psychiatrists and Mental Health Experts Assess a President. New York: St. Martin’s Press, 2017.

Martin, Brian, and Florencia Peña Saint Martin. “El mobbing en la esfera pública: el fenómeno y sus características” [Public mobbing: a phenomenon and its features]. In Norma González González (ed.), Organización social del trabajo en la posmodernidad: salud mental, ambientes laborales y vida cotidiana. Guadalajara, Jalisco, México: Prometeo Editores, 2014, pp. 91–114.

Martin, Brian. “Debating vaccination: understanding the attack on the Australian Vaccination Network.” Living Wisdom, no. 8, 2011, pp. 14–40.

Martin, Brian. “On the suppression of vaccination dissent.” Science & Engineering Ethics. Vol. 21, No. 1, 2015, pp. 143–157.

Martin, Brian. “Evidence-based campaigning.” Archives of Public Health, Vol. 76, article 54, 2018, https://doi.org/10.1186/s13690-018-0302-4.

Martin, Brian. Vaccination Panic in Australia. Sparsnäs, Sweden: Irene Publishing, 2018.

McLeod, Ken. “Meryl Dorey’s trouble with the truth, part 1: how Meryl Dorey lies, obfuscates, prevaricates, exaggerates, confabulates and confuses in promoting her anti-vaccination agenda.” 2010, http://www.scribd.com/doc/47704677/Meryl-Doreys-Trouble-With-the-Truth-Part-1.

Neuman, W. Lawrence. Social Research Methods: Qualitative and Quantitative Approaches, seventh edition. Boston, MA: Pearson, 2011.

Scott, Pam, Evelleen Richards and Brian Martin. “Captives of controversy: the myth of the neutral social researcher in contemporary scientific controversies.” Science, Technology, & Human Values, Vol. 15, No. 4, Fall 1990, pp. 474–494.

Sokal, Alan D. “What the Social Text affair does and does not prove.” In Noretta Koertge (ed.), A House Built on Sand: Exposing Postmodernist Myths about Science. New York: Oxford University Press, 1998, pp. 9–22.

Stokes, Patrick. “No, you’re not entitled to your opinion,” The Conversation, 5 October 2012, https://theconversation.com/no-youre-not-entitled-to-your-opinion-9978.

Wright, Malcolm, and J. Scott Armstrong. “The ombudsman: verification of citations: fawlty towers of knowledge?” Interfaces, Vol. 38, No. 2, March–April 2008, pp. 125–132.

[1] Thanks to Meryl Dorey, Stephen Hilgartner, Larry Neuman, Alan Sokal and Malcolm Wright for valuable feedback on drafts.

[2] For informed commentary on these issues, see Bandy X. Lee, The Dangerous Case of Donald Trump: 27 Psychiatrists and Mental Health Experts Assess a President (New York: St. Martin’s Press, 2017).

[3] Pam Scott, Evelleen Richards and Brian Martin, “Captives of controversy: the myth of the neutral social researcher in contemporary scientific controversies,” Science, Technology, & Human Values, Vol. 15, No. 4, Fall 1990, pp. 474–494.

[4] The AVN, forced to change its name in 2014, became the Australian Vaccination-skeptics Network. In 2018 it voluntarily changed its name to the Australian Vaccination-risks Network.

[5] In 2014, SAVN changed its name to Stop the Australian (Anti-)Vaccination Network.

[6] Brian Martin and Florencia Peña Saint Martin. El mobbing en la esfera pública: el fenómeno y sus características [Public mobbing: a phenomenon and its features]. In Norma González González (Coordinadora), Organización social del trabajo en la posmodernidad: salud mental, ambientes laborales y vida cotidiana (Guadalajara, Jalisco, México: Prometeo Editores, 2014), pp. 91-114.

[7] Ken McLeod, “Meryl Dorey’s trouble with the truth, part 1: how Meryl Dorey lies, obfuscates, prevaricates, exaggerates, confabulates and confuses in promoting her anti-vaccination agenda,” 2010, http://www.scribd.com/doc/47704677/Meryl-Doreys-Trouble-With-the-Truth-Part-1.

[8] Brian Martin, “Debating vaccination: understanding the attack on the Australian Vaccination Network,” Living Wisdom, no. 8, 2011, pp. 14–40, at pp. 28–30.

[9] E.g., Paul Ekman, Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage (New York: Norton, 1985).

[10] On Wikipedia I am categorised as an “anti-vaccination activist,” a term that is not defined on the entry listing those in the category. See Brian Martin, “Persistent bias on Wikipedia: methods and responses,” Social Science Computer Review, Vol. 36, No. 3, June 2018, pp. 379–388.

[11] See for example David Collier, Fernando Daniel Hidalgo and Andra Olivia Maciuceanu, “Essentially contested concepts: debates and applications,” Journal of Political Ideologies, 11(3), October 2006, pp. 211–246.

[12] Brian Martin. “Caught in the vaccination wars (part 3)”, 23 October 2012, http://www.bmartin.cc/pubs/12hpi-comments.html.

[13] The only possible exception to this statement is Michael Brull, “Anti-vaccination cranks versus academic freedom,” New Matilda, 7 February 2016, who reproduced my own summary of the key points in the thesis relevant to Australian government vaccination policy. For my responses to the attack, see http://www.bmartin.cc/pubs/controversy.html – Wilyman, for example “Defending university integrity,” International Journal for Educational Integrity, Vol. 13, No. 1, 2017, pp. 1–14.

[14] Brian Martin, Vaccination Panic in Australia (Sparsnäs, Sweden: Irene Publishing, 2018), pp. 15–24.

[15] E.g., Stuart Blume, Immunization: How Vaccines Became Controversial (London: Reaktion Books, 2017).

[16] Brian Martin. “Caught in the vaccination wars”, 28 April 2011, http://www.bmartin.cc/pubs/11savn/.

[17] For my own commentary on Wakefield, see “On the suppression of vaccination dissent,” Science & Engineering Ethics, Vol. 21, No. 1, 2015, pp. 143–157.

[18] Brian Martin. Evidence-based campaigning. Archives of Public Health, Vol. 76, article 54, 2018, https://doi.org/10.1186/s13690-018-0302-4.

[19] Patrick Stokes, “No, you’re not entitled to your opinion,” The Conversation, 5 October 2012, https://theconversation.com/no-youre-not-entitled-to-your-opinion-9978.

[20] Martin, Vaccination Panic in Australia, 292–304.

[21] W. Lawrence Neuman, Social Research Methods: Qualitative and Quantitative Approaches, seventh edition (Boston, MA: Pearson, 2011), 3–5.

[22] Wayne C. Booth, Gregory G. Colomb, Joseph M. Williams, Joseph Bizup and William T. FitzGerald, The Craft of Research, fourth edition (Chicago: University of Chicago Press, 2016), 272–273.

[23] Malcolm Wright and J. Scott Armstrong, “The ombudsman: verification of citations: fawlty towers of knowledge?” Interfaces, 38 (2), March-April 2008, 125–132.

[24] For a detailed articulation of this approach, see Stephen Hilgartner, “The Sokal affair in context,” Science, Technology, & Human Values, 22(4), Autumn 1997, pp. 506–522. Hilgartner gives numerous citations to expansive interpretations of the significance of the hoax.

[25] See for example Alan D. Sokal, “What the Social Text affair does and does not prove,” in Noretta Koertge (ed.), A House Built on Sand: Exposing Postmodernist Myths about Science (New York: Oxford University Press, 1998), pp. 9–22, at p. 11: “From the mere fact of publication of my parody, I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or the cultural studies of science — much less the sociology of science — is nonsense.”

Author Information: Brian Martin, University of Wollongong, bmartin@uow.edu.au.

Martin, Brian. “Technology and Evil.” Social Epistemology Review and Reply Collective 8, no. 2 (2019): 1-14.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-466

A Russian Mil Mi-28 attack helicopter.
Image by Dmitri Terekhov via Flickr / Creative Commons

 

Humans cause immense damage to each other and to the environment. Steven James Bartlett argues that humans have an inbuilt pathology that leads to violence and ecosystem destruction that can be called evil, in a clinical rather than a religious sense. Given that technologies are human constructions, it follows that technologies can embody the same pathologies as humans. An important implication of Bartlett’s ideas is that studies of technology should be normative in opposing destructive technologies.

Introduction

Humans, individually and collectively, do a lot of terrible things to each other and to the environment. Some obvious examples are murder, torture, war, genocide and massive environmental destruction. From the perspective of an ecologist from another solar system, humans are the world’s major pestilence, spreading everywhere, enslaving and experimenting on a few species for their own ends, causing extinctions of numerous other species and destroying the environment that supports them all.

These thoughts suggest that humans, as a species, have been causing some serious problems. Of course there are many individuals and groups trying to make the world a better place, for example campaigning against war and environmental degradation, and fostering harmony and sustainability. But is it possible that by focusing on what needs to be done and on the positives in human nature, the seriousness of the dark side of human behaviour is being neglected?

Here, I address these issues by looking at studies of human evil, with a focus on a book by Steven Bartlett. With this foundation, it is possible to look at technology with a new awareness of its deep problems. This will not provide easy solutions but may give a better appreciation of the task ahead.

Background

For decades, I have been studying war, ways to challenge war, and alternatives to military systems (e.g. Martin, 1984). My special interest has been in nonviolent action as a means for addressing social problems. Along the way, this led me to read about genocide and other forms of violence. Some writing in the area refers to evil, addressed from a secular, scientific and non-moralistic perspective.

Roy Baumeister (1997), a prominent psychologist, wrote a book titled Evil: Inside Human Violence and Cruelty, which I found highly insightful. Studying the psychology of perpetrators, ranging from murderers and terrorists to killers in genocide, Baumeister concluded that most commonly they feel justified in their actions and see themselves as victims. Often they think what they’ve done is not that important. Baumeister’s sophisticated analysis aims to counter the popular perception of evil-doers as malevolent or uncaring.

Baumeister is one of a number of psychologists willing to talk about good and evil. If the word evil feels uncomfortable, then substitute “violence and cruelty,” as in the subtitle of Baumeister’s book, and the meaning is much the same. It’s also possible to approach evil from the viewpoint of brain function, as in Simon Baron-Cohen’s (2011) The Science of Evil: On Empathy and the Origins of Cruelty. There are also studies that combine psychiatric and religious perspectives, such as M. Scott Peck’s (1988) People of the Lie: The Hope for Healing Human Evil.

Another part of my background is technology studies, including being involved in the nuclear power debate, studying technological vulnerability, communication technology, and technology and euthanasia, among other topics. I married my interests in nonviolence and in technology by studying how technology could be designed and used for nonviolent struggle (Martin, 2001).

It was with this background that I encountered Steven James Bartlett’s (2005) massive book The Pathology of Man: A Study of Human Evil. Many of the issues it addresses, for example genocide and war, were familiar to me, but his perspective offered new and disturbing insights. The Pathology of Man is more in-depth and far-reaching than other studies I had encountered, and is worth bringing to wider attention.

Here, I offer an abbreviated account of Bartlett’s analysis of human evil. Then I spell out ways of applying his ideas to technology and conclude with some possible implications.

Bartlett on Evil

Steven James Bartlett is a philosopher and psychologist who for decades has studied problems in human thinking. The Pathology of Man was published in 2005 but received little attention. This may be due partly to the challenge of reading an erudite 200,000-word treatise, and partly to readers’ resistance to Bartlett’s message, for the very reasons expounded in his book.

In reviewing the history of disease theories, Bartlett points out that in previous eras a wide range of conditions were considered to be diseases, ranging from “Negro consumption” to anti-Semitism. This observation is part of his assessment of various conceptions of disease, relying on standard views about what counts as disease, while emphasising that judgements made are always relative to a framework that is value-laden.

This is a sample portion of Bartlett’s carefully laid out chain of logic and evidence for making the case that the human species is pathological, that is, characteristic of a disease. In making this case, he is speaking not metaphorically but clinically. That the human species has seldom been seen as pathological is due to humans adopting a framework that exempts them from this diagnosis, which would be embarrassing to accept, at least for those inclined to think of humans as the apotheosis of evolution.

Next stop: the concept of evil. Bartlett examines a wide range of perspectives, noting that most of them are religious in origin. In contrast, he prefers a more scientific view: “Human evil, in the restricted and specific sense in which I will use it, refers to apparently voluntary destructive behavior and attitudes that result in the general negation of health, happiness, and ultimately of life.” (p. 65) In referring to “general negation,” Bartlett is not thinking of a poor diet or personal nastiness but of bigger matters such as war, genocide and overpopulation.

Bartlett is especially interested in the psychology of evil, and canvasses the ideas of classic thinkers who have addressed this issue, including Sigmund Freud, Carl Jung, Karl Menninger, Erich Fromm and Scott Peck. This detailed survey has only a limited return: these leading thinkers have little to say about the origins of evil and what psychological needs it may serve.

So Bartlett turns to other angles, including Lewis Fry Richardson’s classic work quantifying evidence of human violence, and research on aggression by ethologists, notably Konrad Lorenz. Some insights come from this examination, including Richardson’s goal of examining human destructiveness without emotionality and Lorenz’s point that humans, unlike most other animals, have no inbuilt barriers to killing members of their own species.

Bartlett on the Psychology of Genocide

To stare the potential for human evil in the face, Bartlett undertakes a thorough assessment of evidence about genocide, seeking to find the psychological underpinning of systematic mass killings of other humans. He notes one important factor, a factor not widely discussed or even admitted: many humans gain pleasure from killing others. Two other relevant psychological processes are projection and splitting. Projection involves denying negative elements of oneself and attributing them to others, for example seeing others as dangerous, thereby providing a reason for attacking them: one’s own aggression is attributed to others.

Splitting involves dividing one’s own grandiose self-conception from the way others are thought of. “By belonging to the herd, the individual gains an inflated sense of power, emotional support, and connection. With the feeling of group-exaggerated power and puffed up personal importance comes a new awareness of one’s own identity, which is projected into the individual’s conception” of the individual’s favoured group (p. 157). As a member of a group, there are several factors that enable genocide: stereotyping, dehumanisation, euphemistic language and psychic numbing.

To provide a more vivid picture of the capacity for human evil, Bartlett examines the Holocaust, noting that it was neither the only nor the most deadly genocide, but one that, thanks to extensive documentation, provides plenty of evidence about the psychology of mass killing.

Anti-Semitism was not the preserve of the Nazis, but existed for centuries in numerous parts of the world, and indeed continues today. The long history of persistent anti-Semitism is, according to Bartlett, evidence that humans need to feel prejudice and to persecute others. But at this point there is an uncomfortable finding: most people who are anti-Semitic are psychologically normal, suggesting the possibility that what is normal can be pathological. This key point recurs in Bartlett’s forensic examination.

Prejudice and persecution do not usually bring sadness and remorse to the victimizers, but rather a sense of strengthened identity, pleasure, self-satisfaction, superiority, and power. Prejudice and persecution are Siamese twins: Together they generate a heightened and invigorated belief in the victimizers’ supremacy. The fact that prejudice and persecution benefit bigots and persecutors is often overlooked or denied. (p. 167)

Bartlett examines evidence about the psychology of several groups involved in the Holocaust: Nazi leaders, Nazi doctors, bystanders, refusers and resisters. Nazi leaders and doctors were, for the most part, normal and well-adjusted men (nearly all were men). Most of the leaders were of above-average intelligence, some had very high IQs, and many were well educated and culturally sophisticated. Cognitively they were superior, but their moral intelligence was low.

Bystanders tend to do nothing due to conformity, lack of empathy and low moral sensibility. Most Germans were bystanders to Nazi atrocities, not participating but doing nothing to oppose them.

Next are refusers, those who declined to be involved in atrocities. Contrary to usual assumptions, in Nazi Germany there were few penalties for refusing to join killings; it was just a matter of asking for a different assignment. Despite this, very few of the men called up to join killing brigades took advantage of this option. Refusers had to take some initiative, to think for themselves and resist the need to conform.

Finally, there were resisters, those who actively opposed the genocide, but even here Bartlett raises a concern, saying that in many cases resisters were driven more by anger at offenders than empathy with victims. In any case, in terms of psychology, resisters were the odd ones out: they were disengaged from the dominant ideas and values in their society and were able to be emotionally alone, without peer group support. Bartlett’s concern here meshes with research on why people join contemporary social movements: most first become involved via personal connections with current members, not because of moral outrage about the issue (Jasper, 1997).

The implication of Bartlett’s analysis of the Holocaust is that there is something wrong with humans who are psychologically normal (see also Bartlett, 2011, 2013). When those who actively resist genocide are unusual psychologically, this points to problems with the way most humans think and feel.

Another one of Bartlett’s conclusions is that most solutions that have been proposed to the problem of genocide — such as moral education, cultivating acceptance and respect, and reducing psychological projection — are vague, simplistic and impractical. They do not measure up to the challenge posed by the observed psychology of genocide.

Bartlett’s assessment of the Holocaust did not surprise me because, for one of my studies of tactics against injustice (Martin, 2007), I read a dozen books and many articles about the 1994 Rwandan genocide, in which between half a million and a million people were killed in the space of a few months. The physical differences between the Tutsi and Hutu are slight; the Hutu killers targeted both Tutsi and “moderate” Hutu. It is not widely known that Rwanda is the most Christian country in Africa, yet many of the killings occurred in churches where Tutsi had gone for protection. In many cases, people killed neighbours they had lived next to for years, or even family members. The Rwandan genocide had always sounded horrific; reading detailed accounts to obtain examples for my article, I discovered it was far worse than I had imagined (Martin, 2009).

After investigating evidence about genocide and its implications about human psychology, Bartlett turns to terrorism. Many of his assessments accord with critical terrorism studies, for example that there is no standard definition of terrorism, the fear of terrorism is disproportionate to the threat, and terrorism is “framework-relative” in the sense that calling someone a terrorist puts you in opposition to them.

Bartlett’s interest is in the psychology of terrorists. He is sceptical of the widespread assumption that there must be something wrong with them psychologically, and cites evidence that terrorists are psychologically normal. Interestingly, he notes that there are no studies comparing the psychologies of terrorists and soldiers, two groups that each use violence to serve a cause. He also notes a striking absence: in counterterrorism writing, no one has studied the sorts of people who refuse to be involved in cruelty and violence and who are resistant to appeals to in-group prejudice, which is usually called loyalty or patriotism. By assuming there is something wrong with terrorists, counterterrorism specialists are missing the possibility of learning how to deal with the problem.

Bartlett on War Psychology

Relatively few people are involved in genocide or terrorism other than by learning about them via media stories. It is another matter when it comes to war, because many people have lived through a time when their country has been at war. In this century, just think of Afghanistan, Iraq and Syria, where numerous governments have sent troops or provided military assistance.

Bartlett says there is plenty of evidence that war evokes powerful emotions among both soldiers and civilians. For some, it is the time of life when they feel most alive, whereas peacetime can seem boring and meaningless. Although killing other humans is proscribed by most moral systems, war is treated as an exception. There are psychological preconditions for organised killing, including manufacturing differences, dehumanising the enemy, nationalism, group identity and various forms of projection. Bartlett says it is also important to look at psychological factors that prevent people from trying to end wars.

Relatively few people are involved in war as combat troops or as part of the systems that support war-fighting, but an even smaller number devote serious effort to trying to end wars. Governments collectively spend hundreds of billions of dollars on their militaries but only a minuscule amount on furthering the causes of peace. This applies as well to research: there is vastly more military-sponsored or military-inspired research than peace-related research. Bartlett concludes that “war is a pathology which the great majority of human beings do not want to cure” (p. 211).

Thinking back over the major wars in the past century, in most countries it has been far easier to support war than to oppose it. Enlisting in the military is seen as patriotic whereas refusing military service, or deserting the army, is seen as treasonous. For civilians, defeating the enemy is seen as a cause for rejoicing, whereas advocating an end to war — except via victory — is a minority position.

There have been thousands of war movies: people flock to see killing on the screen, and the bad guys nearly always lose, especially in Hollywood. In contrast, the number of major films about nonviolent struggles is tiny — what else besides the 1982 film Gandhi? — and seldom do they attract a wide audience. Bartlett sums up the implications of war for human psychology:

By legitimating the moral atrocity of mass murder, war, clothed as it is in the psychologically attractive trappings of patriotism, heroism, and the ultimately good cause, is one of the main components of human evil. War, because it causes incalculable harm, because it gives men and women justification to kill and injure one another without remorse, because it suspends conscience and neutralizes compassion, because it takes the form of psychological epidemics in which dehumanization, cruelty, and hatred are given unrestrained freedom, and because it is a source of profound human gratification and meaning—because of these things, war is not only a pathology, but is one of the most evident expressions of human evil. (p. 225)

The Obedient Parasite

Bartlett next turns to obedience studies, discussing the famous research by Stanley Milgram (1974). However, he notes that such studies shouldn’t even be needed: the evidence of human behaviour during war and genocide is enough to show that most humans are obedient to authority, even when the authority instructs them to harm others.

Another relevant emotion is hatred. Although hating is a widespread phenomenon — most recently evident in the phenomenon of online harassment (Citron, 2014) — Bartlett notes that psychologists and psychiatrists have given this emotion little attention. Hatred serves several functions, including providing a cause, overcoming the fear of death, and, in groups, helping build a sense of community.

Many people recognise that humans are destroying the ecological web that supports their own lives and those of numerous other species. Bartlett goes one step further, exploring the field of parasitology. Examining definitions and features of parasites, he concludes that, according to a broad definition, humans are parasites on the environment and other species, and are destroying the host at a record rate. He sees human parasitism as being reflected in social belief systems including the “cult of motherhood,” infatuation with children, and the belief that other species exist to serve humans, a longstanding attitude enshrined in some religions.

Reading The Pathology of Man, I was tempted to counter Bartlett’s arguments by pointing to the good things that so many humans have done and are doing, such as everyday politeness, altruism, caring for the disadvantaged, and the animal liberation movement. Bartlett could counter by noting it would be unwise to pay no attention to disease symptoms just because your body has many healthy parts. If there is a pathology inherent in the human species, it should not be ignored, but instead addressed face to face.

Remington 1858 Model Navy .36 Cap and Ball Revolver.
Image by Chuck Coker via Flickr / Creative Commons

 

Technologies of Political Control

Bartlett’s analysis of human evil, including that violence and cruelty are perpetrated mostly by people who are psychologically normal and that many humans obtain pleasure out of violence against other humans, can be applied to technology. The aim in doing this is not to demonise particular types or uses of technology but to explore technological systems from a different angle in the hope of providing insights that are less salient from other perspectives.

Consider “technologies of political control,” most commonly used by governments against their own people (Ackroyd et al., 1974; Wright, 1998). These technologies include tools of torture and execution including electroshock batons, thumb cuffs, restraint chairs, leg shackles, stun grenades and gallows. They include technologies used against crowds such as convulsants and infrasound weapons (Omega Foundation, 2000). They include specially designed surveillance equipment.

In this discussion, “technology” refers not just to artefacts but also to the social arrangements surrounding these artefacts, including design, manufacture, and contexts of use. To refer to “technologies of political control” is to invoke this wider context: an artefact on its own may seem innocuous but still be implicated in systems of repression. Repression here refers to force used against humans for the purposes of harm, punishment or social control.

Torture has a long history. It must be considered a prime example of human evil. Few species intentionally inflict pain and suffering on other members of their own species. Among humans, torture is now officially renounced by every government in the world, but it still takes place in many countries, for example in China, Egypt and Afghanistan, as documented by Amnesty International. Torture also takes place in many conventional prisons, for example via solitary confinement.

To support torture and repression, there is an associated industry. Scientists design new ways to inflict pain and suffering, using drugs, loud noises, disorienting lights, sensory deprivation and other means. The tools for delivering these methods are constructed in factories and the products marketed around the world, especially to buyers seeking means to control and harm others. Periodically, “security fairs” are held in which companies selling repression technologies tout their products to potential buyers.

The technology of repression does not have a high profile, but it is a significant industry, involving tens of billions of dollars in annual sales. It is a prime cause of human suffering. So what are people doing about it?

Those directly involved seem to have few moral objections. Scientists use their skills to design more sophisticated ways of interrogating, incarcerating and torturing people. Engineers design the manufacturing processes and numerous workers maintain production. Sales agents tout the technologies to purchasers. Governments facilitate this operation, making extraordinary efforts to get around attempts to control the repression trade. So here is an entire industry built around technologies that serve to control and harm defenceless humans, and it seems to be no problem to find people who are willing to participate and indeed to tenaciously defend the continuation of the industry.

In this, most of the world’s population are bystanders. Mass media pay little attention. Indeed, there are fictional dramas that legitimise torture and, more generally, the use of violence against the bad guys. Most people remain ignorant of the trade in repression technologies. For those who learn about it, few make any attempt to do something about it, for example by joining a campaign.

Finally there are a few resisters. There are groups like the Omega Research Foundation that collect information about the repression trade and organisations like Amnesty International and Campaign Against Arms Trade that campaign against it. Journalists have played an important role in exposing the trade (Gregory, 1995).

The production, trade and use of technologies of repression, especially torture technologies, provide a prime example of how technologies can be implicated in human evil. They illustrate quite a few of the features noted by Bartlett. There is no evidence that the scientists, engineers, production workers, sales agents and political allies of the industry are anything other than psychologically normal. Indeed, it is an industry organised much like any other, except devoted to producing objects used to harm humans.

Nearly all of those involved in the industry are simply operating as cogs in a large enterprise. They have abdicated responsibility for causing harm, a reflection of humans’ tendency to obey authorities. As for members of the public, the psychological process of projection provides a reassuring message: torture is only used as a last resort against enemies such as terrorists. “We” are good and “they” are bad, so what is done to them is justified.

Weapons and Tobacco

Along with the technology of repression, weapons of war are prime candidates for being understood as implicated in evil. If war is an expression of the human potential for violence, then weapons are a part of that expression. Indeed, increasing the capacity of weapons to maim, kill and destroy has long been a prime aim of militaries. So-called conventional weapons include everything from bullets and bayonets to bombs and ballistic missiles, and then there are biological, chemical and nuclear weapons.

Studying weaponry is a way of learning about the willingness of humans to use their ingenuity to harm other humans. Dum-dum bullets were designed to tumble in flight so as to cause more horrendous injuries on exiting a body. Brightly coloured land mines can be attractive to young children. Some of these weapons have been banned, while others take their place. In any case, it is reasonable to ask, what was going through the minds of those who conceived, designed, manufactured, sold and deployed such weapons?

The answer is straightforward, yet disturbing. Along the chain, individuals may have thought they were serving their country’s cause, helping defeat an enemy, or just doing their job and following orders. Indeed, it can be argued that scientific training and enculturation serve to develop scientists willing to work on assigned tasks without questioning their rationale (Schmidt, 2000).

Nuclear weapons, due to their capacity for mass destruction, have long been seen as especially bad, and there have been significant mass movements against these weapons (Wittner, 1993–2003). However, the opposition has not been all that successful: there continue to be thousands of nuclear weapons in the arsenals of eight or so militaries, and most people seldom think about them. Nuclear weapons exemplify Bartlett’s contention that most people do not do much to oppose war — even a war that would devastate the earth.

Consider something a bit different: cigarettes. Smoking brings pleasure, or at least relief from craving, to hundreds of millions of people daily, at the expense of a massive death toll (Proctor, 2011). By current projections, hundreds of millions of people will die this century from smoking-related diseases.

Today, tobacco companies are stigmatised and smoking is becoming unfashionable — but only in some countries. Globally, there are ever more smokers and ever more victims of smoking-related illnesses. Cigarettes are part of a technological system of design, production, distribution, sales and use. Though the cigarette itself is less complex than many military weapons, the same questions can be asked of everyone involved in the tobacco industry: how can they continue when the evidence of harm is so overwhelming? How could industry leaders spend decades covering up their own evidence of harm while seeking to discredit scientists and public health officials whose efforts threatened their profits?

The answers draw on the same psychological processes involved in the perpetuation of violence and cruelty in more obvious cases such as genocide, including projection and obedience. The ideology of the capitalist system plays a role too, with the legitimating myths of the beneficial effects of markets and the virtue of satisfying consumer demand.

For examining the role of technology in evil, weapons and cigarettes are easy targets for condemnation. A more challenging case is the wide variety of technologies that contribute to greenhouse gas emissions and hence to climate change, with potentially catastrophic effects for future generations and for the biosphere. The technologies involved include motor vehicles (at least those with internal combustion engines), steel and aluminium production, home heating and cooling, and the consumption of consumer goods. The energy system is implicated, at least the part of it predicated on carbon-based fuels, and there are other contributors as well, such as fertilisers and the clearing of forests.

Most of these technologies were not designed to cause harm, and those involved as producers and consumers may not have thought of their culpability for contributing to future damage to the environment and human life. Nevertheless, some individuals have greater roles and responsibilities. For example, many executives in fossil fuel companies, and politicians with the power to reset energy priorities, have done everything possible to restrain the shift to a sustainable energy economy.

Conceptualising the Technology of Evil

If technologies are implicated in evil, what is the best way to understand the connection? It could be said that an object designed and used for torture embodies evil. Embodiment seems appropriate if the primary purpose is for harm and the main use is for harm, but seldom is this sort of connection exclusive of other uses. A nuclear weapon, for example, might be used as an artwork, a museum exhibit, or a tool to thwart a giant asteroid hurtling towards earth.

Another option is to say that some technologies are “selectively useful” for harming others: they can potentially be useful for a variety of purposes but, for example, easier to use for torture than for brain surgery or keeping babies warm. To talk of selective usefulness instead of embodiment seems less essentialist, more open to multiple interpretations and uses.

Other terms are “abuse” and “misuse.” Think of a cloth covering a person’s face over which water is poured to give a simulation of drowning, used as a method of torture called waterboarding. It seems peculiar to say that the wet cloth embodies evil given that it is only the particular use that makes it a tool to cause harm to humans. “Abuse” and “misuse” have an ignominious history in the study of technology because they are often based on the assumption that technologies are inherently neutral. Nevertheless, these terms might be resurrected in speaking of the connection between technology and evil when referring to technologies that were not designed to cause harm and are seldom used for that purpose.

Consider next the role of technologies in contributing to climate change. For this, it is useful to note that most technologies have multiple uses and consequences. Oil production, for example, has various immediate environmental and health impacts. Oil, as a product, has multitudinous uses, such as heating houses, manufacturing plastics and fuelling military aircraft. The focus here is on a more general impact via the waste product carbon dioxide that contributes to global warming. In this role, it makes little sense to call oil evil in itself.

Instead, it is simply one player in a vast network of human activities that collectively are spoiling the environment and endangering future life on earth. The facilitators of evil in this case are the social and economic systems that maintain dependence on greenhouse gas sources and the psychological processes that enable groups and individuals to resist a shift to sustainable energy systems or to remain indifferent to the issue.

For climate change, and sustainability issues more generally, technologies are implicated as part of entrenched social institutions, practices and beliefs that have the potential to radically alter or destroy the conditions for human and non-human life. One way to speak of technologies in this circumstance is as partners. Another is to refer to them as actors or actants, along the lines of actor-network theory (Latour, 1987), though this gives insufficient salience to the psychological dimensions involved.

Another approach is to refer to technologies as extensions of humans. Marshall McLuhan (1964) famously described media as “extensions of man.” This description points to the way technologies expand human capabilities. Vehicles expand human capacities for movement, otherwise limited to walking and running. Information and communication technologies expand human senses of sight, hearing and speaking. Most relevantly here, weapons expand human capacities for violence, in particular killing and destruction. From this perspective, humans have developed technologies to extend a whole range of capacities, some of them immediately or indirectly harmful.

In social studies of technology, various frameworks have been used, including political economy, innovation, social shaping, cost-benefit analysis and actor-network theory. Each has advantages and disadvantages, but none of the commonly used frameworks emphasises moral evaluation or focuses on the way some technologies are designed or used for the purpose of harming humans and the environment.

Implications

The Pathology of Man is a deeply pessimistic and potentially disturbing book. Probing into the psychological foundations of violence and cruelty shows a side of human behaviour and thinking that is normally avoided. Most commentators prefer to look for signs of hope, and would finish a book such as this with suggestions for creating a better world. Bartlett, though, does not want to offer facile solutions.

Throughout the book, he notes that most people prefer not to examine the sources of human evil, and so he says that hope is actually part of the problem. By continually being hopeful and looking for happy endings, it becomes too easy to avoid looking at the diseased state of the human mind and the systems it has created.

Setting aside hope, nevertheless there are implications that can be derived from Bartlett’s analysis. Here I offer three possible messages regarding technology.

Firstly, if it makes sense to talk about human evil in a non-metaphorical sense, and to trace the origins of evil to features of human psychology, then technologies, as human creations, are necessarily implicated in evil. The implication is that a normative analysis is imperative. If evil is seen as something to be avoided or opposed, then those technologies most closely embodying evil are likewise to be avoided or opposed. This implies making judgements about technologies. In technology studies, this already occurs to some extent. However, common frameworks, such as political economy, innovation and actor-network theory, do not highlight moral evaluation.

Medical researchers do not hesitate to openly oppose disease, and in fact the overcoming of disease is an implicit foundation of research. Technology studies could more openly condemn certain technologies.

Secondly, if technology is implicated in evil, and if one of the psychological processes perpetuating evil is a lack of recognition of it and concern about it, there is a case for undertaking research that provides insights and tools for challenging the technology of evil. This has not been a theme in technology studies. Activists against torture technologies and military weaponry would be hard pressed to find useful studies or frameworks in the scholarship about technology.

One approach to the technology of evil is action research (McIntyre 2008; Touraine 1981), which involves combining learning with efforts towards social change. For example, research on the torture technology trade could involve trying various techniques to expose the trade, seeing which ones are most fruitful. This would provide insights about torture technologies not available via conventional research techniques.

Thirdly, education could usefully incorporate learning about the moral evaluation of technologies. Bartlett argues that one of the factors facilitating evil is the low moral development of most people, as revealed in the widespread complicity in or complacency about war preparation and wars, and about numerous other damaging activities.

One approach to challenging evil is to increase people’s moral capacities to recognise and act against evil. Technologies provide a convenient means to do this, because human-created objects abound in everyday life, so it can be an intriguing and informative exercise to figure out how a given object relates to killing, hatred, psychological projection and various other actions and ways of thinking involved in violence, cruelty and the destruction of the foundations of life.

No doubt there are many other ways to learn from the analysis of human evil. The most fundamental step is not to turn away but to face the possibility that there may be something deeply wrong with humans as a species, something that has made the species toxic to itself and other life forms. While it is valuable to focus on what is good about humans, to promote good it is also vital to fully grasp the size and depth of the dark side.

Acknowledgements

Thanks to Steven Bartlett, Lyn Carson, Kurtis Hagen, Kelly Moore and Steve Wright for valuable comments on drafts.

Contact details: bmartin@uow.edu.au

References

Ackroyd, Carol, Margolis, Karen, Rosenhead, Jonathan, & Shallice, Tim (1977). The technology of political control. London: Penguin.

Baron-Cohen, Simon (2011). The science of evil: On empathy and the origins of cruelty. New York: Basic Books.

Bartlett, Steven James (2005). The pathology of man: A study of human evil. Springfield, IL: Charles C. Thomas.

Bartlett, Steven James (2011). Normality does not equal mental health: The need to look elsewhere for standards of good psychological health. Santa Barbara, CA: Praeger.

Bartlett, Steven James (2013). The dilemma of abnormality. In Thomas G. Plante (Ed.), Abnormal psychology across the ages, volume 3 (pp. 1–20). Santa Barbara, CA: Praeger.

Baumeister, Roy F. (1997). Evil: Inside human violence and cruelty. New York: Freeman.

Citron, D.K. (2014). Hate crimes in cyberspace. Cambridge, MA: Harvard University Press.

Gregory, Martyn (director and producer). (1995). The torture trail [television]. UK: TVF.

Jasper, James M. (1997). The art of moral protest: Culture, biography, and creativity in social movements. Chicago: University of Chicago Press.

Latour, Bruno (1987). Science in action: How to follow scientists and engineers through society. Milton Keynes: Open University Press.

Martin, Brian (1984). Uprooting war. London: Freedom Press.

Martin, Brian (2001). Technology for nonviolent struggle. London: War Resisters’ International.

Martin, Brian (2007). Justice ignited: The dynamics of backfire. Lanham, MD: Rowman & Littlefield.

Martin, Brian (2009). Managing outrage over genocide: case study Rwanda. Global Change, Peace & Security, 21(3), 275–290.

McIntyre, Alice (2008). Participatory action research. Thousand Oaks, CA: Sage.

McLuhan, Marshall (1964). Understanding media: The extensions of man. New York: New American Library.

Milgram, Stanley (1974). Obedience to authority. New York: Harper & Row.

Omega Foundation (2000). Crowd control technologies. Luxembourg: European Parliament.

Peck, M. Scott (1988). People of the lie: The hope for healing human evil. London: Rider.

Proctor, Robert N. (2011). Golden holocaust: Origins of the cigarette catastrophe and the case for abolition. Berkeley, CA: University of California Press.

Schmidt, Jeff (2000). Disciplined minds: A critical look at salaried professionals and the soul-battering system that shapes their lives. Lanham, MD: Rowman & Littlefield.

Touraine, Alain (1981). The voice and the eye: An analysis of social movements. Cambridge: Cambridge University Press.

Wittner, Lawrence S. (1993–2003). The struggle against the bomb, 3 volumes. Stanford, CA: Stanford University Press.

Wright, Steve (1998). An appraisal of technologies of political control. Luxembourg: European Parliament.

Author Information: Steve Breyman, Rensselaer Polytechnic Institute, breyms@rpi.edu

Breyman, Steve. “The Superior Lie: A Review of The Deceptive Activist.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 36-38.

The pdf of the article includes specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Ox


Image credit: Irene Publishing

Brian Martin’s work is unique among scholars in Science and Technology Studies. He is not bashful about the sort of world he prefers, and steers his inquiries directly into hotly contested public controversies. From scientific struggles over the cause of HIV/AIDS to the theoretical best form of democracy, Martin has weighed in. Sure, many of us wear our hearts on our sleeves, but his scholarship—spread over sixteen books and hundreds of articles—has a practical, applied bent exceedingly rare among academics in any field.

The Deceptive Activist—Martin’s latest—is scrupulously documented, and an excellent example of his signature easy style. The book is highly readable, and flows smoothly. Sensibly constructed, Martin’s arguments and evidence are complex and sophisticated; there are no easy answers to be found here.

Civically Relevant Dissembling

This is not Brian Martin’s first foray into political lying (the subject of a 2014 article; access his work here). His aim this time around: “to highlight the tensions around activism, openness and honesty” (3). The stuff of the book is a veritable primer on all manner of civically relevant dissembling. Chapters 2 and 3 provide a typology of lies, from the everyday to the official. He discusses the difference between openness and honesty, and includes lies of omission. Withholding the truth may in some cases be as damaging as a bald-faced lie. I was once bound by a strictly enforced “honor code” and it carved out space for ‘socially acceptable’ lies. Martin naturally includes those “little white lies” too.

The stakes matter. Official deception is worse than individual deception because officials have more power. This includes lying by police (expressly permitted by criminal courts in the United States). While generally preferring openness and honesty, Martin allows that it is acceptable to lie to save human lives. He includes a timely discussion of “sock puppets” (people pretending to be someone else online), given a young Swede’s infiltration of fascist groups in Europe and the US.

While Martin does not directly address “fake news,” he provides an interesting and useful typology of propaganda. Martin dissects the varieties of government propaganda, explaining how politicians employ public relations specialists to twist and manipulate information conveyed to voters. The book includes a road map for uncovering official deception—devised to reduce outrage—using the notorious Nazi T4 euthanasia program as an example. We learn to be cautious about public scandals given that some are manufactured by the political enemies of the politician in question. This may be a variety of “fake news” after all. Along the way, we learn never to trust authorities when they claim not to be influenced by social movements working hard to pressure them.

We’re introduced to various sorts of self-deception, including the collective sort Martin assigns to scientists who still push the public perception of their profession as value-free, objective and dispassionate. Martin understands that his thorough cataloging of the universe of lies could easily lead some to become cynical and reject everything that comes out of the mouths of corporate chieftains and politicians. To guard against over-skepticism, he provides a manual for lie detection in Chapter 4.

It’s virtually impossible for most of us to use visual cues to detect lies (US Secret Service agents appear reasonably good at it); Martin has us instead look at a speaker’s record, and a number of other clues summarized in Table 4.1 (64). It’s a helpful list that I wish American journalists had to hand during the run-up to the US invasion of Iraq when official mendacity ran amok.

Donald Trump’s brazen disregard for truth requires no guide to expose. One need only unearth an earlier tweet or previous statement that directly contradicts the current claim, an easy task. Americans may yet again have cause to use Martin’s clues in the future should we ever return to the normal regime of lies tougher to detect. The dawning of the post-truth era in the politics of a growing number of countries does not excuse us from seriously grappling with the issues raised in the book.

Martin would have us view truth-telling as one virtue among others, and he shows how it sometimes clashes with the others. But there are times when telling the truth gets one in trouble as Martin shows with several examples where Gandhi’s truth-telling was exploited first by the British, then the Japanese (97-100). Martin conjures several scenarios where lying is superior to the truth and counsels against an absolutist position. He believes a relativist position morally superior to absolutism as it can prevent violence and other harms. His case studies (Chapter 6) end up making a good case for situational ethics and contingent morality.

Honesty and Lies

Activists ought to discuss honesty within their groups, thinks Martin. Interestingly, he compares the features for effective nonviolent action he identified in an earlier work to lying, and suggests that one may lie “nonviolently.” His examples range from the satire and provocation of The Yes Men, to the classic case of sheltering a refugee from the Nazis.

I’ve not confronted most of these same tensions around (dis)honesty in my own activism, and I don’t think many of us have. Why bother lying? The truth—defined as the overwhelming majority of the genuine, as opposed to “alternative,” facts—is on our side. This imbalance explains why we devote our time, energy and resources to civic engagement. It also explains why activists are big fans of sunshine laws and freedom of information statutes.

Martin asks whether direct action advocates should share their plans with the police, wondering whether failing to do so constitutes a lie of omission. He realizes at the same time that to do so might compromise the action in advance. The dilemma is generally not difficult to resolve. The activists have a specific goal in mind (to urge climate action, or stop a natural gas pipeline) and do not believe any means is justified to reach their end. And as with other forms of civil disobedience, participants are prepared to face the legal consequences of their action. Activists thus face the wrath of the state in either scenario, whether they divulge their plans or not. Should there be a “lie” here, it hurts no one, and those who were party to it are held responsible for it.

Martin is concerned that corporations and the state are not alone in their efforts to manage and interpret information to serve their own purposes. Exaggeration and hype are certainly issues for progressive organizations. I receive communications from social movement organizations on a daily basis that could be said to be one-sided or overblown. Activists too engage in spin doctoring. They are, after all, advocates for a cause. This does not, of course, grant them a license to lie, and they likely should sometimes tone down their “messaging.” But these normal exaggerations are about tone or still uncertain consequences (of, for example, climate change) not about the science, the “truth,” underlying the initial worry. Nevertheless, in certain relatively rare circumstances—some of special concern to Martin who has written and acted broadly and deeply on whistleblowing—veritas is at stake.

Should whistleblowers see themselves as akin to those engaged in nonviolent direct action, where the latter courageously face the fallout from their actions? Such a stance would result in dire personal and professional consequences, despite the protections in place in several countries. Whistleblowers prefer their complaints be handled through formal channels, but will go to the news media should that fail or not be a realistic option (as in the case of Chelsea Manning). Martin joins many of the rest of us in seeing the Daniel Ellsbergs and Edward Snowdens not as deceptive activists but rather as heroes for taking such grave personal risks.

The book closes with a lessons learned chapter. Martin summarizes his lessons regarding honesty and openness. He’s never preachy, looks at all sides, and is cautious in his advice. His sound advice, however, overlooks an inescapable fact all activists must face: the truth matters in public life, but who wins and who loses is determined not by right but by might.

References

Martin, Brian. The Deceptive Activist. Sparsnas, Sweden: Irene Publishing, 2017.

Nelson, Gregory. “Putting The Deceptive Activist into Conversation: A Review and a Response to Rappert.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 33-35.

Rappert, Brian. “Brian Martin’s The Deceptive Activist: A Review.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 52-55.

Author Information: Gregory Nelson, Northern Arizona University, nelsong@vt.edu

Nelson, Gregory. “Putting The Deceptive Activist into Conversation: A Review and a Response to Rappert.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 33-35.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Oe


Image credit: Irene Publishing

The Deceptive Activist
Brian Martin
Irene Publishing (Creative Commons Attribution 2.0)
168 pp.
http://www.bmartin.cc/pubs/17da/index.html

Brian Martin’s The Deceptive Activist offers a critical and timely commentary on the role and use of lying and deception in the realm of politics. According to Martin, lying and deception are as constitutive of social interactions as technologies of truth-telling. Lying and truth-telling are two sides of the same coin of communication. Instead of deprecating lying and deception as things to avoid on Kantian moral grounds, Martin makes the case that lying and deceit are quotidian, fundamental and natural to human communication.

Martin wants readers to think strategically about the role of lying and deception, using context-dependent analysis of how deception can be beneficial in certain circumstances. Martin “…aims in this book to highlight the tensions around activism, openness and honesty.”[1] The central argument of the book is that lying and deception are critical and routinely deployed tools that activists use to pursue social change. Instead of debating the moral status of deception in a zero-sum game, he asks readers to think about the role of deception by strategically analyzing the means of lying and deceit vis-à-vis the end goal of effecting political change through non-violence and harm reduction.

A Proper Forum

In Brian Rappert’s review of Brian Martin’s The Deceptive Activist, Rappert raises the critical question of the proper forum for having a discussion on a book about deception and the use of deception in society. The importance of Rappert’s call for such a forum cannot be overstated. The use of deception is a slippery slope, as its use requires an evaluation of the means deployed and the ends desired. History is rife with examples of noble attempts to pursue noble ends using means that are in the end revealed as ethically compromised and corrupting of the whole project. Rappert’s review of The Deceptive Activist lays the ground for the emergence of such a discussion. Certainly a book review cannot contain all of the careful, meticulous and robust debate needed to address lying and deception in more neutral and strategic terms. However, we can begin to use Martin’s work as an opportunity to acknowledge the pervasive role of deception, even in the circles of activists who promote justice, peace, compassion and empathy.

It would be beneficial to develop an edited volume on lying and deception in society. Science and Technology Studies offers us the ability to conceptualize lying and deception as social and political technologies deployed in the wielding of power. The nuance that Martin’s account brings is the readiness to discuss these technologies as useful tools in activist endeavors to pursue their ideals of change and justice. Martin gives readers frequent examples of how powerful actors use deception to control narratives of their activities in order to positively influence the perception of their image. For Martin the crucial work “…should be to work out when deception is necessary or valuable.”[2] He proposes criteria for evaluating when deception should be deployed, based on “harm, fairness, participation, and prefiguration.”[3] These criteria are applicable to activist decisions about when to keep a secret, leak information, plan an action, communicate confidentially, infiltrate the opposition, deploy masks at a protest, or circulate disinformation about a political opponent.

However, in a world in which deception is normalized, his criteria run the risk of ignoring how deceit, when mobilized by powerful actors, can threaten the less powerful. A means of evaluating the deployment of deception by small groups of activists that offers no way to condemn the use of deceit by the powerful to harm the less powerful leaves the reader wanting more. Martin’s criteria were developed specifically to evaluate when deception might be justified by activist groups who stand in asymmetrical power relations to the wielders of state and corporate power. The tension that emerges from Martin’s book is between the use of deception by small groups in contrast to large and highly centralized powerful state authorities. Martin explains, “By being at the apex of a bureaucratic organization or prestige system, authorities have more power and a greater ability to prevent any adverse reactions due to deceptions that serve their interests.”[4]

Deception and Defactualization

Martin attempts to negotiate this problem of recognizing deception as an important tool in activist struggles while also condemning history’s greatest abuses of deception by defining assessment criteria to evaluate the context and nuance of when deception should be used, in accordance with an ethic of minimal harm. Martin suggests “… assessments are dependent on the context. Still, there are considerable differences in the possible harms involved.” The way out of the ethical tensions that arise when those seeking to do good use the means of deception is to turn to assessing “situations according to the features of effective nonviolent action.”[5] I am not convinced that this is enough to effectively deal with the dilemmas that arise when the power of deception is harnessed even in search of seemingly good and just ends. After all, do we want to live in a world in which the ends justify the means, or the means become ends in themselves? I can think of plenty of examples in which this type of thinking bleeds.

Martin’s work calls us to reconsider the critiques of deception developed by Hannah Arendt in Crises of the Republic. Arendt writes, “In the realm of politics, where secrecy and deliberate deception have always played a significant role, self-deception is the danger par excellence; the self-deceived deceiver loses all contact with not only his audience, but also the real world, which still will catch up with him, because he can remove his mind from it but not his body.”[6] The dangerous step in the use of the means and power of deception in the pursuit of just ends lies in the corruption of those ends through defactualization.

Defactualization is Arendt’s term for the condition in which the self-deceived lose the ability to distinguish between fact and fiction. The defactualized world created by the self-deceiver engulfs them, because the self-deceiver can no longer see reality as it stands. The self-deceiver accommodates the facts to suit his or her assumptions: the process of defactualization. The actor becomes blind through his lies and can no longer distinguish true from false. Martin does not leave a critique of self-deception by the wayside, but his brief treatment of it at the end of his work forces us to find the space in which we can have a more robust and developed conversation, per Rappert’s concern.

In the post-truth world, The Deceptive Activist is an immensely powerful work that helps to propel us to critically and strategically examine deception, in our own practices, in the era of the grand master of deception: Trump. Daily we are bombarded by various deceptions through the President’s Twitter. Exposing the number of Trump’s lies from inauguration crowd size to healthcare to climate change to taxes is a tiresome and arduous task. When one lie is exposed another is already communicated. The extensive amount of lies leveraged on a daily basis deflates the power of activists to expose and reveal the lies.

In the post-truth era the spectacle of exposing lies and deceptions has become so routine that it loses meaning and becomes part of the static of public discourse on contemporary events. There is no more shock value in the exposure of lies. Lying is normalized to the point of meaninglessness. While Martin’s work offers a crucial analysis of how lying and deception are fundamental to everyday interactions, the acceptance of this reality should be constantly questioned and critically analyzed. The Deceptive Activist carefully paints a spectrum of how lying is used in everyday human relationships to reflect on the need for activists to practice critical self-analysis of the methods of deception they often deploy in their agendas to pursue change in society. Martin concludes by discussing what so concerned Hannah Arendt over 50 years ago: self-deception. This even more dangerous form of deception should be questioned. In the Trumpian age we must find the space to have discussions on deception, lying, and defactualization while resisting the temptation to self-deceive.

References

Arendt, Hannah. Crises of the Republic: Lying in Politics; Civil Disobedience; On Violence; Thoughts on Politics and Revolution. 1st ed. New York: Harcourt Brace Jovanovich, 1972.

Martin, Brian. The Deceptive Activist. Sparsnas, Sweden: Irene Publishing, 2017.

Rappert, Brian. “Brian Martin’s The Deceptive Activist: A Review.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 52-55.

[1] Brian Martin, The Deceptive Activist (Sparsnas, Sweden: Irene Publishing, 2017), 3.

[2] Ibid., 156.

[3] Ibid., 153.

[4] Ibid., 25.

[5] Ibid., 144.

[6] Hannah Arendt, Crises of the Republic: Lying in Politics; Civil Disobedience; On Violence; Thoughts on Politics and Revolution, 1st ed. (New York: Harcourt Brace Jovanovich, 1972), 36.

Author Information: Brian Rappert, University of Exeter, B.Rappert@exeter.ac.uk

Rappert, Brian. “Brian Martin’s The Deceptive Activist: A Review.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 52-55.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Ml

Image credit: Irene Publishing

The Deceptive Activist
Brian Martin
Irene Publishing (Creative Commons Attribution 2.0)
168 pp.
http://www.bmartin.cc/pubs/17da/index.html

Saying things we don’t really mean. Omitting relevant considerations. Leaking. Making the best impression. Spinning. Just adding that little tail to the story that gets the laugh. Feigning. In The Deceptive Activist, Brian Martin extends an invitation to open to the myriad of ways in which dishonesty figures within day-to-day interactions and political life. The reasons for deception are presented as manifold as its manifestations. Higher purposes. Convenience. Loyalties. Face saving. Ideologies that mark what Noam Chomsky called ‘the bounds of thinkable thought’.

Being completely frank and with no reason to do otherwise, my judgement of The Deceptive Activist is that … well … more on this later.

The kind of invitation extended by this book is one that is as sobering as it is destabilising. Its core claims are two-fold: (1) deception is commonplace and (2) this applies to you too (admit it…). As such, ‘rather than sweeping the tensions under the carpet’, Martin argues, ‘it may be better to start talking about deception and about when it can serve worthwhile purposes’.[1]

Through use of case studies and other examples, The Deceptive Activist reasons through the pros and cons of not presenting it like it is, with particular reference to political activism. As elsewhere in his work, Martin’s goal is not to definitively specify appropriate conduct. Instead, he takes it as one of skilling up readers to think through possible courses of action. Towards this end, he recounts different frameworks for helping to determine when deception might be warranted. The framework accorded the most traction is one Martin previously developed for assessing nonviolent action. Dissimulation of various kinds might be appropriate depending on whether it is standard, limited in harm, voluntary, fair, what it prefigures (do means and ends align?), whether it opens up participation, and whether it is skilfully done.

For my part, I can recall few books that explicitly encouraged readers to think about when dishonesty may be the best policy. In this the argument is bold. It is not that talk of dissimulation is rare though, even within scholarly traditions. It has a long history in the canons of Western thought. Socrates’ enthusiasm for a ‘noble lie’ in The Republic is one well-known instance. Yet, as with so many other examples in political thought, this message of dishonesty was one aimed at elites of the day, not those seeking to challenge them.[2] To note this is to signal the way the pervasiveness of deception also comes accompanied by a sense of its boundaries. It has an endpoint or an end-person to which it is pursued. It is not hard to see why. Deception unbound provides no place for anyone to stand. For this reason, talk of being deceptive often entails appeals to truth.

As The Deceptive Activist elaborates, appeals to truth can entail deception too. Take the domain of scholarship. As Martin contends with reference to biomedical research, ‘even domains where truth-telling is vital can be plagued by passions, biases and the presence of vested interests. Whenever an area develops a reputation for honesty, it is predictable that interlopers will try to benefit from a false impression that they too are honest.’[3]

Taken together though, the pervasiveness of deception, its subtleness, and the potential for it to be present where it should be least prompt a question back to The Deceptive Activist: namely, is Martin trying to, well, beguile readers himself? To put it more bluntly, perhaps too bluntly, does The Deceptive Activist entail deception?

Consider some possible grounds. There are many claims to truth presented, often substantiated through citations to scholarship. Given the argument in The Deceptive Activist, though, these are prime candidates for where we might look for finessing. Charged controversies such as the torture at Abu Ghraib, the intentions of the public relations of firms, and the rationales for the machinations of US statecraft are recounted, and recounted in a language that makes definitive claims to have grasped how authorities attempted to dupe. Have the specific glossings of the topics given, it might be asked, perhaps sacrificed complexity for the sake of advancing the overall argument of The Deceptive Activist? Have any relevant considerations that might have given a different spin to these matters been excluded? Deliberately or otherwise? Or have considerations been left out that would impact on how definitely scholarship can resolve what counts as the truth, the whole truth, and nothing but the truth? The text of The Deceptive Activist itself suggests some grounds for caution about whether it is providing facts that fit the argument. While at times unpicking factual claims for what is going on behind them, at other times it takes factual claims as a solid bedrock for knowing. While at times questioning how motives are attributed to large organisations, at other times it attributes motivations.

Given the argument in The Deceptive Activist, rather than concentrating on whether deception is taking place in some more or less subtle ways, it would seem more important to ask whether any such dissimulation would be appropriate. How though to evaluate the potential for deception? Four options are:

Martin is not deceiving in the crafting of The Deceptive Activist, and…

… this is problematic because it stands as a refutation to the thesis of the necessity and even desirability of deception.
… this is not problematic because it illustrates the high standards possible for human conduct (even if calling into question a central premise of the book).

Martin is deceiving in the crafting of The Deceptive Activist, and…

… this is not problematic so long as he did so in line with a framework such as the one for assessing nonviolent action.
… this is problematic because (a) truth-telling is vital in scholarship or (b) he is missing a trick in really getting to grips with the potential for deception.

Writing out these options prompts a pause. It seems that having a serious debate about the appropriateness of the options would painfully grate against many of the mores projected as central to scholarly and political life – like an open hand scraping along a brick wall. Now, perhaps more so than in recent times, assertions of (self-)deception figure prominently in the arsenals of rhetorical put-downs. Fake this, alternative that. Which side are you on? While The Deceptive Activist does not engage with the latest international parlance for fakery, and probably with good reason, many will likely interpret its arguments against this political context. It is a time of clashing binaries of right and wrong, not fine lines.

Which institutions then might support a discussion about the place of deception, and too the place of deception in the analysis of deception? This is a weighty matter that cannot be addressed within the limited scope of a review essay. Turning the issues on their head though, we can ask instead whether a book review would be a good place to locate such a serious debate. Reviews such as this one don’t operate in a pristine space free from conventions. Instead, reviews help to define communities (a sense of ‘we’) and communities come to learn how to interpret reviews. Within the expectations of a review, a statement that notionally reads as stinging criticism or high praise might be taken otherwise by seasoned community members.[4] Audiences may, in fact, bring a good deal of scepticism to what they read in book reviews because they judge them as a form of endorsement genre, or if not this then a place of petty one-upmanship, or a space where reviewers forward their pet ideas instead of dealing with the serious matters they are meant to be minding.[5] Perhaps it is time too to start talking about dissimulation in review genres and when it can serve worthwhile purposes.

Where and how can we have a frank discussion about a book on deception, let alone about deception itself?[6]

References

Hanegraaff, Wouter J. Esotericism and the Academy. Cambridge: Cambridge University Press, 2012.

Lamberton, Robert. “The απόρρητος θεωρία and the Roles of Secrecy in the History of Platonism.” In Secrecy and Concealment, edited by Hans G. Kippenberg and Guy G. Stroumsa, 139-152. New York: E.J. Brill, 1995.

[1] Page 4.

[2] Actually the story was more complicated. Since in the dialogues Socrates disparaged the capacity of the written word to discover truth, scholars have long questioned why Plato committed the dialogues to writing at all. One theory is that Plato wrote down only certain teachings, teachings of lesser value. Whether an ‘Unwritten Doctrine’ existed and who it was shared with have been topics of debate ever since; see Lamberton (1995) and Hanegraaff (2012).

[3] Page 58.

[4] So if you aren’t getting the joke, you aren’t getting the joke.

[5] Would it help to decode my writing or just confuse the situation further if I noted Brian Martin has been a stalwart colleague for over twenty years?

[6] My thanks to Claes-Fredrik Helgesson for the wording of this ending and comments on this review. And Brian Martin too.

Author Information: Brian Martin, University of Wollongong, bmartin@uow.edu.au

Martin, Brian. “An Experience with Vaccination Gatekeepers.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 27-33.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3fZ

Image credit: Jennifer Moo, via flickr

For those promoting vaccination, one option is censoring critics, but this could be counterproductive. The response of editors of two journals suggests that even raising this possibility is unwelcome.

Background

For several years, I have been writing about the Australian vaccination debate. My primary concern is to support free expression of views. Personally, I do not take a stand on vaccination.

The trigger for my interest was the activities of a citizens’ group named Stop the Australian Vaccination Network (SAVN), formed in 2009. SAVN’s explicit purpose was to silence and shut down a long-standing citizens’ group critical of vaccination, the Australian Vaccination Network (AVN). In several articles, I have described the methods used by SAVN including verbal abuse, harassment (especially via numerous complaints) and censorship.[1] Over decades of studying several public scientific controversies, I had never seen or heard about a campaign like SAVN’s using such sustained and diverse methods aimed at silencing a citizens’ group that was doing no more than expressing its viewpoint in public. (Campaigners on issues such as forestry who use direct action techniques such as blockades are sometimes met with violent repression.)

Prior to this, in 2007, I started supervising a PhD student, Judy Wilyman, who undertook a critical analysis of the Australian government’s vaccination policy. Judy was also active in making public comment. After the formation of SAVN, Judy came under attack. SAVNers criticised her thesis before it was finished and before they had seen it, and made complaints to the university. After Judy graduated and her thesis was posted online, a massive attack was mounted on her, her thesis, me as her supervisor and the University of Wollongong.[2] This involved prominent stories in the daily newspaper The Australian, hostile tweets and blogs, a petition and complaints, among other things.[3]

Here I report on a small spinoff experience that provides insight into thinking about the vaccination issue. Two senior Australian public health academics, David Durrheim from the University of Newcastle and Alison Jones from the University of Wollongong, wrote a commentary published in the journal Vaccine.[4] They argued that academic freedom might need to be curtailed in cases in which public health is imperilled by academic work. Their specific concern was criticism of vaccination, and they mentioned two particular cases: Judy’s PhD thesis and a course taught at the University of Toronto by Beth Landau-Halpern.

Durrheim and Jones are established scholars with long publication records in their usual areas of research. However, in writing their commentary in Vaccine they ventured into social science. As I wrote in a previous article in the SERRC, Durrheim and Jones’ commentary was based on an inadequate sample, just two cases.[5] Furthermore, in both cases they appeared to rely on newspaper articles without obtaining independent assessments of the reliability of the information. Moreover, they provided no evidence supporting the effectiveness of the measures they proposed to prevent unsound academic research and teaching on public health, nor did they examine the potential negative consequences of these measures, in particular for open inquiry. Ironically, in criticising allegedly unsound social-science teaching and research, they produced an unsound piece of social science writing.

I wrote a reply to Durrheim and Jones’ commentary and then contacted Vaccine about whether it would be suitable for submission. However, the editor-in-chief ruled that the journal would not publish replies to its published commentaries. This led me to publish my reply, along with an explanation of its context, in SERRC.[6] Beth Landau-Halpern wrote her own response.[7]

I then proposed to Vaccine to submit a commentary about the vaccination debate. The editor-in-chief asked for a summary of what I proposed. After receiving my summary, I was informed that the editor-in-chief (EiC) “has advised you can proceed to submission, however the EiC has requested a fresh viewpoint in the commentary which would add something new to the literature.” I prepared a short piece, “Should vaccination critics be silenced?,” making the case that censoring critics could be counterproductive. I submitted it through the usual online system, listing four potential referees. The managing editor told me that my submission would be handled by the editor-in-chief. Not long after, I received a form-letter rejection, a “desk reject,” including the following text:

We regret to inform you of our decision to decline your manuscript without offer of peer-review.

Vaccine receives a large number of submissions for which space constraints limit acceptance only to those with the highest potential impact within our vast readership. […]

If any specific comments on your paper are available, they are provided at the bottom of this message.

There were no comments on my submission. Normally, after submitting a proposed outline of points to be covered, I would have expected that my submission would be sent to referees, or at the very least that the editor would offer a justification for rejection without refereeing. My submission is reproduced below so that readers can judge its quality. Vaccine accepted Durrheim and Jones’ commentary three days after receipt, implying very rapid refereeing.

I next sent my commentary to the Journal of Public Health Policy. The co-editors soon wrote back declining my submission, saying “Perhaps you can find a journal with an audience for whom this material is new. If you submit it elsewhere, I suggest that you look at the attached article.” The attached one-page article argued that safety is important in vaccines, concluding “It will be far easier to achieve herd immunity when risks associated with vaccines are known to be so small that public confidence in the safety of vaccines is secure.”[8]

The co-editors’ reply perplexed me. I wrote back as follows:

I am not arguing for or against vaccination. Nor am I arguing about the benefits of herd immunity or measures taken to improve vaccination rates, the topics covered in the article you kindly sent.

My concern is about the wisdom of silencing critics, for example trying to block public talks, prevent speaking tours, shut down websites, force organisations to close and verbally attacking individuals to discourage them from making public comment. Possibly I did not spell this out clearly enough. Whether silencing critics using such methods is a good way to promote vaccination has seldom been addressed.

The co-editors responded:

We have followed vaccination policy and the problem with your comments about critics is that because the critics focus on decisions by parents and patients they strengthen the perception a person takes a vaccine to protect him or herself, rather than to protect the whole community.  You do not challenge that. Although not the focus of your submission, it gives some comfort to those who focus on protecting themselves or their children. Perhaps you can work around that problem, but your otherwise find [sic] submission does not do it.

The implication of this response is that any comment about vaccination that “gives some comfort to those who focus on protecting themselves or their children” is unwelcome. This sort of perspective, with herd immunity being an overriding concern, helps to explain the resistance of vaccination proponents to any analysis of attacks on vaccination critics.

My experience with just two journals is an inadequate basis for passing judgement about peer review and editorial decision-making concerning vaccination. However, it is compatible with there being a view that publishing anything that might be used by vaccine critics is to be avoided.

The vaccination controversy, like many other public scientific controversies, is highly polarised. Partisans on either side look for weaknesses in the positions of their opponents. It seems that even if censoring vaccination critics is counterproductive, raising this possibility is unwelcome among proponents. After all, it might give comfort to the critics.

Should Vaccination Critics Be Silenced? (submission to Vaccine and Journal of Public Health Policy)

Abstract

If vaccine critics seem to threaten public confidence in vaccination, one option is to censor them. However, given the decline in public trust in authorities, in health and elsewhere, a more viable long-term strategy is to accept open debate and build the capacity of citizens to make informed decisions.

Keywords: vaccination; critics; free speech; censorship

Ever since the earliest days of vaccination, there have been disputes about its effectiveness and safety. Today, although medical authorities almost universally endorse vaccination, opposition continues (Hobson-West, 2007). From the point of view of vaccination supporters, the question arises: what should be done about vaccine critics?

Proponents fear that if members of the public take vaccine critics too seriously, this may undermine confidence in vaccination and lead to a decline in vaccination rates and an increase in infectious disease. How to counter critics, though, is not clear, given that there are no studies systematically comparing different strategies.

One approach is simply to ignore critics, hoping that they will not have a significant impact. Another is to respectfully address concerns raised by parents and others on a case-by-case basis, depending on their level of opposition to vaccination, countering vaccine criticisms with relevant information (Danchin and Nolan, 2014; Leask et al., 2012). Then there is the option of trying to discredit and censor public vaccine critics, an approach used systematically in Australia for some years (Martin, 2015).

It may seem obvious that silencing critics is beneficial for maintaining high levels of vaccination. However, setting aside the ethics of censorship, there are several pragmatic reasons to question this strategy.

An initial problem is the lack of evidence that organized vaccine-critical groups are significant drivers of public attitudes towards vaccination. Although it seems plausible that efforts by these groups will induce more parents to decline vaccination, a different dynamic may be involved. It is possible that organized opposition is a reflection, rather than a major cause, of parental concerns, which may be triggered by other factors, for example awareness of apparent adverse reactions to vaccines or arrogant attitudes among doctors (Blume, 2006). There is some evidence for this view: a survey of members of the Australian Vaccination-skeptics Network showed that most had developed concerns about vaccination before becoming involved (Wilson, 2013).

Another problem is that trying to discredit vaccine critics can seem heavy-handed and trigger greater support for them in what is called the Streisand effect or censorship backfire (Jansen and Martin, 2015). The targets of censorship are likely to feel disgruntled, and suppression of their views provides ammunition for their claims that a cover-up is involved. When critics are attacked or silenced, some observers may conclude there is something being hidden.

Underlying the drive to censor criticism of vaccination can be a fear that members of the public cannot be relied upon to make sensible judgments based on the evidence and arguments. Instead, they must be protected from dangerous ideas and repeatedly told to trust authorities.

However, reliance on authority is a precarious basis for maintaining policy goals given evidence—though complex and contested—for a decline in respect for authorities over the past several decades in health (Shore, 2007) and other arenas (Gauchat, 2012; Inglehart, 1999). When education levels were lower and dominant institutions seldom questioned, it could be sufficient to assert authority and most people would follow. However, many authorities have been discredited in the public eye, for example politicians for lying about war-making, companies for lying about product hazards, and churches for covering up paedophilia among clergy. Although scientists and doctors remain among the more trusted groups in society, they are increasingly questioned too, with various scandals having tarnished their reputations.

In addition, the greater availability of information means far more people are educating themselves and challenging experts. This is not simply an Internet phenomenon. In the early years of the AIDS crisis in the US, activists studied research and organized to challenge officials over HIV drug policy (Epstein, 1996). Similarly, the women’s health movement challenged patriarchal orientations in the medical profession (Boston Women’s Health Book Collective, 1971). The questioning of dominant views has spread to a wide range of issues, including for example the health effects of genetically modified organisms and electromagnetic radiation.

Therefore, it is only to be expected that there will be increasing questioning of vaccination policies, especially when they are presented as a one-size-fits-all application brooking no dissent. In this context, attempts to suppress criticisms appear to be pushing against a social trend towards greater independent thinking.

Rather than continuing to rely on authority, a different approach is to encourage open discussion and to help parents and citizens to develop a more nuanced understanding of vaccination. If the evidence for vaccination is overwhelming, there should be little risk in assisting more people to understand it. The strategy behind this approach is to democratize expert knowledge about vaccines, so that uptake depends less on the authority of credentialed experts and more on the informed investigations of well-read members of the public.

Possible consequences of this approach include highlighting shortcomings in the vaccination paradigm, for example the possibility that adverse effects are more common than normally acknowledged, and considering whether childhood vaccination schedules could be modified according to individual risk factors. By being open to weaknesses in the standard recommendations and making changes in the light of concerns raised, the more important recommendations may be protected in the longer term. This would be in accord with the general argument for free speech that it enables weak ideas to be challenged and a stronger case to be formulated (Barendt, 2005).

However, such openness to constructive debate will remain elusive so long as vaccine critics are stigmatized and marginalized. While the vaccination debate remains highly polarized, it is difficult for either side to make what seem to be concessions and almost impossible for there to be an open and honest engagement with those on the other side. If this remains the case, it is easy to predict that critics will persist despite (or perhaps because of) attempts to silence them, and people’s increasing expectation for educating themselves rather than automatically deferring to authorities will continue to confound vaccination proponents.

References

Barendt, Eric. Freedom of Speech. 2nd ed. Oxford: Oxford University Press, 2005.

Blume, Stuart. “Anti-Vaccination Movements and their Interpretations.” Social Science and Medicine 62, no. 3 (2006): 628–642.

Boston Women’s Health Book Collective. Our Bodies, Ourselves. Boston: New England Free Press, 1971.

Danchin, Margie and Terry Nolan. “A Positive Approach to Parents with Concerns about Vaccination for the Family Physician.” Australian Family Physician 43, no. 10 (2014): 690–694.

Durrheim, David and Alison Jones. “Public Health and the Necessary Limits of Academic Freedom?” Vaccine 34 (2016): 2467–2468.

Epstein, Steven. Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley, CA: University of California Press, 1996.

Freeman, Phyllis. “Commentary on Vaccines.” Public Health Reports 112 (January/February 1997): 21.

Gauchat, Gordon. “Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010.” American Sociological Review 77, no. 2 (2012): 167–187.

Hobson-West, Pru. “‘Trusting Blindly Can Be the Biggest Risk of All’: Organised Resistance to Childhood Vaccination in the UK.” Sociology of Health & Illness 29 (2007): 198–215.

Inglehart, Ronald. “Postmodernization Erodes Respect for Authority, but Increases Support for Democracy.” In Critical Citizens: Global Support for Democratic Government, edited by Pippa Norris, 236-256. Oxford: Oxford University Press, 1999.

Jansen, Sue Curry and Brian Martin. “The Streisand Effect and Censorship Backfire.” International Journal of Communication 9 (2015): 656–671.

Landau-Halpern, Beth. “The Costs and Consequences of Teaching and Analyzing Alternative Medicine.” Social Epistemology Review and Reply Collective 5, no. 9 (2016): 42–45.

Leask, Julie, Paul Kinnersley, Cath Jackson, Francine Cheater, Helen Bedford, and Greg Rowles. “Communicating with Parents about Vaccination: A Framework for Health Professionals.” BMC Pediatrics 12, no. 154 (2012). http://www.biomedcentral.com/1471-2431/12/154.

Martin, Brian. “Censorship and Free Speech in Scientific Controversies.” Science and Public Policy 42, no. 3 (2015): 377–386.

Martin, Brian. “An Orchestrated Attack on a PhD Thesis.” 1 February 2016a, http://comments.bmartin.cc/2016/02/01/an-orchestrated-attack-on-a-phd-thesis/.

Martin, Brian. “Public Health and Academic Freedom.” Social Epistemology Review and Reply Collective 5, no. 6 (2016b): 44–49.

Shore, David A., ed. The Trust Crisis in Healthcare: Causes, Consequences, and Cures. Oxford: Oxford University Press, 2007.

Wilson, Trevor. A Profile of the Australian Vaccination Network 2012. Bangalow, NSW: Australian Vaccination Network, 2013.

Wilyman, Judy. “A Critical Analysis of the Australian Government’s Rationale for its Vaccination Policy.” PhD thesis, University of Wollongong, 2015. http://ro.uow.edu.au/theses/4541/.

[1] See http://www.bmartin.cc/pubs/controversy.html#vaccination for my publications and commentary on the vaccination controversy.

[2] Wilyman, “A Critical Analysis of the Australian Government’s Rationale for its Vaccination Policy.”

[3] Martin, “An Orchestrated Attack on a PhD Thesis.”

[4] Durrheim and Jones, “Public Health and the Necessary Limits of Academic Freedom.”

[5] Martin, “Public Health and Academic Freedom.”

[6] Ibid.

[7] Landau-Halpern, “The Costs and Consequences of Teaching and Analyzing Alternative Medicine.”

[8] Freeman, “Commentary on Vaccines.”

Author Information: Beth Landau-Halpern, beth.landauhalpern@gmail.com

Landau-Halpern, Beth. “The Costs and Consequences of Teaching and Analyzing Alternative Medicine.” Social Epistemology Review and Reply Collective 5, no. 9 (2016): 42-45.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3dD

Image credit: Partha S. Sahana, via flickr

Thank you for the opportunity to comment on Brian Martin’s article “Public Health and Academic Freedom,”[1] written in response to Durrheim and Jones’ “Public Health and the Necessary Limits of Academic Freedom?”[2] in which the authors argue that concerns for public health should curtail the normal operation of academic freedom. I am the Canadian mentioned in the article, the instructor of a course at the University of Toronto that the media came to call the “Anti-Vaccination Course” in a series of articles seemingly based on hysteria and ideology, rather than an informed understanding of the content of the course, the ideas presented within that course, or the place of the course within the context of the Health Studies program that offered it. Continue Reading…

Author Information: Kevin Dew, Victoria University of Wellington, kevin.dew@vuw.ac.nz

Dew, Kevin. “Public Health and the Necessary Limits of Advocacy.” Social Epistemology Review and Reply Collective 5, no. 7 (2016): 26-29.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-36e

Image credit: USAID Asia, via flickr

Whilst I was an academic member of a department of public health I gained a great deal of respect for my colleagues. For most of them there was a strong sense of social justice underlying their work and a commitment to improving the health of the population. Brian Martin makes a compelling argument decrying the poor scholarship and argumentation offered by two Australian public health academics who have misrepresented the work of one of his PhD students. Professors David Durrheim and Alison Jones (2016) lapse into the unscientific argument that if research findings are critical of public health policy then public health policy people should have the power to suppress them.  Continue Reading…

Author Information: Brian Martin, University of Wollongong, bmartin@uow.edu.au

Martin, Brian. “Public Health and Academic Freedom.” [1] Social Epistemology Review and Reply Collective 5, no. 6 (2016): 44-49.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-347

Image credit: savard.photo, via flickr

In December 2015, Judy Wilyman received her PhD from the University of Wollongong. I was her principal supervisor. On 11 January, her thesis was posted on the university’s digital repository, and soon the onslaught began. A hostile article appeared on the front page of the national newspaper The Australian (Loussikian 2016), the first of several attacking articles. As well, there were hostile blogs and tweets, a petition with more than 2000 signatures, and alteration of Wikipedia entries, among other actions.  Continue Reading…

Author Information: Brian Martin, University of Wollongong, bmartin@uow.edu.au

Martin, Brian. “Constructing and Investigating Absences in Knowledge.” Social Epistemology Review and Reply Collective 3, no. 5 (2014): 73-81.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1r2

Images of Absence

When we think of presence and absence, what mental image comes to mind? Consider the differences between absence as a body covered by clothing and absence as a field before a building is constructed. A body has a fair degree of continuity: from one encounter to the next, the body is much the same, though perhaps dressed differently. We know that a body exists, and we can imagine what it would look like. Is this a useful metaphor for the particular absence of knowledge called undone science (Frickel 2014), namely research that citizen campaigners would like to be carried out but hasn’t been? Continue Reading…