
Author Information: Manuel Padilla-Cruz, Universidad de Sevilla, mpadillacruz@us.es.

Padilla-Cruz, Manuel. “On the Pragmatic and Conversational Features of Venting: A Reply to Thorson and Baker.” Social Epistemology Review and Reply Collective 8, no. 2 (2019): 21-30.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-46B

Image by Rolf Dietrich Brecher via Flickr / Creative Commons

 

Juli Thorson and Christine Baker have recently turned the spotlight on a verbal activity which, in their view, may yield rather positive outcomes in oppressive or discriminating environments: venting. Venting, they claim, plays a significant role in fighting epistemic damage.

Although their discussion is restricted to cases in which women vent to other women who are acquainted with unfair epistemic practices in the asymmetrical and hierarchical social groups to which they belong, in “Venting as Epistemic Work” the authors contend that successful venting can make people aware of oppressive social structures, of their place in them and of possible solutions to the epistemic damage that those structures cause.

As a result, venting enables individuals to regain trust in their epistemic practices, to author knowledge and to accept their own epistemic personhood (Thorson and Baker 2019: 8).

Damage, Personhood, and Venting

Thorson and Baker’s (2019) argumentation relies on two crucial elements. The first is the notion of epistemic damage, which, analogously to Tessman’s (2001) concept of moral damage, is defined as a harm curtailing an individual’s epistemic personhood. This is in turn described as the individual’s “[…] ontological standing as a knower”, “[…] the ability to author knowledge for [oneself]” (Thorson and Baker 2019: 2), or, in Borgwald’s words, “[…] the ability to think autonomously, reflect on and evaluate one’s emotion, beliefs, desires, and to trust those judgments rather than deferring to others” (2012: 73).

Epistemic damage hampers the development of a person’s knowledge-generating practices and her self-trust in her ability to implement them (Thorson and Baker 2019: 2).[1] It is inflicted when someone cannot assert her epistemic personhood because she fears that what she says will not be taken seriously. Consequently, the victim suffers testimonial smothering (Dotson 2011), her self-trust is diminished and her epistemic personhood undermined.

The second element is a differentiation of venting from both complaining and ranting. These three verbal actions are depicted as contingent on the presence of an audience, as expressing “strong feelings” and as conveying “agitation about some state of affairs or person” (Thorson and Baker 2019: 3), but neither complaining nor ranting is believed to involve expectations of a change in the state of affairs that gave rise to them.

Complaints, the authors say, may be left unaddressed, or the solution proposed for their cause may turn out to be unsatisfactory and leave it unfixed, while ranting is “a kind of performance for someone” (Thorson and Baker 2019: pp.) where the ranter, far from engaging in conversation, simply expounds her views and expresses anger through a verbal outburst without concern for an ensuing reaction. Venting, in contrast, is portrayed as a testifying dialogical action that is typically performed, Thorson and Baker (2019: 4-5) think, in face-to-face interaction and in which the venter does have firm expectations of subsequent remedial action against a state of affairs: denied uptake, sexist comments, silencing or undermining of cognitive authority, to name but a few.

By expressing anger at (an)other individual(s) who wronged her or frustrated confusion at their actions, the venter seeks to make her audience aware of an epistemic injustice –either testimonial or hermeneutical– which negatively affects her epistemic personhood and to assert her own credibility.

Thorson and Baker (2019: 7) also distinguish two types of venting, even if these are not clear-cut and range along a continuum:

  1. Heavy-load venting, which is a lengthy, time-consuming and dramatic activity following a serious threat to epistemic personhood that increases self-distrust. It aims for recognition of credibility and reaffirmation of epistemic personhood.
  2. Maintenance venting, which is a “honing practice” requiring less epistemic work and following situations where there is “lack of uptake, dismissal, or micro-aggressions” (Thorson and Baker 2019: 7). Its goal is the reinforcement or maintenance of epistemic personhood.

Despite their valuable insights, a series of issues connected with the features that define venting and characterise its two types deserves more detailed consideration if we are to gain a fuller understanding of why venting can actually have the positive effects that the authors attribute to it.

Firstly, its ontology as a verbal action or speech act (Austin 1962; Searle 1969) needs to be ascertained in depth with a view to properly delimiting it and adequately differentiating it from other related actions. Secondly, in addition to length and goal, a further criterion should be provided so as to characterise heavy-load and maintenance venting more accurately. Addressing the first issue will help unravel what venting really is and how it is accomplished, while dealing with the second is fundamental for capturing the subtleties that individuate the two types of venting.

What follows addresses these issues from two disciplines of linguistics: pragmatics to a great extent and conversation analysis to a lesser extent. The former, which is greatly indebted to the philosophy of language, looks into, among other things, how individuals express meaning and perform a variety of actions verbally, as well as how they interpret utterances and understand meaning.

More precisely, the issues in question will be accounted for on the grounds of some postulates of Speech Act Theory (Austin 1962; Searle 1969) and contributions on complaints made from this framework. Conversation analysis, in turn, examines how individuals structure their verbal contributions with a view to transmitting meaning and how conversational structure determines interpretation. Although Thorson and Baker (2019) admit that an analysis of venting from a linguistic perspective would be fruitful, they unfortunately do not undertake one.

1) Venting as a Speech Act

Thorson and Baker (2019) take venting, complaining and ranting to be three distinct speech acts that have in common the expression of anger. To some extent this is right: there is much confusion in the literature, and researchers often consider venting and ranting the same speech act “[…] and use the terms synonymously” (Signorelli 2017: 16). However, venting and ranting could rather be regarded as sub-types or variations of a broader, more general or overarching category of speech act: complaining.

Venting and ranting satisfy, in the same way as complaining, four of the twelve criteria proposed by Searle (1975) in order to distinguish specific verbal actions: namely, those pertaining to the illocutionary point of the act, the direction of fit between the speaker’s words and the external world, the expressed psychological state and the propositional content of the utterance(s) whereby a verbal action is attempted. In other words, complaining, venting and ranting share similar features stemming from the speaker’s intentionality, the relationship between what she says and the external world, her psychological state while speaking and the core meaning or import of what she says. Complaining would then be an umbrella category subsuming both venting and ranting, which would differ from it along other dimensions.

1.1) Pragmatic and Conversational Features of Complaints

Pragmatists working within the fruitful speech act-theoretic tradition (Austin 1962; Searle 1969, 1975) have made illuminating contributions on complaints, which they have classified as expressive acts wherewith the speaker –the complainer or complainant– expresses a variety of negative feelings or emotions. This is a relevant aspect, as it unveils the act’s illocutionary point or intentionality. Such feelings or emotions include anger, irritation, wrath, frustration, disappointment, dissatisfaction, discontent, discomfort, anxiety, despair, etc.

This is another key point, this time revealing the expressed psychological state (Edmondson and House 1981; Laforest 2002; Edwards 2005). In fact, the expression of such feelings and emotions –a further important issue, linked now to the communicated propositional content (“I am angry at/disappointed by p”)– differentiates complaints from other expressive acts like complimenting, where the expressed psychological states are positive: admiration, approval, appraisal, etc. (Wolfson and Manes 1980; Manes and Wolfson 1981).

The feelings and emotions voiced by the complainer concern some state of affairs –another person’s behaviour, appearance, traits, mood, etc., an event and, evidently, some injustice, too– which is regarded as failing to meet (personal) expectations or standards, or as violating sociocultural norms. The state of affairs originating the complaint is referred to as the complainable and is assessed or appraised from the complainer’s point of view, so complaints often involve a high degree of subjectivity (Edmondson and House 1981; Boxer 1993a, 1993b; Trosborg 1995). As expressive acts, complaints lack direction of fit: they neither reflect the outer world, nor is the world affected by or adapted to what the complainer says.

However, complaints could also be considered to some extent informative or representative acts, inasmuch as the complainer may make the hearer –or complainee– aware of the unsatisfactory state of affairs, which might have gone completely unnoticed or be utterly unknown to him. If so, complaints would be hybrid acts combining the expression of psychological states and the dispensing of information. Accordingly, they could have a words-to-world direction of fit because what the complainer says matches the world, at least from her perspective.

Complaints can be subdivided in various ways. A first twofold division can be made depending on whether or not the complainable pertains to the complainee. Thus, direct complaints concern a state of affairs for which the complainee is held responsible, while indirect complaints deal with one for which responsibility lies with a third party, who may be present at the conversational exchange or absent (Edmondson and House 1981; Boxer 1993a, 1993b; Trosborg 1995).

Another twofold distinction may be made depending on whether the complainer simply voices her feelings or has further intentions. Hence, complaints are retrospective acts when she just expresses her psychological states about some recent or past state of affairs without further intentions, or prospective when she also seeks to influence the complainee and bias his (future) course of action (Márquez Reiter 2005).

In discursive, conversational terms, complaints can be made through just a single sentence that is produced as an utterance counting as the core act, or through more than one sentence and utterance, either in the same conversational turn or in different ones. Additional utterances make up pre-sequences or post-sequences, depending on their position relative to the core act, or moves, a label frequently used in the literature on conversation analysis.

Since they often lend support to the complaint by offering further details about the complainable, giving reasons for the complainer’s feelings and/or informing about her expectations, those moves work as supportive moves. A core complaint and the possible supportive moves accompanying it are often arranged in adjacency pairs along with the utterances reacting to them, whereby the complainee agrees, shows his own psychological states, elaborates on the complaint or responds to it (Cutting 2002; Sidnell 2010; Padilla Cruz 2015; Clift 2016).

1.2) Characterising Venting

Following this characterisation, venting can be said to be a type of complaining on the basis of the following features: its topic or aboutness, its target, the participation of (an)other individual(s), dialogicality, length, the newness or known nature of its subject matter, and the predominance of the expressive and representative functions or the fulfilment of an additional influential or conative function. Of these, the first three features are fundamental, while the fourth and the fifth ones may be regarded as consequences of the third feature. Whereas the sixth one facilitates differentiation between types of venting, the seventh enables recognition of intentions other than simply voicing feelings about recent or past states of affairs.

Although solely produced by one individual –the complainer or venter– venting would be an indirect form of complaining that “[…] reveals underlying perspectives [on] a given topic, situation, or individual(s)” (Signorelli 2017: 2) and engages (an)other individual(s) who must share the assessment of, perspective on and feelings about the complainable, as well as be in a position to react in a particular manner or intend to do so in the (immediate) future.

Their sharing such viewpoints and feelings may prompt participation in the discursive or conversational episode through tokens of agreement or commiseration, enquiries aimed at getting additional information about the complainable, further verbalisation of negative feelings through additional censuring, critique or irritated comments, and expression of commitment to future remedial action (Boxer 1993a).

Therefore, venting could be depicted as a dialogic phenomenon that is achieved discursively and requires conversation, to which (an)other participant(s) contribute(s). As Signorelli puts it, “[…] venting is deliberately and necessarily communal” (2017: 17) and can therefore be described as a type of “participatory genre” (2017: 16) with a specific purpose, recognisable moves and characteristic rhetorical strategies (2017: 1).

Its dependency on the contribution of some other epistemic agent(s) makes venting a cooperative action that is co-constructed through the joint endeavours of the venter and her audience. Its dialogic nature causes conversation to unfold through more than just one turn or adjacency pair, so venting episodes may be (considerably) longer variations of complaints, which may otherwise be performed by means of just one utterance or a brief sequence of utterances that is normally followed by reactions or responses.

Hence, venting would require more effort, time and verbal material, enabling the venter to elaborate on her viewpoints, clearly express her feelings, refine, revise or delve deeper into the subject matter, and/or announce or hint at her expectations. Through it the venter seeks to secure her audience’s future collaboration, which renders venting a long form of prospective complaining. In turn, the audience may show understanding, indicate their positioning as regards what is talked about and/or reveal their future intentions.

1.3) Venting and Related Actions

Venting cannot be judged to differ from complaining on the grounds of the likelihood that a solution to a problem exists or is plausible, as Thorson and Baker (2019) conjecture. Whether a solution to a problem actually exists is something external, extralinguistic. Whether the solution is worked out or sought, and ends up being administered or not, are perlocutionary effects (Austin 1962) that escape the venter’s control. Indeed, although perlocutionary effects may be intended or expected and, hence, insinuated and pursued through what is said and how it is said, whether a particular solution to a problem is actually given or not falls entirely under the audience’s control. Venting nevertheless displays pragmatic and conversational properties that single it out as a special manifestation of complaining.

On the other hand, venting is also distinct from ranting in that, regardless of whether ranting is a direct or an indirect form of complaint, it initially excludes the participation of the audience. Ranting, therefore, is chiefly a monologic speech action, also characterised by its length and detail, but deprived of joint cooperation. It is mainly an “[…] individualistic production of identity” (Vrooman 2002: 63, quoted in Signorelli 2017) that is “[…] rooted in self-styling” (Signorelli 2017: 12) and whose mission is “[…] to establish and defend a position of social distance” (Signorelli 2017: 13).

If something distinguishes ranting, it may be the intensity, vividness and high level of irritation or agitation wherewith the complainable is presented, which results in a verbal outburst, as Thorson and Baker (2019) rightly put it. In terms of Searle’s (1975) parameters for classifying speech acts, the strength with which ranting is performed certainly differentiates it from venting and also marks it out as a peculiar manifestation of complaining. Ranting, then, differs from venting on the grounds of its narrative nature and emotional intensity (Manning 2008: 103-105; Lange 2014: 59, quoted in Signorelli 2017).

2) The Two Types of Venting

As pointed out, Thorson and Baker (2019) differentiate between heavy-load and maintenance venting. In their view, the former arises when nothing or very little is known about a disappointing, frustrating, irritating or unfair issue. The venter’s action, then, seems to be mainly aimed at informing her audience and giving details about the issue in question, as well as at making them aware of her feelings.

In turn, maintenance venting appears to correspond to the sort of troubles talk (Jefferson 1984, 1988) in which people engage every now and then when they are already acquainted with some negative issue. This distinction, therefore, may be refined by taking into account the informational load of each action or, to put it differently, its informativeness, i.e. the newness or known nature of the complainable (Padilla Cruz 2006).

In informational terms, heavy-load venting may be more informative because either what is talked about is utterly unknown to the audience, or both the venter and her audience are familiar with it but have not dealt with it beforehand. Both the informative –or representational– and the expressive function play a major role in this sort of venting: along with conveying her feelings, the venter also dispenses information which she considers it is in her interest for the audience to possess.

The informativeness of maintenance venting, in contrast, would be lower, as the venter and her audience are already acquainted with a troublesome or disrupting state of affairs because they have discussed it in previous encounters. Although this type of venting still fulfils an informative or representational function, this is subservient to the expressive function and to an additional one: affirming or strengthening common viewpoints and feelings (Padilla Cruz 2004a, 2004b, 2005). This is essential for the venter to align the audience with herself or position them along with her as regards the complainable.

The low level of informativeness of maintenance venting and the affirmation or reinforcement of common viewpoints that it achieves render this sort of venting a phatic action in the sense of anthropologist Bronisław K. Malinowski (1923). It is of little informational relevance, if relevance is understood to amount to the newness or unknown nature of information, and is therefore of minor importance to the audience’s worldview. Even if maintenance venting does not significantly improve or alter their knowledge about the vented issue, like phatic discourse it nevertheless fulfils a crucial function: creating or stressing social affinity, rapport, bonds of union, solidarity and camaraderie between the venter and her audience (Padilla Cruz 2004a, 2004b, 2005).

These effects stem from venting’s implication that the interlocutors brought together have similar viewpoints and feelings about a problematic or unfair state of affairs. Maintenance venting, so to speak, insinuates or highlights that the interlocutors may be equally affected by what is talked about, expect a similar reaction or react to it in a similar manner. It fosters a feeling of in-group membership through a topic with which the interlocutors are equally acquainted, which impacts them in similar ways and towards which they hold similar attitudes (Padilla Cruz 2006).

Conclusion

Venting satisfies criteria that enable its classification as a manifestation of complaining behaviour. Owing to its target (a third party), its topic (some recent or past state of affairs) and its fulfilment of expressive, representative and conative functions, it amounts to an indirect prospective form of complaint. Its conversational features make it exceed average complaints, which are made through just one conversational turn or adjacency pair, so venting requires more time and effort. However, if there are characteristics that significantly distinguish venting, these are dialogicality and engagement of the audience.

Venting certainly depends on the presence and participation of the audience. It must be jointly or cooperatively accomplished through dialogue, so it must be seen and portrayed as a communal action that is discursively achieved. The audience’s participation is crucial for both the acknowledgement of a troublesome state of affairs and the achievement of the ultimate goal(s) sought by the venter: fighting or eradicating the state of affairs in question. While dialogicality and participation of the audience facilitate differentiation between venting and another type of complaint, namely ranting, the level of informativeness of what is vented helps more accurately distinguish between heavy-load and maintenance venting.

It is in terms of these pragmatic and conversational features that venting may be more precisely described from a linguistic perspective. Although this description may certainly enrich our understanding of why venting may have the effects that Thorson and Baker (2019) ascribe to it, other issues still need considering. They are left aside for future work.

Contact details: mpadillacruz@us.es

References

Austin, John L. How to Do Things with Words. Oxford: Oxford University Press, 1962.

Borgwald, Kristin. “Women’s Anger, Epistemic Personhood, and Self-Respect: An Application of Lehrer’s Work on Self-Trust.” Philosophical Studies 161 (2012): 69-76.

Boxer, Diana. “Complaints as Positive Strategies: What the Learner Needs to Know.” TESOL Quarterly 27 (1993a): 277-299.

Boxer, Diana. “Social Distance and Speech Behaviour: The Case of Indirect Complaints.” Journal of Pragmatics 19 (1993b): 103-105.

Clift, Rebecca. Conversation Analysis. Cambridge: Cambridge University Press, 2016.

Cutting, Joan. Pragmatics and Discourse. A Resource Book for Students. London: Routledge, 2002.

Dotson, Kristie. “Tracking Epistemic Violence, Tracking Practices of Silencing.” Hypatia 26, no. 2 (2011): 236-257.

Edmondson, Willis, and Juliane House. Let’s Talk and Talk about It. München: Urban & Schwarzenberg, 1981.

Edwards, Derek. “Moaning, Whinging and Laughing: The Subjective Side of Complaints.” Discourse Studies 7 (2005): 5-29.

Jefferson, Gail. “On Stepwise Transition from Talk about a Trouble to Inappropriately Next-positioned Matters.” In Structures of Social Action. Studies in Conversation Analysis, edited by J. Maxwell Atkinson and John Heritage, 191-222. Cambridge: Cambridge University Press, 1984.

Jefferson, Gail. “On the Sequential Organization of Troubles-Talk in Ordinary Conversation.” Social Problems 35, no. 4 (1988): 418–441.

Laforest, Marty. “Scenes of Family Life: Complaining in Everyday Conversation.” Journal of Pragmatics 34 (2002): 1595-1620.

Lange, Patricia G. “Commenting on YouTube Rants: Perceptions of Inappropriateness or Civic Engagement?” Journal of Pragmatics 73 (2014): 53-65.

Malinowski, Bronisław K. “The Problem of Meaning in Primitive Languages.” In The Meaning of Meaning. A Study of the Influence of Language upon Thought and of the Science of Symbolism, edited by Charles K. Ogden and Ivor A. Richards, 451-510. New York: Harcourt, Brace & Company, 1923.

Manes, Joan, and Nessa Wolfson. “The Compliment Formula.” In Conversational Routine. Explorations in Standardized Communication Situations and Prepatterned Speech, edited by Florian Coulmas, 115-132. The Hague: Mouton, 1981.

Manning, Paul. “Barista Rants about Stupid Customers at Starbucks: What Imaginary Conversations Can Teach Us about Real Ones.” Language & Communication 28, no. 2 (2008): 101-126.

Márquez Reiter, Rosina. “Complaint Calls to a Caregiver Service Company: The Case of Desahogo.” Intercultural Pragmatics 2, no. 4 (2005): 481-514.

Padilla Cruz, Manuel. “Aproximación pragmática a los enunciados fáticos: enfoque social y cognitivo.” PhD diss., Universidad de Sevilla, 2004a.

Padilla Cruz, Manuel. “On the Social Importance of Phatic Utterances: Some Considerations for a Relevance-Theoretic Approach.” In Current Trends in Intercultural, Cognitive and Social Pragmatics, edited by Pilar Garcés Conejos, Reyes Gómez Morón, Lucía Fernández Amaya and Manuel Padilla Cruz, 199-216. Sevilla: Research Group “Intercultural Pragmatic Studies”, 2004b.

Padilla Cruz, Manuel. “On the Phatic Interpretation of Utterances: A Complementary Relevance-Theoretic Approach.” Revista Alicantina de Estudios Ingleses 18 (2005): 227-246.

Padilla Cruz, Manuel. “Topic Selection for Phatic Utterances: A Relevance-Theoretic Approach.” In Usos sociales del lenguaje y aspectos psicolingüísticos: perspectivas aplicadas, edited by Joana Salazar Noguera, Mirian Amengual Pizarro and María Juan Grau, 249-256. Palma de Mallorca: Universitat de les Illes Balears, 2006.

Padilla Cruz, Manuel. “Pragmatics and Discourse Analysis.” In The Encyclopedia of Applied Linguistics, edited by Carol A. Chapelle, 1-6. Hoboken, N.J.: John Wiley & Sons, 2015.

Searle, John. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press, 1969.

Searle, John. “Indirect Speech Acts.” In Syntax and Semantics. Vol. 3: Speech Acts, edited by Peter Cole and Jerry Morgan, 59-82. New York: Academic Press, 1975.

Sidnell, Jack. Conversation Analysis. An Introduction. Oxford: Wiley-Blackwell, 2010.

Signorelli, Julia A. “Of Pumpkin Spice Lattes, Hamplanets, and Fatspeak: The Venting Genre as Support and Subversion on Reddit’s r/Fatpeoplestories.” MA diss., The University of North Carolina at Charlotte, 2017.

Tessman, Lisa. “Critical Virtue Ethics: Understanding Oppression as Morally Damaging.” In Feminists Doing Ethics, edited by Peggy DesAutels and Joanne Waugh, 79-99. Lanham, MD: Rowman & Littlefield, 2001.

Thorson, Juli, and Christine Baker. “Venting as Epistemic Work.” Social Epistemology. A Journal of Knowledge, Culture and Policy (2019).

Trosborg, Anna. Interlanguage Pragmatics. Requests, Complaints and Apologies. Berlin: Mouton de Gruyter, 1995.

Vrooman, Steven S. “The Art of Invective: Performing Identity in Cyberspace.” New Media & Society 4, no. 1 (2002): 51-70.

Wolfson, Nessa, and Joan Manes. “The Compliment as a Social Strategy.” Papers in Linguistics 13, no. 3 (1980): 391-410.

[1] The feminine third person singular personal pronoun will be used throughout this paper in order to refer to an individual adopting the role of speaker in conversational exchanges, while the masculine counterpart will be used to allude to the individual adopting that of hearer.

Author Information: Brian Martin, University of Wollongong, bmartin@uow.edu.au.

Martin, Brian. “Technology and Evil.” Social Epistemology Review and Reply Collective 8, no. 2 (2019): 1-14.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-466

A Russian Mil Mi-28 attack helicopter.
Image by Dmitri Terekhov via Flickr / Creative Commons

 

Humans cause immense damage to each other and to the environment. Steven James Bartlett argues that humans have an inbuilt pathology that leads to violence and ecosystem destruction that can be called evil, in a clinical rather than a religious sense. Given that technologies are human constructions, it follows that technologies can embody the same pathologies as humans. An important implication of Bartlett’s ideas is that studies of technology should be normative in opposing destructive technologies.

Introduction

Humans, individually and collectively, do a lot of terrible things to each other and to the environment. Some obvious examples are murder, torture, war, genocide and massive environmental destruction. From the perspective of an ecologist from another solar system, humans are the world’s major pestilence, spreading everywhere, enslaving and experimenting on a few species for their own ends, causing extinctions of numerous other species and destroying the environment that supports them all.

These thoughts suggest that humans, as a species, have been causing some serious problems. Of course there are many individuals and groups trying to make the world a better place, for example campaigning against war and environmental degradation, and fostering harmony and sustainability. But is it possible that by focusing on what needs to be done and on the positives in human nature, the seriousness of the dark side of human behaviour is being neglected?

Here, I address these issues by looking at studies of human evil, with a focus on a book by Steven Bartlett. With this foundation, it is possible to look at technology with a new awareness of its deep problems. This will not provide easy solutions but may give a better appreciation of the task ahead.

Background

For decades, I have been studying war, ways to challenge war, and alternatives to military systems (e.g. Martin, 1984). My special interest has been in nonviolent action as a means for addressing social problems. Along the way, this led me to read about genocide and other forms of violence. Some writing in the area refers to evil, addressed from a secular, scientific and non-moralistic perspective.

Roy Baumeister (1997), a prominent psychologist, wrote a book titled Evil: Inside Human Violence and Cruelty, which I found highly insightful. Studying the psychology of perpetrators, ranging from murderers and terrorists to killers in genocide, Baumeister concluded that they most commonly feel justified in their actions and see themselves as victims. Often they think what they’ve done is not that important. Baumeister’s sophisticated analysis aims to counter the popular perception of evil-doers as malevolent or uncaring.

Baumeister is one of a number of psychologists willing to talk about good and evil. If the word evil feels uncomfortable, then substitute “violence and cruelty,” as in the subtitle of Baumeister’s book, and the meaning is much the same. It’s also possible to approach evil from the viewpoint of brain function, as in Simon Baron-Cohen’s (2011) The Science of Evil: On Empathy and the Origins of Cruelty. There are also studies that combine psychiatric and religious perspectives, such as M. Scott Peck’s (1988) People of the Lie: The Hope for Healing Human Evil.

Another part of my background is technology studies, including being involved in the nuclear power debate, studying technological vulnerability, communication technology, and technology and euthanasia, among other topics. I married my interests in nonviolence and in technology by studying how technology could be designed and used for nonviolent struggle (Martin, 2001).

It was with this background that I encountered Steven James Bartlett’s (2005) massive book The Pathology of Man: A Study of Human Evil. Many of the issues it addresses, for example genocide and war, were familiar to me, but his perspective offered new and disturbing insights. The Pathology of Man is more in-depth and far-reaching than other studies I had encountered, and is worth bringing to wider attention.

Here, I offer an abbreviated account of Bartlett’s analysis of human evil. Then I spell out ways of applying his ideas to technology and conclude with some possible implications.

Bartlett on Evil

Steven James Bartlett is a philosopher and psychologist who for decades studied problems in human thinking. The Pathology of Man was published in 2005 but received little attention. This may partly be due to the challenge of reading an erudite 200,000-word treatise but also partly due to people being resistant to Bartlett’s message, for the very reasons expounded in his book.

In reviewing the history of disease theories, Bartlett points out that in previous eras a wide range of conditions were considered to be diseases, ranging from “Negro consumption” to anti-Semitism. This observation is part of his assessment of various conceptions of disease, relying on standard views about what counts as disease, while emphasising that judgements made are always relative to a framework that is value-laden.

This is a sample portion of Bartlett’s carefully laid out chain of logic and evidence for making the case that the human species is pathological, namely, characteristic of a disease. In making this case, he is not speaking metaphorically but clinically. The fact that the human species has seldom been seen as pathological is due to humans adopting a framework that exempts them from this diagnosis, which would be embarrassing to accept, at least for those inclined to think of humans as the apotheosis of evolution.

Next stop: the concept of evil. Bartlett examines a wide range of perspectives, noting that most of them are religious in origin. In contrast, he prefers a more scientific view: “Human evil, in the restricted and specific sense in which I will use it, refers to apparently voluntary destructive behavior and attitudes that result in the general negation of health, happiness, and ultimately of life.” (p. 65) In referring to “general negation,” Bartlett is not thinking of a poor diet or personal nastiness but of bigger matters such as war, genocide and overpopulation.

Bartlett is especially interested in the psychology of evil, and canvasses the ideas of classic thinkers who have addressed this issue, including Sigmund Freud, Carl Jung, Karl Menninger, Erich Fromm and Scott Peck. This detailed survey has only a limited return: these leading thinkers have little to say about the origins of evil and what psychological needs it may serve.

So Bartlett turns to other angles, including Lewis Fry Richardson’s classic work quantifying evidence of human violence, and research on aggression by ethologists, notably Konrad Lorenz. Some insights come from this examination, including Richardson’s goal of examining human destructiveness without emotionality and Lorenz’s point that humans, unlike most other animals, have no inbuilt barriers to killing members of their own species.

Bartlett on the Psychology of Genocide

To stare the potential for human evil in the face, Bartlett undertakes a thorough assessment of evidence about genocide, seeking to find the psychological underpinning of systematic mass killings of other humans. He notes one important factor, a factor not widely discussed or even admitted: many humans gain pleasure from killing others. Two other relevant psychological processes are projection and splitting. Projection involves denying negative elements of oneself and attributing them to others, for example seeing others as dangerous, thereby providing a reason for attacking them: one’s own aggression is attributed to others.

Splitting involves dividing one’s own grandiose self-conception from the way others are thought of. “By belonging to the herd, the individual gains an inflated sense of power, emotional support, and connection. With the feeling of group-exaggerated power and puffed up personal importance comes a new awareness of one’s own identity, which is projected into the individual’s conception” of the individual’s favoured group (p. 157). For a member of a group, several factors enable genocide: stereotyping, dehumanisation, euphemistic language and psychic numbing.

To provide a more vivid picture of the capacity for human evil, Bartlett examines the Holocaust, noting that it was not the only or the most deadly genocide but one that, partly due to extensive documentation, provides plenty of evidence of the psychology of mass killing.

Anti-Semitism was not the preserve of the Nazis, but existed for centuries in numerous parts of the world, and indeed continues today. The long history of persistent anti-Semitism is, according to Bartlett, evidence that humans need to feel prejudice and to persecute others. But at this point there is an uncomfortable finding: most people who are anti-Semitic are psychologically normal, suggesting the possibility that what is normal can be pathological. This key point recurs in Bartlett’s forensic examination.

Prejudice and persecution do not usually bring sadness and remorse to the victimizers, but rather a sense of strengthened identity, pleasure, self-satisfaction, superiority, and power. Prejudice and persecution are Siamese twins: Together they generate a heightened and invigorated belief in the victimizers’ supremacy. The fact that prejudice and persecution benefit bigots and persecutors is often overlooked or denied. (p. 167)

Bartlett examines evidence about the psychology of several groups involved in the Holocaust: Nazi leaders, Nazi doctors, bystanders, refusers and resisters. Nazi leaders and doctors were, for the most part, normal and well-adjusted men (nearly all were men). Most of the leaders were of above-average intelligence, some had very high IQs, and many of them were well educated and culturally sophisticated. Cognitively they were superior, but their moral intelligence was low.

Bystanders tend to do nothing due to conformity, lack of empathy and low moral sensibility. Most Germans were bystanders to Nazi atrocities, not participating but doing nothing to oppose them.

Next are refusers, those who declined to be involved in atrocities. Contrary to usual assumptions, in Nazi Germany there were few penalties for refusing to join killings; it was just a matter of asking for a different assignment. Despite this, of those men called up to join killing brigades, very few took advantage of this option. Refusers had to take some initiative, to think for themselves and resist the need to conform.

Finally, there were resisters, those who actively opposed the genocide, but even here Bartlett raises a concern, saying that in many cases resisters were driven more by anger at offenders than by empathy with victims. In any case, in terms of psychology, resisters were the odd ones out, being disengaged from the dominant ideas and values in their society and being able to be emotionally alone, without peer group support. Bartlett’s concern here meshes with research on why people join contemporary social movements: most first become involved via personal connections with current members, not because of moral outrage about the issue (Jasper, 1997).

The implication of Bartlett’s analysis of the Holocaust is that there is something wrong with humans who are psychologically normal (see also Bartlett, 2011, 2013). When those who actively resist genocide are unusual psychologically, this points to problems with the way most humans think and feel.

Another one of Bartlett’s conclusions is that most solutions that have been proposed to the problem of genocide — such as moral education, cultivating acceptance and respect, and reducing psychological projection — are vague, simplistic and impractical. They do not measure up to the challenge posed by the observed psychology of genocide.

Bartlett’s assessment of the Holocaust did not surprise me because, for one of my studies of tactics against injustice (Martin, 2007), I read a dozen books and many articles about the 1994 Rwandan genocide, in which between half a million and a million people were killed in the space of a few months. The physical differences between the Tutsi and Hutu are slight; the Hutu killers targeted both Tutsi and “moderate” Hutu. It is not widely known that Rwanda is the most Christian country in Africa, yet many of the killings occurred in churches where Tutsi had gone for protection. In many cases, people killed neighbours they had lived next to for years, or even family members. The Rwandan genocide had always sounded horrific; reading detailed accounts to obtain examples for my article, I discovered it was far worse than I had imagined (Martin, 2009).

After investigating evidence about genocide and its implications about human psychology, Bartlett turns to terrorism. Many of his assessments accord with critical terrorism studies, for example that there is no standard definition of terrorism, the fear of terrorism is disproportionate to the threat, and terrorism is “framework-relative” in the sense that calling someone a terrorist puts you in opposition to them.

Bartlett’s interest is in the psychology of terrorists. He is sceptical of the widespread assumption that there must be something wrong with them psychologically, and cites evidence that terrorists are psychologically normal. Interestingly, he notes that there are no studies comparing the psychologies of terrorists and soldiers, two groups that each use violence to serve a cause. He also notes a striking absence: in counterterrorism writing, no one has studied the sorts of people who refuse to be involved in cruelty and violence and who are resistant to appeals to in-group prejudice, which is usually called loyalty or patriotism. By assuming there is something wrong with terrorists, counterterrorism specialists are missing the possibility of learning how to deal with the problem.

Bartlett on War Psychology

Relatively few people are involved in genocide or terrorism except by learning about them via media stories. It is another matter when it comes to war, because many people have lived through a time when their country has been at war. In this century, just think of Afghanistan, Iraq and Syria, where numerous governments have sent troops or provided military assistance.

Bartlett says there is plenty of evidence that war evokes powerful emotions among both soldiers and civilians. For some, it is the time of life when they feel most alive, whereas peacetime can seem boring and meaningless. Although killing other humans is proscribed by most moral systems, war is treated as an exception. There are psychological preconditions for organised killing, including manufacturing differences, dehumanising the enemy, nationalism, group identity and various forms of projection. Bartlett says it is also important to look at psychological factors that prevent people from trying to end wars.

Even though relatively few people are involved in war as combat troops or even as part of the systems that support war-fighting, an even smaller number devote serious effort to trying to end wars. Governments collectively spend hundreds of billions of dollars on their militaries but only a minuscule amount on furthering the causes of peace. This applies as well to research: there is vastly more military-sponsored or military-inspired research than peace-related research. Bartlett concludes that “war is a pathology which the great majority of human beings do not want to cure” (p. 211).

Thinking back over the major wars in the past century, in most countries it has been far easier to support war than to oppose it. Enlisting in the military is seen as patriotic whereas refusing military service, or deserting the army, is seen as treasonous. For civilians, defeating the enemy is seen as a cause for rejoicing, whereas advocating an end to war — except via victory — is a minority position.

There have been thousands of war movies: people flock to see killing on the screen, and the bad guys nearly always lose, especially in Hollywood. In contrast, the number of major films about nonviolent struggles is tiny — what else besides the 1982 film Gandhi? — and seldom do they attract a wide audience. Bartlett sums up the implications of war for human psychology:

By legitimating the moral atrocity of mass murder, war, clothed as it is in the psychologically attractive trappings of patriotism, heroism, and the ultimately good cause, is one of the main components of human evil. War, because it causes incalculable harm, because it gives men and women justification to kill and injure one another without remorse, because it suspends conscience and neutralizes compassion, because it takes the form of psychological epidemics in which dehumanization, cruelty, and hatred are given unrestrained freedom, and because it is a source of profound human gratification and meaning—because of these things, war is not only a pathology, but is one of the most evident expressions of human evil. (p. 225)

The Obedient Parasite

Bartlett next turns to obedience studies, discussing the famous research by Stanley Milgram (1974). However, he notes that such studies shouldn’t even be needed: the evidence of human behaviour during war and genocide should be enough to show that most humans are obedient to authority, even when the authority is instructing them to harm others.

Another relevant emotion is hatred. Although hating is a widespread phenomenon — most recently evident in online harassment (Citron, 2014) — Bartlett notes that psychologists and psychiatrists have given this emotion little attention. Hatred serves several functions, including providing a cause, overcoming the fear of death, and, in groups, helping build a sense of community.

Many people recognise that humans are destroying the ecological web that supports their own lives and those of numerous other species. Bartlett goes one step further, exploring the field of parasitology. Examining definitions and features of parasites, he concludes that, according to a broad definition, humans are parasites on the environment and other species, and are destroying the host at a record rate. He sees human parasitism as being reflected in social belief systems including the “cult of motherhood,” infatuation with children, and the belief that other species exist to serve humans, a longstanding attitude enshrined in some religions.

Reading The Pathology of Man, I was tempted to counter Bartlett’s arguments by pointing to the good things that so many humans have done and are doing, such as everyday politeness, altruism, caring for the disadvantaged, and the animal liberation movement. Bartlett could counter by noting it would be unwise to pay no attention to disease symptoms just because your body has many healthy parts. If there is a pathology inherent in the human species, it should not be ignored, but instead addressed face to face.

Remington 1858 Model Navy .36 Cap and Ball Revolver.
Image by Chuck Coker via Flickr / Creative Commons

 

Technologies of Political Control

Bartlett’s analysis of human evil, including that violence and cruelty are perpetrated mostly by people who are psychologically normal and that many humans obtain pleasure out of violence against other humans, can be applied to technology. The aim in doing this is not to demonise particular types or uses of technology but to explore technological systems from a different angle in the hope of providing insights that are less salient from other perspectives.

Consider “technologies of political control,” most commonly used by governments against their own people (Ackroyd et al., 1974; Wright, 1998). These technologies include tools of torture and execution including electroshock batons, thumb cuffs, restraint chairs, leg shackles, stun grenades and gallows. They include technologies used against crowds such as convulsants and infrasound weapons (Omega Foundation, 2000). They include specially designed surveillance equipment.

In this discussion, “technology” refers not just to artefacts but also to the social arrangements surrounding these artefacts, including design, manufacture, and contexts of use. To refer to “technologies of political control” is to invoke this wider context: an artefact on its own may seem innocuous but still be implicated in systems of repression. Repression here refers to force used against humans for the purposes of harm, punishment or social control.

Torture has a long history. It must be considered a prime example of human evil. Few species intentionally inflict pain and suffering on other members of their own species. Among humans, torture is now officially renounced by every government in the world, but it still takes place in many countries, for example in China, Egypt and Afghanistan, as documented by Amnesty International. Torture also takes place in many conventional prisons, for example via solitary confinement.

To support torture and repression, there is an associated industry. Scientists design new ways to inflict pain and suffering, using drugs, loud noises, disorienting lights, sensory deprivation and other means. The tools for delivering these methods are constructed in factories and the products marketed around the world, especially to buyers seeking means to control and harm others. Periodically, “security fairs” are held in which companies selling repression technologies tout their products to potential buyers.

The technology of repression does not have a high profile, but it is a significant industry, involving tens of billions of dollars in annual sales. It is a prime cause of human suffering. So what are people doing about it?

Those directly involved seem to have few moral objections. Scientists use their skills to design more sophisticated ways of interrogating, incarcerating and torturing people. Engineers design the manufacturing processes and numerous workers maintain production. Sales agents tout the technologies to purchasers. Governments facilitate this operation, making extraordinary efforts to get around attempts to control the repression trade. So here is an entire industry built around technologies that serve to control and harm defenceless humans, and it seems to be no problem to find people who are willing to participate and indeed to tenaciously defend the continuation of the industry.

In this, most of the world’s population are bystanders. Mass media pay little attention. Indeed, there are fictional dramas that legitimise torture and, more generally, the use of violence against the bad guys. Most people remain ignorant of the trade in repression technologies. For those who learn about it, few make any attempt to do something about it, for example by joining a campaign.

Finally there are a few resisters. There are groups like the Omega Research Foundation that collect information about the repression trade and organisations like Amnesty International and Campaign Against Arms Trade that campaign against it. Journalists have played an important role in exposing the trade (Gregory, 1995).

The production, trade and use of technologies of repression, especially torture technologies, provide a prime example of how technologies can be implicated in human evil. They illustrate quite a few of the features noted by Bartlett. There is no evidence that the scientists, engineers, production workers, sales agents and politician allies of the industry are anything other than psychologically normal. Indeed, it is an industry organised much like any other, except devoted to producing objects used to harm humans.

Nearly all of those involved in the industry are simply operating as cogs in a large enterprise. They have abdicated responsibility for causing harm, a reflection of humans’ tendency to obey authorities. As for members of the public, the psychological process of projection provides a reassuring message: torture is only used as a last resort against enemies such as terrorists. “We” are good and “they” are bad, so what is done to them is justified.

Weapons and Tobacco

Along with the technology of repression, weapons of war are prime candidates for being understood as implicated in evil. If war is an expression of the human potential for violence, then weapons are a part of that expression. Indeed, increasing the capacity of weapons to maim, kill and destroy has long been a prime aim of militaries. So-called conventional weapons include everything from bullets and bayonets to bombs and ballistic missiles, and then there are biological, chemical and nuclear weapons.

Studying weaponry is a way of learning about the willingness of humans to use their ingenuity to harm other humans. Dum-dum bullets were designed to tumble in flight so as to cause more horrendous injuries on exiting a body. Brightly coloured land mines can be attractive to young children. Some of these weapons have been banned, while others take their place. In any case, it is reasonable to ask, what was going through the minds of those who conceived, designed, manufactured, sold and deployed such weapons?

The answer is straightforward, yet disturbing. Along the chain, individuals may have thought they were serving their country’s cause, helping defeat an enemy, or just doing their job and following orders. Indeed, it can be argued that scientific training and enculturation serve to develop scientists willing to work on assigned tasks without questioning their rationale (Schmidt, 2000).

Nuclear weapons, due to their capacity for mass destruction, have long been seen as especially bad, and there have been significant mass movements against these weapons (Wittner, 1993–2003). However, the opposition has not been all that successful, because there continue to be thousands of nuclear weapons in the arsenals of eight or so militaries, and most people seldom think about it. Nuclear weapons exemplify Bartlett’s contention that most people do not do much to oppose war — even a war that would devastate the earth.

Consider something a bit different: cigarettes. Smoking brings pleasure, or at least relief from craving, to hundreds of millions of people daily, at the expense of a massive death toll (Proctor, 2011). By current projections, hundreds of millions of people will die this century from smoking-related diseases.

Today, tobacco companies are stigmatised and smoking is becoming unfashionable — but only in some countries. Globally, there are ever more smokers and ever more victims of smoking-related illnesses. Cigarettes are part of a technological system of design, production, distribution, sales and use. Though the cigarette itself is less complex than many military weapons, the same questions can be asked of everyone involved in the tobacco industry: how can they continue when the evidence of harm is so overwhelming? How could industry leaders spend decades covering up their own evidence of harm while seeking to discredit scientists and public health officials whose efforts threatened their profits?

The answers draw on the same psychological processes involved in the perpetuation of violence and cruelty in more obvious cases such as genocide, including projection and obedience. The ideology of the capitalist system plays a role too, with the legitimating myths of the beneficial effects of markets and the virtue of satisfying consumer demand.

For examining the role of technology in evil, weapons and cigarettes are easy targets for condemnation. A more challenging case is the wide variety of technologies that contribute to greenhouse gas emissions and hence to climate change, with potentially catastrophic effects for future generations and for the biosphere. The technologies involved include motor vehicles (at least those with internal combustion engines), steel and aluminum production, home heating and cooling, and the consumption of consumer goods. The energy system is implicated, at least the part of it predicated on carbon-based fuels, and there are other contributors as well such as fertilisers and clearing of forests.

Most of these technologies were not designed to cause harm, and those involved as producers and consumers may not have thought of their culpability for contributing to future damage to the environment and human life. Nevertheless, some individuals have greater roles and responsibilities. For example, many executives in fossil fuel companies and politicians with the power to reset energy priorities have done everything possible to restrain shifting to a sustainable energy economy.

Conceptualising the Technology of Evil

If technologies are implicated in evil, what is the best way to understand the connection? It could be said that an object designed and used for torture embodies evil. Embodiment seems appropriate if the primary purpose is for harm and the main use is for harm, but seldom is this sort of connection exclusive of other uses. A nuclear weapon, for example, might be used as an artwork, a museum exhibit, or a tool to thwart a giant asteroid hurtling towards earth.

Another option is to say that some technologies are “selectively useful” for harming others: they can potentially be useful for a variety of purposes but are, for example, easier to use for torture than for brain surgery or keeping babies warm. To talk of selective usefulness instead of embodiment seems less essentialist, more open to multiple interpretations and uses.

Other terms are “abuse” and “misuse.” Think of a cloth covering a person’s face over which water is poured to give a simulation of drowning, used as a method of torture called waterboarding. It seems peculiar to say that the wet cloth embodies evil given that it is only the particular use that makes it a tool to cause harm to humans. “Abuse” and “misuse” have an ignominious history in the study of technology because they are often based on the assumption that technologies are inherently neutral. Nevertheless, these terms might be resurrected in speaking of the connection between technology and evil when referring to technologies that were not designed to cause harm and are seldom used for that purpose.

Consider next the role of technologies in contributing to climate change. For this, it is useful to note that most technologies have multiple uses and consequences. Oil production, for example, has various immediate environmental and health impacts. Oil, as a product, has multitudinous uses, such as heating houses, manufacturing plastics and fuelling military aircraft. The focus here is on a more general impact via the waste product carbon dioxide that contributes to global warming. In this role, it makes little sense to call oil evil in itself.

Instead, it is simply one player in a vast network of human activities that collectively are spoiling the environment and endangering future life on earth. The facilitators of evil in this case are the social and economic systems that maintain dependence on greenhouse gas sources and the psychological processes that enable groups and individuals to resist a shift to sustainable energy systems or to remain indifferent to the issue.

For climate change, and sustainability issues more generally, technologies are implicated as part of entrenched social institutions, practices and beliefs that have the potential to radically alter or destroy the conditions for human and non-human life. One way to speak of technologies in this circumstance is as partners. Another is to refer to them as actors or actants, along the lines of actor-network theory (Latour, 1987), though this gives insufficient salience to the psychological dimensions involved.

Another approach is to refer to technologies as extensions of humans. Marshall McLuhan (1964) famously described media as “extensions of man.” This description points to the way technologies expand human capabilities. Vehicles expand human capacities for movement, otherwise limited to walking and running. Information and communication technologies expand human senses of sight, hearing and speaking. Most relevantly here, weapons expand human capacities for violence, in particular killing and destruction. From this perspective, humans have developed technologies to extend a whole range of capacities, some of them immediately or indirectly harmful.

In social studies of technology, various frameworks have been used, including political economy, innovation, social shaping, cost-benefit analysis and actor-network theory. Each has advantages and disadvantages, but none of the commonly used frameworks emphasises moral evaluation or focuses on the way some technologies are designed or used for the purpose of harming humans and the environment.

Implications

The Pathology of Man is a deeply pessimistic and potentially disturbing book. Probing into the psychological foundations of violence and cruelty shows a side of human behaviour and thinking that is normally avoided. Most commentators prefer to look for signs of hope, and would finish a book such as this with suggestions for creating a better world. Bartlett, though, does not want to offer facile solutions.

Throughout the book, he notes that most people prefer not to examine the sources of human evil, and so he says that hope is actually part of the problem. By continually being hopeful and looking for happy endings, it becomes too easy to avoid looking at the diseased state of the human mind and the systems it has created.

Setting aside hope, there are nevertheless implications that can be derived from Bartlett's analysis. Here I offer three possible messages regarding technology.

Firstly, if it makes sense to talk about human evil in a non-metaphorical sense, and to trace the origins of evil to features of human psychology, then technologies, as human creations, are necessarily implicated in evil. The implication is that a normative analysis is imperative. If evil is seen as something to be avoided or opposed, then those technologies most closely embodying evil are likewise to be avoided or opposed. This implies making judgements about technologies. In technology studies, this already occurs to some extent. However, common frameworks, such as political economy, innovation and actor-network theory, do not highlight moral evaluation.

Medical researchers do not hesitate to openly oppose disease, and in fact the overcoming of disease is an implicit foundation of research. Technology studies could more openly condemn certain technologies.

Secondly, if technology is implicated in evil, and if one of the psychological processes perpetuating evil is a lack of recognition of it and concern about it, there is a case for undertaking research that provides insights and tools for challenging the technology of evil. This has not been a theme in technology studies. Activists against torture technologies and military weaponry would be hard pressed to find useful studies or frameworks in the scholarship about technology.

One approach to the technology of evil is action research (McIntyre, 2008; Touraine, 1981), which involves combining learning with efforts towards social change. For example, research on the torture technology trade could involve trying various techniques to expose the trade, seeing which ones are most fruitful. This would provide insights about torture technologies not available via conventional research techniques.

Thirdly, education could usefully incorporate learning about the moral evaluation of technologies. Bartlett argues that one of the factors facilitating evil is the low moral development of most people, as revealed in the widespread complicity in or complacency about war preparation and wars, and about numerous other damaging activities.

One approach to challenging evil is to increase people's moral capacities to recognise and act against evil. Technologies provide a convenient means to do this because human-created objects abound in everyday life. It can be an intriguing and informative exercise to figure out how a given object relates to killing, hatred, psychological projection and the various other actions and ways of thinking involved in violence, cruelty and the destruction of the foundations of life.

No doubt there are many other ways to learn from the analysis of human evil. The most fundamental step is not to turn away but to face the possibility that there may be something deeply wrong with humans as a species, something that has made the species toxic to itself and other life forms. While it is valuable to focus on what is good about humans, to promote good it is also vital to fully grasp the size and depth of the dark side.

Acknowledgements

Thanks to Steven Bartlett, Lyn Carson, Kurtis Hagen, Kelly Moore and Steve Wright for valuable comments on drafts.

Contact details: bmartin@uow.edu.au

References

Ackroyd, Carol, Margolis, Karen, Rosenhead, Jonathan, & Shallice, Tim (1977). The technology of political control. London: Penguin.

Baron-Cohen, Simon (2011). The science of evil: On empathy and the origins of cruelty. New York: Basic Books.

Bartlett, Steven James (2005). The pathology of man: A study of human evil. Springfield, IL: Charles C. Thomas.

Bartlett, Steven James (2011). Normality does not equal mental health: The need to look elsewhere for standards of good psychological health. Santa Barbara, CA: Praeger.

Bartlett, Steven James (2013). The dilemma of abnormality. In Thomas G. Plante (Ed.), Abnormal psychology across the ages, volume 3 (pp. 1–20). Santa Barbara, CA: Praeger.

Baumeister, Roy F. (1997). Evil: Inside human violence and cruelty. New York: Freeman.

Citron, Danielle Keats (2014). Hate crimes in cyberspace. Cambridge, MA: Harvard University Press.

Gregory, Martyn (director and producer). (1995). The torture trail [television]. UK: TVF.

Jasper, James M. (1997). The art of moral protest: Culture, biography, and creativity in social movements. Chicago: University of Chicago Press.

Latour, Bruno (1987). Science in action: How to follow scientists and engineers through society. Milton Keynes: Open University Press.

Martin, Brian (1984). Uprooting war. London: Freedom Press.

Martin, Brian (2001). Technology for nonviolent struggle. London: War Resisters’ International.

Martin, Brian (2007). Justice ignited: The dynamics of backfire. Lanham, MD: Rowman & Littlefield.

Martin, Brian (2009). Managing outrage over genocide: case study Rwanda. Global Change, Peace & Security, 21(3), 275–290.

McIntyre, Alice (2008). Participatory action research. Thousand Oaks, CA: Sage.

McLuhan, Marshall (1964). Understanding media: The extensions of man. New York: New American Library.

Milgram, Stanley (1974). Obedience to authority. New York: Harper & Row.

Omega Foundation (2000). Crowd control technologies. Luxembourg: European Parliament.

Peck, M. Scott (1988). People of the lie: The hope for healing human evil. London: Rider.

Proctor, Robert N. (2011). Golden holocaust: Origins of the cigarette catastrophe and the case for abolition. Berkeley, CA: University of California Press.

Schmidt, Jeff (2000). Disciplined minds: A critical look at salaried professionals and the soul-battering system that shapes their lives. Lanham, MD: Rowman & Littlefield.

Touraine, Alain (1981). The voice and the eye: An analysis of social movements. Cambridge: Cambridge University Press.

Wittner, Lawrence S. (1993–2003). The struggle against the bomb, 3 volumes. Stanford, CA: Stanford University Press.

Wright, Steve (1998). An appraisal of technologies of political control. Luxembourg: European Parliament.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “Human Nature in the Post-Truth Age.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 36-38.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-45C

Image by Bryan Ledgard via Flickr / Creative Commons

 

We have come a long way since Leslie Stevenson published Seven Theories of Human Nature in 1974. Indeed, Stevenson’s critical contribution enlisted the views of Plato, Christianity, Marx, Freud, Sartre, Skinner, and Lorenz to analyze and historically contextualize what the term could mean.

By 2017, a seventh edition had become available, now titled Thirteen Theories of Human Nature, and it contains chapters on Confucianism, Hinduism, Buddhism, Plato, Aristotle, the Bible (instead of Christianity), Islam, Kant, Marx, Freud, Sartre, Darwinism, and feminism (with the help of David Haberman, Peter Matthews, and Charlotte Witt). One wonders how many more theories or views can be added to this laundry list; perhaps, with an ever-increasing list of contributors to the analysis and understanding of human nature, a new approach might be warranted.

How The Question Is Contested Today

This is where Maria Kronfeldner’s What’s Left of Human Nature? A Post-Essentialist, Pluralist, and Interactive Account of a Contested Concept (2018) enters the scene. This scene, to be sure, is fraught with sexism and misogyny, speciesism and racism, and an unfortunately long history of eugenics around the world. The recent white supremacist eruptions under President Trump’s protection, if not outright endorsement, are so worrisome that any level-headed (or Kronfeldner’s analytic) guidance is a breath of fresh air, perhaps an essential disinfectant.

Instead of following the rhetorical vitriol of right-wing journalists and broadcasters or the lame argumentations of well-meaning but ill-informed sociobiologists, we are driven down a philosophical path that is scholarly, fair-minded, and comprehensive. If one were to ask a naïve or serious question about human nature, this book is the useful, if at times analytically demanding, source for an answer.

If one were to encounter the prevailing ignorance of politicians and television or radio pundits, this book is the proper toolkit from which to draw sharp tools with which to dismantle unfounded claims and misguided pronouncements. In short, in Trump’s post-truth age this book is indispensable.

But who really cares about human nature? Why should we even bother to dissect the intricacies of this admittedly “contested concept” rather than dispense with it altogether? Years ago, I confronted Robert Rubin (former Goldman Sachs executive and later Treasury Secretary in the Clinton Administration) in a lecture he gave after retirement about financial policies and markets. I asked him directly about his view of human nature and his response was brief: fear and greed.

I tried to push him on this “view” and realized, once he refused to engage, that this wasn’t a view but an assumption, a deep presupposition that informed his policy making, that influenced everything he thought was useful and even morally justifiable (for a private investment bank or the country as a whole). All too often we scratch our heads in wonder about a certain policy that makes no sense or that is inconsistent with other policies (or principles) only to realize that a certain pre-commitment (in this sense, a prejudice) accompanies the proposed policy.

Would making presuppositions about human nature explicit clarify the policy, or at least its rationale? I think it would, and therefore I find Kronfeldner’s book fascinating, well-argued, and hopefully helpful outside insulated academic circles. Not only can it enlighten the boors, but it could also make critical contributions to debates over all things trans (transhumanism, transgenderism).

Is the Concept of Essence Useful Anymore?

In arguing for a post-essentialist, pluralist, and interactive account of human nature, Kronfeldner argues for eliminating the “concept of an essence,” for broadening the concept’s reach with “three different kinds” of human nature, and for the claims that “nature and culture interact at the developmental, epigenetic, and evolutionary levels” and that there are ongoing “explanatory looping effects” of human nature. (xv)

Distinguishing between explaining human nature and the concept of human nature, the author has chosen to focus on the latter, “which is an analytic and reflective issue about what ‘having a nature’ and ‘something being due to nature’ mean.” (xvi) Instead of summarizing the intricacies of all the arguments offered in the book, suffice it here to highlight, from the very beginning of the book, one of the author’s cautionary remarks: “Many consider the concept of human nature to be obsolete because they cannot envision such an interactive account of the fixity aspect. It is one of the major contributions of this book to try to overcome this obstacle.” (xvii)

And indeed, this book does overcome the simple binary according to which either there are fixed traits of humanity to which we must pay scientific tribute or there are fluid feedback loops of influence between nature and nurture to which we must pay social and moral attention. Though the former side of the binary is wedded to notions of “specificity, typicality, fixity, and normalcy” for all the right ethical reasons of protecting human rights and equal treatment, the price paid for such (linguistic and epistemic) attachment may be too high.

The price, to which Kronfeldner returns in every chapter of the book, is “dehumanization”—the abuse of the term (and concept) human nature in order to exclude rather than include members of the human species.

In her “eliminativist perspective” with respect to the concept of human nature, Kronfeldner makes five claims which she defends brilliantly and carefully throughout the book. The first relates to how little the “sciences” would lose from not using the term anymore; the second is that getting rid of essentialism alone will not do away with dehumanization; the third suggests that though dehumanization may not be eliminated, post-essentialism will be helpful to “minimize” it; the fourth claim is that “the question about elimination versus revision of the terminology used is actually a matter of values (rather than facts)”; and the fifth claim relates to the “precautionary principle” advocated here. (231)

The upshot of this process of elimination in the name of reducing dehumanization is admittedly as much political as epistemic, social and cultural as moral. As Kronfeldner says: “Even if one gets rid of all possible essentialist baggage attached to human nature talk, and even if one gets rid of all human nature talk whatsoever, there is no way to make sure that the concept of being or becoming human gets rid of dehumanization. Stripping off essentialism and the language inherited from it won’t suffice for that.” (236) So, what will suffice?

Throwing the Ladder Away

At this juncture, Kronfeldner refers to Wittgenstein: “The term human nature might well be a Wittgensteinian ladder: a ladder that we needed to arrive where we are (in our dialectic project) but that we can now throw away.” (240) This means, in short, that “we should stop using the term human nature whenever possible.” (242) Easier said than done?

The point that Kronfeldner makes repeatedly is that simply revising the term or using a different one will not suffice; replacing one term with another or redefining the term more carefully will not do. This is not only because of the terminological “baggage” to which she alludes, but perhaps, more importantly, because this concept or term has been a crutch scientists and policy makers cannot do without. Some sense of human nature informs their thinking and their research, their writing and policy recommendations (as my example above illustrates).

In a word, is it possible to avoid asking: what are they thinking about when they think of human conduct? What underlying presuppositions do they bring to their respective (subconscious?) ways of thinking? As much as we may want to refrain from talking about human nature as an outdated term or a pernicious concept that has been weaponized all too often in a colonial or racist modality, it seems to never be far away from our mind.

In the Trumpist age of white supremacy and the fascist trajectories of European nationalism, can we afford to ignore talk about human nature? Worse, can we ignore the deliberate lack of talk of human nature, seeing, as we do, its dehumanizing effects? With these questions in mind, I highly recommend spending some time with this book, ponderous as it may seem at times, and crystal clear as it is at others. It should be considered for background information by social scientists, philosophers, and politicians.

Contact details: rsassowe@uccs.edu

References

Kronfeldner, Maria. What’s Left of Human Nature? A Post-Essentialist, Pluralist, and Interactive Account of a Contested Concept. Cambridge, MA: MIT Press, 2018.

Author Information: Frank Scalambrino, Duquesne University, franklscalambrino@gmail.com.

Scalambrino, Frank. “Reviewing Nolen Gertz’s Nihilism and Technology.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 22-28.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44B

Image by Jinx! via Flickr / Creative Commons

 

There are three (3) parts to this review, each of which brings a philosophical and/or structural issue regarding Dr. Gertz’s book into critical focus.

1) His characterization of “nihilism.”

a) This is specifically about Nietzsche.

2) His (lack of) characterization of the anti- and post-humanist positions in philosophy of technology.

a) Importantly, this should also change what he says about Marx.

3) In light of the above two changes, going forward, he should (re)consider the way he frames his “human-nihilism relations.”

1) Consider that if his characterization of nihilism in Nietzsche as “Who cares?” were correct, then Nietzsche would not have been able to say that Christianity is nihilistic (cf. The Anti-Christ §§6-7; cf. The Will to Power §247). The following organizes a range of ways he could correct this, from the most to the least pervasive.

1a) He could completely drop the term “nihilism.” Ultimately, I think the term that fits best with his project, as it stands, is “decadence.” (More on this below.) In §43 of The Will to Power, Nietzsche explained that “Nihilism is not a cause, but only the rationale of decadence.”

1b) He could keep the term “nihilism” on the cover, but re-work the text to reflect technology as decadence, and then frame decadence as indicating a kind of nihilism (to justify keeping nihilism on the cover).

1c) He could keep everything as is; however, as will be clear below, his conception of nihilism and human-nihilism relations leaves him open to two counter-arguments which – as I see it – are devastating to his project. The first suggests that from the point of view of Nietzsche’s actual definition of “nihilism,” his theory itself is nihilistic. The second suggests that (from a post-human point of view) the ethical suggestions he makes (based on his revelation of human-nihilism relations) are “empty threats” in that the “de-humanization” of which he warns refers to a non-entity.

Lastly, I strongly suggest anyone interested in “nihilism” in Nietzsche consult both Heidegger (1987) and Deleuze (2006).

1. Gertz’s Characterization of “Nihilism”

Nietzsche’s writings are notoriously difficult to interpret. Of course, this is not the place to provide a “How to Read Nietzsche.” However, Dr. Gertz’s approach to reading Nietzsche is peculiar enough to warrant the following remarks about the difficulties involved. When approaching Nietzsche you should ask three questions: (1) Do you believe Nietzsche’s writings are wholly coherent, partially coherent, or not coherent at all? (2) Do you believe Nietzsche’s writings are wholly consistent, partially consistent, or not consistent at all? (3) Does Nietzsche’s being consistent make a “system” out of his philosophy?

The first question is important because you may believe that Nietzsche was a “madman.” And, the fallacy of ad hominem aside, you may believe his “madness” somehow invalidates what he said – either partially or totally. Further, it is clear that Nietzsche does not endorse a philosophy which considers rationality the most important aspect of being human. Thus, it may be possible to consider Nietzsche’s writings as purposeful or inspired incoherence.

For example, this latter point of view may find support in Nietzsche’s letters, and is exemplified by Blanchot’s comment: “The fundamental characteristic of Nietzsche’s truth is that it can only be misunderstood, can only be the object of an endless misunderstanding.” (1995: 299).

The second question is important because across Nietzsche’s writings he seemingly contradicts himself or changes his philosophical position. There are two main issues, then, regarding consistency. On the one hand, “distinct periods” of philosophy have been associated with various groupings of Nietzsche’s writings, and establishing these periods – along with affirming position changes – can be supported by Nietzsche’s own words (so long as one considers those statements coherent).

Thus, according to the standard division, we have the “Early Writings” from 1872-1876, the “Middle Writings” from 1878-1882, the “Later Writings” from 1883-1887, and the “Final Writings” of 1888. By examining Dr. Gertz’s Bibliography it is clear that he privileges the “Later” and the “Unpublished” writings of Nietzsche. On the other hand, as William H. Schaberg convincingly argued in his The Nietzsche Canon: A Publication History and Bibliography, despite all of the “inconsistencies,” from beginning to end, Nietzsche’s writings represent the development of what he called the “Dionysian Worldview.” Importantly, Dr. Gertz neither addresses these exegetical issues nor does he even mention Dionysus.

The third question is important because throughout the last century of Nietzsche scholarship there have been various trends regarding the above, first two, questions, and often the “consistency” and “anti-system” issues have been conflated. Thus, scholars in the past have argued that Nietzsche must be inconsistent – if not incoherent – because he is purposefully an “anti-systematic thinker.”

However, as Schaberg’s work, among others, makes clear: To have a consistent theme does not necessitate that one’s work is “systematic.” For example, it is not the case that all philosophers are “systematic” philosophers merely because they consistently write about philosophy. That the “Dionysian Worldview” is ultimately Nietzsche’s consistent theme is not negated by any inconsistencies regarding how to best characterize that worldview.

Thus, I would be interested to know the process through which Dr. Gertz decided on the title of this book. On the one hand, it is clear that he considers this a book that combines Nietzsche and philosophy of technology. On the other hand, Dr. Gertz’s allegiance to (the unfortunately titled) “postphenomenology” and the way he takes up Nietzsche’s ideas make the title of his book problematic. For instance, the title of the first section of Chapter 2 is: “What is Nihilism?”

What About the Meaning of Nihilism?

Dr. Gertz notes that because the meaning of “nihilism” in the writings of Nietzsche is controversial, he will not even attempt to define nihilism in terms of Nietzsche’s writings (p. 13). He then, without referencing any philosopher at all, defines “nihilism” stating: “in everyday usage it is taken to mean something roughly equivalent to the expression ‘Who cares?’” (p. 13). Lastly, in the next section he uses Jean-Paul Sartre to characterize nihilism as “bad faith.” All this is problematic.

First, is this book about “nihilism” or “bad faith”? It seems to be about the latter, which (more on this to come) leads one to wonder whether the title and the supposed (at times forced) use of Nietzsche were not a (nihilistic?) marketing-ploy. Second, though Dr. Gertz doesn’t think it necessary to articulate and defend the meaning of “nihilism” in Nietzsche, just a casual glance at the same section of the “Unpublished Writings” (The Will to Power) that Gertz invokes can be used to argue against his characterization of “nihilism” as “Who cares?”

For example, Nietzsche is far more hardcore than “Who cares?” as evidenced by: “Nihilism does not only contemplate the ‘in vain!’ nor is it merely the belief that everything deserves to perish: one helps to destroy… [emphasis added]” (1968b: 18). “Nihilism” pertains to moral value. It is in this context that Nietzsche is a so-called “immoralist.”

Nietzsche came to see the will as, pun intended, beyond good and evil. It is moralizing that leads to nihilism. Consider the following from Nietzsche:

“Schopenhauer interpreted high intellectuality as liberation from the will; he did not want to see the freedom from moral prejudice which is part of the emancipation of the great spirit… Fundamental instinctive principle of all philosophers and historians and psychologists: everything of value in man, art, history, science, religion, technology [emphasis added], must be proved to be of moral value, morally conditioned, in aim, means and outcome… ‘Does man become better through it?’” (1968b: pp. 205-6).

The will is free, beyond all moral values, and so the desire to domesticate it is nihilistic – if for no reason other than in domesticating it one has lowered the sovereignty of the will into conformity with some set of rules designed for the preservation of the herd (or academic-cartel). Incidentally, I invoked this Nietzschean point in my chapter: “What Control? Life at the limits of power expression” in our book Social Epistemology and Technology. Moreover, none of us “philosophers of the future” have yet expressed this point in a way that surpasses the excellence and eloquence of Baudrillard (cf. The Perfect Crime and The Agony of Power).

In other words, what is in play are power differentials. Thus, oddly, as soon as Dr. Gertz begins moralizing by denouncing technology as “nihilistic,” he reveals himself – not technology – to be nihilistic. For all these reasons, and more, it is not clear why Dr. Gertz insists on the term “nihilism” or precisely how he sees this as Nietzsche’s position.

To be sure, the most recent data from the CDC indicate that chlamydia, gonorrhea, and syphilis are presently at an all-time high; do you think this has nothing to do with the technological mediation of our social relations? Yet, the problem of bringing in Nietzsche’s conception of “nihilism” is that Nietzsche might not see this as a problem at all. On the one hand, we have all heard the story that Nietzsche knew he had syphilis; yet, he supposedly refused to seek treatment, and subsequently died from it.

On the other hand, at times it seems as though the Nietzschean term Dr. Gertz could have used would have been “decadence.” Thus, the problem with technology is that it is motivated by decadence and breeds decadence. Ultimately, the problem is that – despite the nowadays obligatory affirmation of the “non-binary” nature of whatever we happen to be talking about – Dr. Gertz frames his conception in terms of the bifurcation: technophile v. technophobe. Yet, Nietzsche is, of course, a transcendental philosopher, so there are three (not two) positions. The third position is Amor Fati.

The ‘predominance of suffering over pleasure’ or the opposite (hedonism): these two doctrines are already signposts to nihilism… that is how a kind of man speaks who no longer dares to posit a will, a purpose, a meaning: for any healthier kind of man the value of life is certainly not measured by the standard of these trifles [pleasure and pain]. And suffering might predominate, and in spite of that a powerful will might exist, a Yes to life, a need for this predominance. (Nietzsche, 1968b: p. 23).

In terms of philosophy of technology, if it is our fate to exist in a world torn asunder by technological mediation, well, then, love it (in this wise, even the “Death of God” can be celebrated). And, here would be the place to mention “postmodern irony,” which Dr. Gertz does not consider. In sum, Dr. Gertz’s use of the term “nihilism” is, to say the least, problematic.

Technology’s Disconnect From Nietzsche Himself

Nietzsche infamously never took to the typewriter. It was invented during his lifetime and, as the story goes, he tried to use the technology but couldn’t get the hang of it, so he went back to writing by hand. This story points to an insight that it seems Dr. Gertz’s book doesn’t consider. For Nietzsche human existence is the point of departure, not technology.

So, according to Nietzsche’s actual logic of “nihilism,” the very idea that technological mediation will lead to a better existence (even if “better” only means “more efficient,” as it could in the case of the typewriter) would make the desire to use a typewriter either a symptom of decadence or an expression of strength; however, neither option manifests in the logic of Gertz’s Nietzsche analysis.

Rather, Dr. Gertz moralizes the use of technology: “Working out which of these perspectives is correct is thus vital for ensuring that technologies are providing us leisure as a form of liberation rather than providing us leisure as a form of dehumanization.” (p. 4). Does the “Who cares?” logic of Gertz’s “nihilism” necessarily lead to an interpretation of Nietzsche as a kind of “Luddite”?

Before moving on to the next part of this review, a few last remarks about how Dr. Gertz uses Nietzsche’s writings are called for. There are nine (9) chapters in Nihilism and Technology. Dr. Gertz primarily uses the first two chapters to speak to the terminology he will use throughout the book. He uses the third chapter to align himself with the academic-cartel, and the remaining chapters are supposed to illustrate his explication of what he calls Nietzsche’s five “human-nihilism relations.” All of these so-called “human-nihilism relations” revolve around discussions which take place only in the “Third Essay” of Nietzsche’s On the Genealogy of Morals – except one foray into The Gay Science.

Two points should be made here. First, Dr. Gertz calls these “nihilism relations,” but they are really just examples of “Slave Mentality.” This should come as no surprise to those familiar with Nietzsche because of where in his writings Dr. Gertz is focused. Moreover, there is not enough space here to fully explain why, but it is problematic to simply replace the term “Slave Mentality” with “nihilism relation.”

Second, among these “nihilism relations” there are two glaring misappropriations of Nietzsche’s writings regarding “pity” and “divinity.” That is, when Dr. Gertz equates “pity sex” (i.e. having “sexual intercourse,” of one kind or another, with someone ostensibly because you “pity” them) with Nietzsche’s famous discussion of pity in On the Genealogy of Morals, it both overlooks Nietzsche’s comments regarding “Master” pity and trivializes the notion of “pity” in Nietzsche.

For, as already noted above, if in your day to day practice of life you remain oriented to the belief that you need an excuse for whatever you do, then you are moralizing. (Remember when we used to think that Nietzsche was “dangerous”?) If you are moralizing, then you’re a nihilist. You’re a nihilist because you believe there is a world that is better than the one that exists. You believe in a world that is nothing. “Conclusion: The faith in the categories of reason is the cause of nihilism. We have measured the value of the world according to categories that refer to a purely fictitious world.” (Nietzsche, 1968b: p. 13).

Lastly, Dr. Gertz notes: “Google stands as proof that humans do not need gods, that humans are capable of fulfilling the role once reserved for the gods.” (p. 199). However, in making that statement he neither accurately speaks of the gods, in general, nor of Nietzsche’s understanding of – for example – Dionysus.

2) The Anti- and Post-Humanist Positions in Philosophy of Technology

In a footnote Dr. Gertz thanks an “anonymous reviewer” for telling him to clarify his position regarding humanism, transhumanism, and posthumanism; however, despite what sounds like his acknowledgement, he does not provide such a clarification. The idea is supposed to be that transhumanism is a kind of humanism, and anti- and post-humanism are philosophies which deny that “human” refers to a “natural category.” It is for this reason that many scholars talk of “two Marxisms.” That is to say, there is the earlier Marxism which takes “human” as a natural category and aims at liberation, and there is the later Marxism which takes “human” to be category constructed by Capital.

It is from this latter idea that the “care for the self” is criticized as something to be sold to “the worker” so as to eventually transform the worker’s work into the work of consumption – this secures perpetual demand, as “the worker” is transformed into the “consumer.” Moreover, this is absolutely of central importance in the philosophy of technology. For, from a point of view that is truly post-human, Dr. Gertz’s moralizing warning that technology may lead to “a form of dehumanization” (p. 4) is an empty threat.

On the one hand, this fidelity to “human” as a natural category comes from Don Ihde’s “postphenomenology.” For Gertz’s idea of “human-nihilism relations” was developed from Ihde’s “human-technology relations” (p. 45). Gertz notes, “Ihde turns Heidegger’s analysis of hammering into an exemplar of how to carry out analyses of human-technology relations, analyses which lead Ihde to expand the field of human-technology relations beyond Heidegger’s examples” (p. 49).

However, there are two significant problems here, both of which point back, again, to the lack of clarification regarding post-humanism. First, Heidegger speaks of Dasein and of Being, not of “human.” Similarly, Nietzsche could say, “The will to overcome an affect is ultimately only the will of another affect, or of several other affects.” (Nietzsche, 1989a: §117), or “There is no ‘being’ behind doing … the ‘doer’ is merely a fiction added to the deed – the deed is everything.” (Nietzsche, 1989b: p. 45).

Second, the section of Being & Time from which “postphenomenology” develops its relations of “co-constitution” is “The Worldhood of the World,” not “Being-in-the-World.” In other words, Dasein is not an aspect of “ready-to-hand” hammering, the ready-to-hand is an aspect of Dasein. Thus, “human” may be seen as a “worldly” “present-at-hand” projection of an “in order to.” Again, this is also why Gertz doesn’t characterize Marxism (p. 5) as “two Marxisms”: he does not consider the anti- or post-humanist readings of Marx.

Hence the importance of clarifying the incommensurability between humanism and post-humanism: Gertz’s characterization of technology as nihilistic due to its de-humanizing effects may turn out to be itself nihilistic in terms of its moralizing (noted in Part I, above) and in terms of its taking the fictional-rational category “human” as more primordial than the (according to Nietzsche) non-discursive sovereign will.

3) His “human-nihilism relations”

Students of the philosophy of technology will find the Chapter 3 discussion of Ihde’s work helpful; going forward, we should inquire whether Ihde’s four categories – in the context of post-humanism and cybernetics – are exhaustive. Moreover, how might each of these categories look from a point of view which takes the fundamental alteration of (human) be-ing by technology to be desirable?

This is a difficult question to navigate because it shifts the context for understanding Gertz’s philic/phobic dichotomy away from “care for the self” and toward a context of “evolutionary selection.” Might public self-awareness, in such a context, influence the evolutionary selection?

So long as one is explicitly taking a stand for humanism, one could argue that the matrix of human-technology relations is a symptom of decadence. Interestingly, such a stance may make Nihilism and Technology, first and foremost, an ethics book and not a philosophy of technology book. Yet presenting only the humanistic point of view leaves one especially, though perhaps not exclusively, open to the counter-argument that the “intellectual” and “philosophical” relations to “technology” that allow for such an analysis into these various discursive identities betray a kind of decadence. It would not be much of a stretch to come to the conclusion that Nietzsche would consider “academics” decadent.

Further, it would also be helpful for philosophy of technology students to consider – from a humanistic point of view – the use of technology to extend human life in light of “human-decadence relations.” Of course, whether or not these relations, in general, lead to nihilism is a separate question. However, the people who profit from the decadence on which these technologies stand will rhetorically bulwark the implementation of their technological procedures in terms of “saving lives.” Here, Nietzsche was again prophetic, as he explicitly considered a philosophy of “survive at all costs” to be a sign of degeneracy and decay.

Contact details: franklscalambrino@gmail.com

References

Blanchot, Maurice. (1995). The Work of Fire. C. Mandell (Trans.). Stanford, CA: Stanford University Press.

Deleuze, Gilles. (2006). Nietzsche and Philosophy. H. Tomlinson (Trans.). New York: Columbia University Press.

Heidegger, Martin. (1987). D.F. Krell (Ed.). Nietzsche, Vol. IV: Nihilism. F.A. Capuzzi (Trans.). New York: Harper & Row.

Nietzsche, Friedrich. (1989a). Beyond Good and Evil: Prelude to a Philosophy of the Future. W. Kaufmann (Trans.). New York: Vintage.

_____. (1989b). On the Genealogy of Morals / Ecce Homo. W. Kaufmann (Trans.). New York: Vintage Books.

_____. (1968a). Twilight of the Idols/The Anti-Christ. R.J. Hollingdale (Trans.). Middlesex, England: Penguin Books.

_____. (1968b). The Will to Power. W. Kaufmann and R.J. Hollingdale (Trans.). New York: Vintage Books.

Schaberg, William H. (1995). The Nietzsche Canon: A Publication History and Bibliography. Chicago: University of Chicago Press.

Author Information: Luca Tateo, Aalborg University & Federal University of Bahia, luca@hum.aau.dk.

Tateo, Luca. “Ethics, Cogenetic Logic, and the Foundation of Meaning.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 1-8.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44i

Mural entitled “Paseo de Humanidad” on the Mexican side of the US border wall in the city of Heroica Nogales, in Sonora. Art by Alberto Morackis, Alfred Quiróz and Guadalupe Serrano.
Image by Jonathan McIntosh, via Flickr / Creative Commons

 

This essay is in reply to: Miika Vähämaa (2018) Challenges to Groups as Epistemic Communities: Liminality of Common Sense and Increasing Variability of Word Meanings, Social Epistemology, 32:3, 164-174, DOI: 10.1080/02691728.2018.1458352

In his interesting essay, Vähämaa (2018) discusses two issues that I find particularly relevant. The first one concerns the foundation of meaning in language, which in the era of connectivism (Siemens, 2005) and post-truth (Keyes, 2004) becomes problematic. The second issue is the appreciation of epistemic virtues in a collective context: how can the group enhance the epistemic skills of the individual?

I will try to explain why these problems are relevant and why it is worth developing Vähämaa’s (2018) reflection in the specific direction of group and person as complementary epistemic and ethical agents (Fricker, 2007). First, I will discuss the foundations of meaning in different theories of language. Then, I will discuss the problems related to the stability and liminality of meaning in the society of “popularity”. Finally, I will propose the idea that the range of contemporary epistemic virtues should be complemented by an ethical grounding of meaning and a cogenetic foundation of meaning.

The Foundation of Meaning in Language

The theories about the origins of human language can be grouped into four main categories, based on the elements they take to characterize ontogenesis and glottogenesis.

Sociogenesis Hypothesis (SH): this is the idea that language is a conventional product that historically originates from coordinated social activities and is ontogenetically internalized through individual participation in social interactions. The characteristic authors in SH are Wundt, Wittgenstein and Vygotsky (2012).

Praxogenesis Hypothesis (PH): this is the idea that language historically originates from praxis and coordinated actions. Ontogenetically, language emerges from sensorimotor coordination (e.g. gaze coordination). It is, for instance, the position of Mead, the idea of linguistic primes in Smedslund (Vähämaa, 2018) and the language-as-action theory of Austin (1975).

Phylogenesis Hypothesis (PhH): this is the idea that humans have been provided by evolution with an innate “language device”, emerging from the evolutionary preference for forming social groups of hunters and for collective long-duration offspring care (Bouchard, 2013). Ontogenetically, the predisposition for language is wired in the brain and develops through maturation in social groups. This position is represented by evolutionary psychology and by innatism such as Chomsky’s linguistics.

Structure Hypothesis (StH): this is the idea that human language is a more or less logical system, in which the elements are determined by reciprocal systemic relationships, partly conventional and partly ontic (Thao, 2012). This hypothesis is not really concerned with ontogenesis, but rather with the formal features of symbolic systems of distinctions. It is, for instance, the classical idea of Saussure and of structuralists like Derrida.

According to Vähämaa (2018), every theory of meaning has to deal today with the problem of a dramatic change in the way common sense knowledge is produced, circulated and modified in collective activities. Meaning needs some stability in order to be of collective utility. Moreover, meaning needs some validation to become stable.

The PhH solves this problem with a simple idea: if humans have survived and evolved, their evolutionary strategy about meaning is successful. In a natural “hostile” environment, our ancestors must have found a way to communicate such that a danger would be understood in the same way by all the group members and under different conditions, including when the danger is not actually present, as in bonfire tales or myths.

The PhH becomes problematic when we consider the post-truth era. What would be the evolutionary advantage of deconstructing the environmental foundations of meaning, even in a virtual environment? For instance, what would be the evolutionary advantage of the common sense belief that global warming is not a reality, considering that this false belief could bring mankind to extinction?

StH leads to the view of meaning as a configuration of formal conditions. Thus, stability is guaranteed by structural relations of the linguistic system, rather than by the contribution of groups or individuals as epistemic agents. StH cannot account for the rapidity and liminality of meaning that Vähämaa (2018) attributes to common sense nowadays. SH and PH share the idea that meaning emerges from what people do together, and that stability is both the condition and the product of the fact that we establish contexts of meaningful actions, ways of doing things in a habitual way.

The problem today is that our accelerated Western capitalist societies have multiplied the ways of doing things and the number of groups in society, decoupling the habitual from common sense meaning. New habits, new words, personal actions and meanings are built, disseminated and destroyed in a short time. So, if “Our lives, with regard to language and knowledge, are fundamentally bound to social groups” (Vähämaa, 2018, p. 169), what happens to language and to knowledge when social groups multiply, segregate and disappear in a short time?

From Common Sense to the Bubble

The grounding of meaning in the group as epistemic agent has received a serious blow in the era of connectivism and post-truth. The idea of connectivism is that knowledge is distributed among the different agents of a collective network (Siemens, 2005). Knowledge does not reside in the “mind” or in a “memory”, but is rather produced in bits and pieces that the epistemic agent is required to search for and to assemble through the collective effort of the group’s members.

Thus, depending on the configuration of the network, different information will be connected, and different pictures of the world will emerge. The meaning of the words will be different if, for instance, the network of information is aggregated by different groups in combination with specific algorithms. The configuration of groups, mediated by social media, as in the case of contemporary politics (Lewandowsky, Ecker & Cook, 2017), leads to the reproduction of “bubbles” of people who share the very same views, and are exposed to the very same opinions, selected by an algorithm that will show only the content compliant with their previous content preferences.

The result is that the group loses a great deal of its epistemic capability, which Vähämaa (2018) suggests as a foundation of meaning. The meaning of words that will be preferred in this kind of epistemic bubble is the result of two operations of selection that are based on popularity. First, the meaning will be aggregated by consensual agents, rather than dialectic ones. Meaning will always be convergent rather than controversial.

Second, between alternative meanings, the most “popular” will be chosen, rather than the most reliable. The epistemic bubble of connectivism originates from a misunderstanding. The idea is that a collectivity has more epistemic force than the individual alone, to the extent that any belief is scrutinized democratically and that, if every agent can contribute with its own bit, the knowledge will be more reliable, because it is the result of a constant and massive peer-review. Unfortunately, events show us a different picture.
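These two selection operations can be pictured schematically. The sketch below is my own illustration, not a procedure proposed by Vähämaa or by connectivist authors; the feed, the views and the reliability scores are invented placeholders. It only shows how aggregation by like-minded agents, followed by a popularity ranking, can leave the most repeated meaning standing rather than the most reliable one.

```python
# A hypothetical sketch of popularity-based meaning selection inside a bubble.
# All data and names are invented for illustration only.
from collections import Counter

def bubble_filter(posts, my_view):
    """Operation 1: keep only posts from agents who already share my view."""
    return [p for p in posts if p["view"] == my_view]

def most_popular_meaning(posts):
    """Operation 2: pick the most repeated meaning, ignoring reliability."""
    counts = Counter(p["meaning"] for p in posts)
    return counts.most_common(1)[0][0] if counts else None

feed = [
    {"view": "A", "meaning": "migrant = criminal", "reliability": 0.1},
    {"view": "A", "meaning": "migrant = criminal", "reliability": 0.1},
    {"view": "B", "meaning": "migrant = person seeking safety", "reliability": 0.9},
]

inside_bubble = bubble_filter(feed, my_view="A")
print(most_popular_meaning(inside_bubble))  # "migrant = criminal", despite its low reliability
```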

Post-truth is actually a massive action of epistemic injustice (Fricker, 2007), to the extent that the reliability of the other as epistemic agent is based on criteria of similarity, rather than on dialectic. One is reliable as long as one is located within my own bubble. Everything outside is “fake news”. The algorithmic selection of information contributes to reinforcing the polarization. Thus, no hybridization becomes possible, and common sense (Vähämaa, 2018) is reduced to the common bubble. How can the epistemic community still be a source of meaning in the connectivist era?

Meaning and Common Sense

SH and PH about language point to a very important historical source: the philosopher Giambattista Vico (Danesi, 1993; Tateo, 2015). Vico can be considered the scholar of common sense and imagination (Tateo, 2015). Knowledge is built as a product of human experience and crystallized into the language of a given civilization. Civilization is the set of interpretations and solutions that different groups have found to respond to common existential events, such as birth, death, mating, natural phenomena, etc.

According to Vico, all human beings share a fate of mortal existence and rely on each other to get along. This is the notion of common sense: the profound sense of humanity that we all share and that constitutes the ground for human ethical choices, wisdom and collective living. Humans rely on imagination, before reason, to project themselves into others and into the world, in order to understand them both. Imagination is the first step towards the understanding of Otherness.

When humans lose contact with this sensus communis, the shared sense of humanity, and start building their meaning on egoism or on pure rationality, civilizations slip into barbarism. Imagination thus gives access to intersubjectivity, the capability of feeling the other, while common sense constitutes the wisdom of developing ethical beliefs that will not harm the other. Vico’s ideas are echoed and made present by critical theory:

“We have no doubt (…) that freedom in society is inseparable from enlightenment thinking. We believe we have perceived with equal clarity, however, that the very concept of that thinking (…) already contains the germ of the regression which is taking place everywhere today. If enlightenment does not [engage in] reflection on this regressive moment, it seals its own fate (…) In the mysterious willingness of the technologically educated masses to fall under the spell of any despotism, in its self-destructive affinity to nationalist paranoia (…) the weakness of contemporary theoretical understanding is evident.” (Horkheimer & Adorno, 2002, xvi)

Common sense is the basis for the wisdom that allows one to question the foundational nature of the bubble. It is the basis for understanding that every meaning is not only defined in a positive way, but is also defined by its complementary opposite (Tateo, 2016).

When one uses the semantic prime “we” (Vähämaa, 2018), one immediately produces a system of meaning that implies the existence of a “non-we”: one is producing otherness. In return, the meaning of “we” can only be clearly defined through the clarification of who is “non-we”. Meaning is always cogenetic (Tateo, 2015). Without the capability to understand that by saying “we” people construct a cogenetic complex of meaning, the group is reduced to a self-confirming, self-reinforcing collective, in which the sense of being a valid epistemic agent is actually faked, because it is nothing but an act of epistemic arrogance.

How can we solve the problem of the epistemic bubble and give the relationship between group and person real epistemic value? How can we overcome the dangerous overlap between the sense of being functional in the group and false beliefs based on popularity?

Complementarity Between Meaning and Sense

My idea is that we must look into that complex space between “meaning”, understood as a collectively shared complex of socially constructed significations, and “sense”, understood as the very personal elaboration of meaning which is based on the person’s uniqueness (Vygotsky, 2012; Wertsch, 2000). Meaning and sense feed into each other, like common sense and imagination. Imagination is the psychic function that enables the person to feel into the other, and thus to establish the ethical and affective ground for common sense wisdom. It is the empathic movement for which Kant would later seek a logical foundation.

“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.” (Kant 1993, p. 36. 4:429)

I would further claim that maybe these two feed into each other as well: the logical foundation is made possible by the synthetic power of empathic imagination. Meaning and sense, likewise, feed into each other. On the one hand, the collective is the origin of internalized psychic activities (SH), and thus the basis for the sense elaborated about one’s own unique life experience. On the other hand, personal sense constitutes the basis for the externalization of meaning into the arena of collective activities, constantly innovating the meaning of words.

So, personal sense can be a strong antidote to the prevailing force of the meaning produced, for instance, in the epistemic bubble. My sense of what is “ought”, “empathic”, “human” and “ethical”, in other words my wisdom, can help me to develop a critical stance towards meanings that are built in a self-feeding, uncritical way.

Can the dialectic, complementary and cogenetic relationship between sense and meaning become the ground for a better epistemic performance, and for an appreciation of the liminal meaning produced in contemporary societies? In the last section, I will try to provide arguments in favor of this idea.

Ethical Grounding of Meaning

If connectivist and post-truth societies produce meanings that are based on popularity checks, rather than on epistemic appreciation, we risk a situation in which any belief is the contingent result of a collective epistemic agent which replicates its patterns into bubbles. One will just listen to messages that confirm her own preferences and beliefs and reject the different ones as unreliable. Inside the bubble there is no way to check the meaning, because the meaning is not cogenetic; it is consensual.

For instance, if I read and share a post on social media claiming that migrants are the main criminal population, then, whatever my initial position toward the news, there is the possibility that within my group I will start to see only posts confirming the initial claim. The claim can be proven wrong, for instance by the press, but the belief will be hard to change, as the meaning of “migrant” in my bubble is likely to continue being that of “criminal”. The collectivity will share an epistemically unjust position, to the extent that it will attribute a lessened epistemic capability to those who are not part of the group itself. How can one avoid the group scaffolding “bad” epistemic skills, rather than empowering the individual (Vähämaa, 2018)?

The solution I propose is to develop an epistemic virtue based on two main principles: the ethical grounding of meaning and cogenetic logic. The ethical grounding of meaning is directly related to the articulation between common sense and wisdom in the sense of Vico (Tateo, 2015). In a post-truth world in which we cannot appreciate the epistemic foundation of meaning, we must rely on a different epistemic virtue in order to become critical toward messages. Ethical grounding, based on the personal sense of humanity, is of course not an epistemic test of reliability, but it is an alarm bell that prompts us to become legitimately suspicious toward meanings. The second element of the new epistemic virtue is cogenetic logic (Tateo, 2016).

Meaning is grounded in the building of every belief as a complementary system between “A” and “non-A”. This implies that any meaning is constructed through the relationship with its complementary opposite. Truth emerges in a double dialectical movement (Silva Filho, 2014): through Socratic dialogue and through cogenetic logic. In conclusion, let me try to provide a practical example of this epistemic virtue.

The way to begin discriminating potentially fake news or tendentious interpretations of facts would be essentially based on an ethical foundation. As in Vico’s wisdom of common sense, I would base my epistemic scrutiny on the imaginative work that allows me to access the other and on the cogenetic logic that assumes every meaning is defined by its relationship with its opposite.

Let’s imagine that we are exposed to a post on social media in which someone states that a caravan of migrants travelling from Honduras across Central America toward the US border is actually made up of criminals sent by hostile foreign governments to destabilize the country right before elections. The same post claims that this is a conspiracy and that all the press coverage is fake news.

Finally, the post presents some “debunking” pictures showing athletic young Latino men, their faces covered by scarves, to demonstrate that the caravan is not made up of families with children but of “soldiers” in good shape who do not look poor and desperate, as the “mainstream” media claim. I do not know whether such a post has ever been made; I have simply assembled elements of very common discourses circulating on social media.

The task now is to assess the nature of this message, its meaning and its reliability. I could rely on the group as a ground for assessing statements, to scrutinize their truth and justification. However, because of the “bubble” effect, I may fall into simple tautological confirmation, given the configuration of my network of relations. I would probably find only posts confirming the statements and delegitimizing the opposite positions. In this case, the fact that the group empowers my epistemic confidence is a very dangerous element.

I could at least search for alternative positions with which to establish a dialogue. However, I might not be able, alone, to find information that helps me assess the statement with respect to its degree of bias. How can I exert my skepticism in a context of post-truth? I propose some initial epistemic moves, based on a common sense approach to meaning-making.

1) I must be skeptical of every message that uses violent, aggressive, or discriminatory language, and treat such messages as “fake” by default.

2) I must be skeptical of every message that criminalizes or targets whole social groups, even on the basis of real but isolated events, because this interpretation is biased by default.

3) I must be skeptical of every message that attacks or targets persons for their characteristics rather than discussing ideas or behaviors.

Assessing the hypothetical post about the caravan against the three rules mentioned above, one will immediately see that it violates all of them. Thus, no matter what information my epistemic bubble has collected, I have justified reasons to be skeptical toward it. The foundation of the meaning of the message will lie neither in the group nor in the person; it will rest on the ethical position of common sense’s wisdom.

Contact details: luca@hum.aau.dk

References

Austin, J. L. (1975). How to do things with words. Oxford: Oxford University Press.

Bouchard, D. (2013). The nature and origin of language. Oxford: Oxford University Press.

Danesi, M. (1993). Vico, metaphor, and the origin of language. Bloomington: Indiana University Press.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.

Horkheimer, M., & Adorno, T. W. (2002). Dialectic of Enlightenment. Trans. Edmund Jephcott. Stanford: Stanford University Press.

Kant, I. (1993) [1785]. Grounding for the Metaphysics of Morals. Translated by Ellington, James W. (3rd ed.). Indianapolis and Cambridge: Hackett.

Keyes, R. (2004). The Post-Truth Era: Dishonesty and Deception in Contemporary Life. New York: St. Martin’s.

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353-369.

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1) http://www.itdl.org/Journal/Jan_05/article01.htm

Silva Filho, W. J. (2014). Davidson: Dialog, dialectic, interpretation. Utopía y praxis latinoamericana, 7(19).

Tateo, L. (2015). Giambattista Vico and the psychological imagination. Culture & Psychology, 21(2), 145-161.

Tateo, L. (2016). Toward a cogenetic cultural psychology. Culture & Psychology, 22(3), 433-447.

Thao, T. D. (2012). Investigations into the origin of language and consciousness. New York: Springer.

Vähämaa, M. (2018). Challenges to groups as epistemic communities: Liminality of common sense and increasing variability of word meanings. Social Epistemology, 32(3), 164-174. DOI: 10.1080/02691728.2018.1458352

Vygotsky, L. S. (2012). Thought and language. Cambridge, MA: MIT Press.

Wertsch, J. V. (2000). Vygotsky’s two minds on the nature of meaning. In C. D. Lee & P. Smagorinsky (Eds.), Vygotskian perspectives on literacy research: Constructing meaning through collaborative inquiry (pp. 19-30). Cambridge: Cambridge University Press.

Author Information: Kamili Posey, Kingsborough College, Kamili.Posey@kbcc.cuny.edu.

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-15.

Kamili Posey’s article was posted over two instalments. You can read the first here, but the pdf of the article includes the entire piece, and gives specific page references. Shortlink: https://wp.me/p1Bfg0-41k

Image by Rigoberto Garcia via Flickr / Creative Commons

 

In the previous piece, I outlined some concerns about philosophers, particularly philosophers of social science, assuming the success of implicit interventions into implicit bias. Motivated by a pointed note by Jennifer Saul (2017), I aimed to briefly go through some of the models lauded as offering successful interventions and, in essence, “get out of the armchair.”

(IAT) Models and Egalitarian Goal Models

In this final piece, I go through the last two models, Glaser and Knowles’ (2007) and Blair et al.’s (2001) (IAT) models and Moskowitz and Li’s (2011) egalitarian goal model. I reiterate that this is not an exhaustive analysis of such models nor is it intended as a criticism of experiments pertaining to implicit bias. Mostly, I am concerned that the science is interesting but that the scientism – the application of tentative results to philosophical projects – is less so. It is from this point that I proceed.

Like Mendoza et al.’s (2010) implementation intentions, Glaser and Knowles’ (2007) (IMCP) aims to capture implicit motivations that are capable of inhibiting automatic stereotype activation. Glaser and Knowles measure (IMCP) in terms of an implicit negative attitude toward prejudice, or (NAP), and an implicit belief that oneself is prejudiced, or (BOP). This is done by retooling the (IAT) to fit both (NAP) and (BOP): “To measure NAP we constructed an IAT that pairs the categories ‘prejudice’ and ‘tolerance’ with the categories ‘bad’ and ‘good.’ BOP was assessed with an IAT pairing ‘prejudiced’ and ‘tolerant’ with ‘me’ and ‘not me.’”[1]

Study participants were then administered the Shooter Task, the (IMCP) measures, and the Race Prejudice (IAT) and Race-Weapons Stereotype (RWS) tests in a fixed order. They predicted that (IMCP) as an implicit goal for those high in (IMCP) “should be able to short-circuit the effect of implicit anti-Black stereotypes on automatic anti-Black behavior.”[2] The results seemed to suggest that this was the case. Glaser and Knowles found that study participants who viewed prejudice as particularly bad “[showed] no relationship between implicit stereotypes and spontaneous behavior.”[3]

There are a few considerations missing from the evaluation of the study results. First, with regard to the Shooter Task, Glaser and Knowles (2007) found that “the interaction of target race by object type, reflecting the Shooter Bias, was not statistically significant.”[4] That is, the strength of the relationship that Correll et al. (2002) found between study participants and the (high) likelihood that they would “shoot” at black targets was not found in the present study. Additionally, they note that they “eliminated time pressure” from the task itself. Although it was not suggested that this impacted the usefulness of the measure of Shooter Bias, it is difficult to imagine that it did not do so. To this, they footnote the following caveat:

Variance in the degree and direction of the stereotype endorsement points to one reason for our failure to replicate Correll et. al’s (2002) typically robust Shooter Bias effect. That is, our sample appears to have held stereotypes linking Blacks and weapons/aggression/danger to a lesser extent than did Correll and colleagues’ participants. In Correll et al. (2002, 2003), participants one SD below the mean on the stereotype measure reported an anti-Black stereotype, whereas similarly low scorers on our RWS IAT evidenced a stronger association between Whites and weapons. Further, the adaptation of the Shooter Task reported here may have been less sensitive than the procedure developed by Correll and colleagues. In the service of shortening and simplifying the task, we used fewer trials, eliminated time pressure and rewards for speed and accuracy, and presented only one background per trial.[5]

Glaser and Knowles claimed that the interaction of the (RWS) with the Shooter Task results proved “significant”; however, if the Shooter Bias failed to materialize (in the standard Correll et al. way) with study participants, it is difficult to see how the (RWS) was measuring anything except itself, generally speaking. This is further complicated by the fact that the interaction between the Shooter Bias and the (RWS) revealed “a mild reverse stereotype associating Whites with weapons (d = -0.15) and a strong stereotype associating Blacks with weapons (d = 0.83), respectively.”[6]

Recall that Glaser and Knowles (2007) aimed to show that participants high in (IMCP) would be able to inhibit implicit anti-black stereotypes and thus inhibit automatic anti-black behaviors. Using (NAP) and (BOP) as proxies for implicit control, participants high in (NAP) and moderate in (BOP) – as those with moderate (BOP) will be motivated to avoid bias – should show the weakest association between (RWS) and Shooter Bias. Instead, the lowest levels of Shooter Bias were seen in “low NAP, high BOP, and low RWS” study participants, that is, those who do not disapprove of prejudice, who would describe themselves as prejudiced, and who also showed the lowest levels of (RWS).[7]

They noted that neither “NAP nor BOP alone was significantly related to the Shooter Bias,” but “the influence of RWS on Shooter Bias remained significant.”[8] In fact, greater bias was actually found with higher (NAP) and (BOP) levels.[9] This bias seemed to map onto the initial Shooter Task results. It is most likely that (RWS) was the most important measure in this study for assessing implicit bias, not, as the study claimed, for assessing implicit motivation to control prejudice.

What Kind of Bias?

It is also not clear whether the (RWS) was capturing explicit rather than implicit bias in this study. At the point at which study participants were tasked with the (RWS), automatic stereotype activation may have been inhibited simply in virtue of study participants’ involvement in the Shooter Task and (IAT) assessments regarding race-related prejudice. That is, race-sensitivity was brought to consciousness in the sequencing of the test process.

Although we cannot get into the heads of the study participants, this counter-explanation seems a compelling possibility: the sequential tasks involved in the study may have captured study participants’ ability to increase focus and conscious attention to the race-related (IAT) test. Additionally, it is possible that some study participants could both cue and follow their own conscious internal commands: “If I see a black face, I won’t judge!” Consider that this is exactly how implementation intentions work.

Consider that this is also how Armageddon chess and other speed strategy games work. In their (2008) follow-up study on (IMCP) and cognitive depletion, Park et al. retreat somewhat from the initial claims about the implicit nature of (IMCP):

We cannot state for certain that our measure of IMCP reflects a purely nonconscious construct, nor that differential speed to “shoot” Black armed men vs. White armed men in a computer simulation reflects purely automatic processes. Most likely, the underlying stereotypes, goals, and behavioral responses represent a blend of conscious and nonconscious influences…Based on the results of the present study and those of Glaser and Knowles (2008), it would be premature to conclude that IMCP is a purely and wholly automatic construct, meeting the “four horsemen” criteria (Bargh, 1990). Specifically, it is not yet clear whether high IMCP participants initiate control of prejudice without intention; whether implicit control of prejudice can itself be inhibited, if for some reason someone wanted to; nor whether IMCP-instigated control of spontaneous bias occurs without awareness.[10]

If the (IMCP) potentially measures low-level conscious attention, this makes the question of what implicit measurements actually measure in the context of sequential tasks all the more important. In the two final examples, Blair et al.’s (2001) study on the use of counterstereotype imagery and Moskowitz and Li’s (2011) study on the use of counterstereotype egalitarian goals, we are again confronted with the issue of sequencing. In the study by Moskowitz and Li, study participants were asked to write down an example of a time when “they failed to live up to the ideal specified by an egalitarian goal, and to do so by relaying an event relating to African American men.”[11]

They were then given a series of computerized LDTs (lexical decision tasks) and primes involving photographs of black and white faces and stereotypical and non-stereotypical attributes of black people (crime, lazy, stupid, nervous, indifferent, nosy). Over a series of four experiments, Moskowitz and Li found that when egalitarian goals were “accessible,” study participants were able to successfully generate stereotype inhibition. Blair et al. asked study participants to use counterstereotypical (CS) gender imagery over a series of five experiments, e.g., “Think of a strong, capable woman,” and then administered a series of implicit measures, including the (IAT).

Similar to Moskowitz and Li (2011), Blair et al. (2001) found that (CS) gender imagery was successful in reducing implicit gender stereotypes leaving “little doubt that the CS mental imagery per se was responsible for diminishing implicit stereotypes.”[12] In both cases, the study participants were explicitly called upon to focus their attention on experiences and imagery pertaining to negative stereotypes before the implicit measures, i.e., tasks, were administered. Again it is not clear that the implicit measures measured the supposed target.

In the case of Moskowitz and Li’s (2011) experiment, the study participants began by relating moments in their lives where they failed to live up to their goals. However, those goals can only be understood within a particular social and political framework where holding negatively prejudicial beliefs about African-American men is often explicitly judged harshly, even if not implicitly so. Given this, we might assume that the study participants were compelled into a negative affective state. But does this matter? As suggested by the study by Monteith (1993), and the later study by Amodio et al. (2007), guilt can be a powerful tool.[13]

Questions of Guilt

If guilt was produced during the early stages of the experiment, it may have also participated in the inhibition of stereotype activation. Moskowitz and Li (2011) noted that “during targeted questioning in the debriefing, no participants expressed any conscious intent to inhibit stereotypes on the task, nor saw any of the tasks performed during the computerized portion of the experiment as related to the egalitarian goals they had undermined earlier in the session.”[14]

But guilt does not have to be conscious for it to produce effects. The guilt produced by recalling a moment of negative bias could be part and parcel of a larger feeling of moral failure. Moskowitz and Li needed to adequately disambiguate competing implicit motivations for stereotype inhibition before arriving at a definitive conclusion. This, I think, is a limitation of the study.

However, the same case could be made for (CS) imagery. Blair et al. (2001) noted that it is, in fact, possible that they too have missed competing motivations and competing explanations for stereotype inhibition. In particular, they suggested that by emphasizing counterstereotyping the researchers “may have communicated the importance of avoiding stereotypes and increased their motivation to do so.”[15] Still, the researchers dismissed the idea that this would lead to better (faster, more accurate) performance on the (IAT), but that is merely to assert that the (IAT) must measure exactly what the (IAT) claims it does. Fast, accurate, and conscious measures are excluded from that claim. Complicated internal motivations are excluded from that claim.

But on what grounds? Consider Fiedler et al.’s (2006) argument that the (IAT) is susceptible to faking and strategic processing, or Brendl et al.’s (2001) argument that it is not possible to infer a single cause from (IAT) results, or Fazio and Olson’s (2003) claim that “the IAT has little to do with what is automatically activated in response to a given stimulus.”[16]

These studies call into question the claim that implicit measures like the (IAT) can measure implicit bias in the clear, problem-free manner that is often suggested in the literature. Implicit interventions into implicit bias that utilize the (IAT) are difficult to support for this reason. Implicit interventions that utilize sequential (IAT) tasks are also difficult to support for this reason. Of course, this is also a live debate, and the problems I have discussed here are far from the only ones that plague this type of research.[17]

That said, when it comes to this research we are too often left wondering whether the measure itself is measuring the right thing. Are we capturing implicit bias or some other socially generated phenomenon? Are the measured changes we see in study results reflecting the validity of the instrument or the cognitive maneuverings of study participants? These are all critical questions that need sussing out. In the meantime, the target conclusion that implicit interventions will lead to reductions in real-world discrimination moves further out of reach.[18] We find evidence for this in Forscher et al.’s (2018) meta-analysis of 492 implicit interventions:

We found little evidence that changes in implicit measures translated into changes in explicit measures and behavior, and we observed limitations in the evidence base for implicit malleability and change. These results produce a challenge for practitioners who seek to address problems that are presumed to be caused by automatically retrieved associations, as there was little evidence showing that change in implicit measures will result in changes for explicit measures or behavior…Our results suggest that current interventions that attempt to change implicit measures will not consistently change behavior in these domains. These results also produce a challenge for researchers who seek to understand the nature of human cognition because they raise new questions about the causal role of automatically retrieved associations…To better understand what the results mean, future research should innovate with more reliable and valid implicit, explicit, and behavioral tasks, intensive manipulations, longitudinal measurement of outcomes, heterogeneous samples, and diverse topics of study.[19]

Finally, what I take to be behind Alcoff’s (2010) critical question at the beginning of this piece is a kind of skepticism about how individuals can successfully tackle implicit bias through either explicit or implicit practices without the support of the social spaces, communities, and institutions that give shape to our social lives. Implicit bias is related to the culture one is in and the stereotypes it produces. So instead of insisting on changing people to reduce stereotyping, what if we insisted on changing the culture?

As Alcoff notes: “We must be willing to explore more mechanisms for redress, such as extensive educational reform, more serious projects of affirmative action, and curricular mandates that would help to correct the identity prejudices built up out of faulty narratives of history.”[20] This is an important point. It is a point that philosophers who work on implicit bias would do well to take seriously.

Science may not give us the way out of racism, sexism, and gender discrimination. At the moment, it may only give us tools for seeing ourselves a bit more clearly. Further claims about implicit interventions appear as willful scientism. They reinforce the belief that science can cure all of our social and political ills. But this is magical thinking.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, p. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

[2] Glaser, Jack and Knowles, Eric D. (2007), p. 167.

[3] Glaser, Jack and Knowles, Eric D. (2007), p. 170.

[4] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[5] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[6] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[7] Glaser, Jack and Knowles, Eric D. (2007), p. 169. Of this “rogue” group, Glaser and Knowles note: “This group had, on average, a negative RWS (i.e., rather than just a low bias toward Blacks, they tended to associate Whites more than Blacks with weapons; see footnote 4). If these reversed stereotypes are also uninhibited, they should yield reversed Shooter Bias, as observed here” (169).

[8] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[9] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[10] Sang Hee Park, Jack Glaser, and Eric D. Knowles. (2008). “Implicit Motivation to Control Prejudice Moderates the Effect of Cognitive Depletion on Unintended Discrimination,” in Social Cognition, Vol. 26, No. 4, p. 416.

[11] Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

[12] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

[13] Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30

[14] Moskowitz, Gordon and Li, Peizhong (2011), p. 108.

[15] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001), p. 838.

[16] Fiedler, Klaus, Messner, Claude, Bluemke, Matthias. (2006). “Unresolved Problems with the ‘I’, the ‘A’, and the ‘T’: A Logical and Psychometric Critique of the Implicit Association Test (IAT),” in European Review of Social Psychology, 12, pp. 74-147. Brendl, C. M., Markman, A. B., & Messner, C. (2001). “How Do Indirect Measures of Evaluation Work? Evaluating the Inference of Prejudice in the Implicit Association Test,” in Journal of Personality and Social Psychology, 81(5), pp. 760-773. Fazio, R. H., and Olson, M. A. (2003). “Implicit Measures in Social Cognition Research: Their Meaning and Uses,” in Annual Review of Psychology 54, pp. 297-327.

[17] There is significant debate over the issue of whether the implicit bias that (IAT) tests measure translate into real-world discriminatory behavior. This is a complex and compelling issue. It is also an issue that could render moot the (IAT) as an implicit measure of anything full stop. Anthony G. Greenwald, Mahzarin R. Banaji, and Brian A. Nosek (2015) write: “IAT measures have two properties that render them problematic to use to classify persons as likely to engage in discrimination. Those two properties are modest test–retest reliability (for the IAT, typically between r = .5 and r = .6; cf., Nosek et al., 2007) and small to moderate predictive validity effect sizes. Therefore, attempts to diagnostically use such measures for individuals risk undesirably high rates of erroneous classifications. These problems of limited test-retest reliability and small effect sizes are maximal when the sample consists of a single person (i.e., for individual diagnostic use), but they diminish substantially as sample size increases. Therefore, limited reliability and small to moderate effect sizes are not problematic in diagnosing system-level discrimination, for which analyses often involve large samples” (557). However, Oswald et al. (2013) argue that “IAT scores correlated strongly with measures of brain activity but relatively weakly with all other criterion measures in the race domain and weakly with all criterion measures in the ethnicity domain. IATs, whether they were designed to tap into implicit prejudice or implicit stereotypes, were typically poor predictors of the types of behavior, judgments, or decisions that have been studied as instances of discrimination, regardless of how subtle, spontaneous, controlled, or deliberate they were. Explicit measures of bias were also, on average, weak predictors of criteria in the studies covered by this meta-analysis, but explicit measures performed no worse than, and sometimes better than, the IATs for predictions of policy preferences, interpersonal behavior, person perceptions, reaction times, and microbehavior. Only for brain activity were correlations higher for IATs than for explicit measures…but few studies examined prediction of brain activity using explicit measures. Any distinction between the IATs and explicit measures is a distinction that makes little difference, because both of these means of measuring attitudes resulted in poor prediction of racial and ethnic discrimination” (182-183). For further details about this debate, see: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192 and Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

[18] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

[19] Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

[20] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “Imagining a Different Political Economy.” Social Epistemology Review and Reply Collective 7, no. 9 (2018): 7-11.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40v

Image by Rachel Adams via Flickr / Creative Commons

 

One cannot ask for a kinder or more complimentary reviewer than Adam Riggio.[1] His main complaint about my book, The Quest for Prosperity, is that “Stylistically, the book suffers from a common issue for many new research books in the humanities and social sciences. Its argument loses some momentum as it approaches the conclusion, and ends up in a more modest, self-restrained place than its opening chapters promised.”

My opening examination of what I see as misconceptions in some presuppositions used in political economy is a first, necessary step towards an examination of recent capitalist variants (heralded as the best prospects for the future organization of market exchanges) and towards the different approach to political economy offered at the end of the book. Admittedly, my vision of a radically reframed political economy that exposes some taken-for-granted concepts, such as scarcity, human nature, competition, and growth, is an ambitious task, and perhaps, as Riggio suggests, I should attempt a more detailed articulation of the economy in a sequel.

However, this book does examine alternative frameworks, discusses in some detail what I consider misguided attempts to skirt the moral concerns I emphasize so as to retain the basic capitalist framework, and suggests principles that ought to guide a reframed political economy, one more attentive to the moral principles of solidarity and cooperation, empathy towards fellow members of a community, and a mindful avoidance of grave inequalities that are not limited to financial measures. In this respect, the book delivers more than Riggio suggests.

On Questions of Character

Riggio also argues that my

templates for communitarian alternatives to the increasingly brutal culture of contemporary capitalism share an important common feature that is very dangerous for [my] project. They are each rooted in civic institutions, material social structures for education, and socialization. Contrary to how [I] spea[k] of these four inspirations, civil rights and civic institutions alone are not enough to build and sustain a community each member of whom holds a communitarian ethical philosophy and moral sense deep in her heart.

This, too, is true to some extent. Even if I successfully convince you that you are working with misconceptions about human nature, scarcity, and growth, for example, you may still not modify your behavior. Likewise, even if I offer brilliant exemplars of how “civil rights and civic institutions” should be organized and legally enshrined, that does not mean that every member of the community will abide by them and behave appropriately.

Mean-spirited or angry individuals might spoil life for the more friendly and self-controlled ones, and Riggio is correct to point out that “a communitarian ethical philosophy and moral sense deep in [one’s] heart” are insufficient for overcoming the brutality of capitalist greed. But focusing on this set of concerns (rather than offering a more efficient or digitally sophisticated platform for exchanges) could, Riggio would agree, be a good starting point, and might therefore encourage more detailed analyses of policies and the regulation of unfettered capitalist practices.

I could shirk my responsibility here and plead for cover under the label of a philosopher who lacks the expertise of a good old-fashioned social scientist or policy wonk who can advise how best to implement my proposals. But I set myself up to engage political economy in all its manifold facets, and Riggio is correct when he points out that my “analysis of existing institutions and societies that foster communitarian moralities and ethics is detailed enough to show promise, but unfortunately so brief as to leave us without guidance or strategy to fulfill that promise.”

But, when critically engaging not only the latest gimmicks being proposed under the capitalist umbrella (e.g., the gig economy or shared economies) but also their claims about freedom and equal opportunity, I was concerned to debunk pretenses so as to be able to place my own ideas within an existing array of possibilities. In that sense, The Quest for Prosperity is, indeed, more critique than manual, an immanent critique that accounts for what is already being practiced so as to point out inevitable weaknesses. My proposal was offered in broad outlines in the hope of enlisting the likes of Riggio to contribute more details that, over time, would fulfill such promises in a process that can only be, in its enormity, collaborative.

The Strength of Values

Riggio closes his review by saying that I

offered communitarian approaches to morality and ethics as solutions to those challenges of injustice. I think his direction is very promising. But The Quest for Prosperity offers only a sign. If his next book is to fulfill the promise of this one, he must explore the possibilities opened up by the following questions. Can communitarian values overcome the allure of greed? What kind of social, political, and economic structures would we need to achieve that utopian goal?

To be clear, my approach is as much Communitarian as it is Institutionalist, Marxist and heterodox, Popperian and postmodern; I prefer the more traditional terms socialism and communism as alternatives to capitalism in general and to my previous, more sanguine appeal to the notion of “postcapitalism.”

Still, Riggio homes in on an important point: since I insist on theorizing in moral and social (rather than monetary) terms, and since my concern is with views of human nature and the conditions under which we can foster a community of people who exchange goods and services, it stands to reason that the book be assessed in an ethical framework as well, concerned to some degree with how best to foster personal integrity, mutual empathy, and care. The book is as much concerned with debunking the moral pretenses of capitalism (from individual freedom and equal opportunity to happiness and prosperity, understood here in its moral and not financial sense) as with the moral underpinnings (and the educational and social institutions that foster them) of political economy.

In this sense, my book strives to be in line with Adam Smith’s (or even Marx’s) moral philosophy as much as with his political economy. The ongoing slippage from the moral to the political and economic is unavoidable: in such a register the very heart of my argument contends that financial strategies have to consider human costs and that economic policies affect humans as moral agents. But, to remedy social injustice we must deal with political economy, and therefore my book moves from the moral to the economic, from the social to the political.

Questions of Desire

I will respond to Riggio’s two concluding questions directly. The first deals with overcoming the allure of greed: in my view, this allure, as real and pressing as it is, remains socially conditioned, though perhaps linked to unconscious desires in the Freudian sense. Within the capitalist context, there is something more psychologically and morally complex at work that should be exposed (Smith and Marx, in their different analyses, appreciate this dimension of market exchanges and the framing of human needs and wants; later critics, as diverse as Herbert Marcuse and Karl Polanyi, continue along this path).

Wanting more of something—Father’s approval? Mother’s nourishment?—is different from wanting more material possessions or money (even though, in a good capitalist modality, the one seeps into the other or the one is offered as a substitute for the other). I would venture to say that a child’s desire for candy, for example (candy being an object of desire that is dispensed or withheld by parents), can be quickly satiated when enough is available—hence my long discussion in the book about (the fictions of) scarcity and (the realities of) abundance; the candy can stand for love in general or for food that satisfies hunger, although it is, in fact, neither; and of course the candy can be substituted by other objects of desire that can or cannot be satisfied. (Candy, of course, doesn’t have the socially symbolic value that luxury items, such as the iPhone, do for those already socialized.)

Only within a capitalist framework might one accumulate candy not merely to satisfy a sweet tooth or wish for a treat but also as a means to leverage later exchanges with others. This, I suggest, is learned behavior, not “natural” in the classical capitalist sense of the term. The reason for this lengthy explanation is that Riggio is spot on to ask about the allure of greed (given his mention of demand-side markets), because for many defenders of the faith, capitalism is nothing but a large-scale apparatus that satisfies natural human appetites (even though some of them are manufactured).

My arguments in the book are meant not only to undermine such claims but to differentiate between human activities, such as exchange and division of labor (historically found in families and tribes), and competition, greed, accumulation, and concentration of wealth that are specific to capitalism (and the social contract within which it finds psychological and legal protection). One can see, then, why I believe the allure of greed can be overcome through social conditioning and the reframing of human exchanges that satisfy needs and question wants.

Riggio’s concern over abuse of power, regardless of all the corrective structures proposed in the book, deserves one more response. Indeed, laws without enforcement are toothless. But, as I argue throughout the book, policies that attempt to deal with important social issues must deal with the economic features of any structure. What makes the Institutionalist approach to political economy informative is not only the recognition that economic ideals take on different hues when implemented in different institutional contexts, but that economic activity and behavior are culturally conditioned.

Instead of worrying here about a sequel, I’d like to suggest that there is already excellent work being done in the areas of human and civil rights (e.g., Michelle Alexander’s The New Jim Crow (2010) and Matthew Desmond’s Evicted (2016) chronicle the problems of capitalism in different sectors of the economy), so that my own effort is an attempt to establish a set of (moral) values against which existing proposals can be assessed and upon which (economic) policy reform should be built. Highlighting the moral foundation of any economic system isn’t a substitute for paying close attention to the economic system that surrounds and perhaps undermines it; rather, economic realities test the limits of the applicability of, and commitment to, such a foundation.

Contact details: rsassowe@uccs.edu

References

Riggio, Adam. “The True Shape of a Society of Friends.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 40-45.

Sassower, Raphael. The Quest for Prosperity. London, UK: Rowman & Littlefield, 2017.

[1] Special thanks to Dr. Denise Davis for her critical suggestions.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “Post-Truths and Inconvenient Facts.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 47-60.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40g

Can one truly refuse to believe facts?
Image by Oxfam International via Flickr / Creative Commons

 

If nothing else, Steve Fuller has his finger on the pulse of popular culture and the academics who engage in its twists and turns. Riding the wave that started with Brexit and continued into the Trump-era abyss, “post-truth” was dubbed word of the year for 2016 by the OED. Fuller has mustered his collected publications to recast the debate over post-truth and frame it within STS in general and his own contributions to social epistemology in particular.

This could have been a public mea culpa of sorts: we, the community of sociologists (and some straggling philosophers and anthropologists and perhaps some poststructuralists), may seem, to someone who isn’t reading our critiques carefully, to be partially responsible for legitimating the dismissal of empirical data, evidence-based statements, and the means by which scientific claims can be deemed not only credible but true. Instead, we are dazzled by a range of topics (historically anchored) that explain how we got to Brexit and Trump—yet Fuller’s analyses of them don’t ring alarm bells. There is almost a hidden glee that indeed the privileged scientific establishment, insular scientific discourse, and some of its experts who pontificate authoritative consensus claims are all bound to be undone by the rebellion of mavericks and iconoclasts that include intelligent design promoters and neoliberal freedom fighters.

In what follows, I do not intend to summarize the book, as it is short and entertaining enough for anyone to read on their own. Instead, I wish to outline three interrelated points that one might argue need not be argued but, apparently, do: 1) certain critiques of science have contributed to the Trumpist mindset; 2) the politics of Trumpism is too dangerous to be sanguine about; 3) the post-truth condition is troublesome and insidious. Though Fuller deals with some of these issues, I hope to add some constructive clarification to them.

Part One: Critiques of Science

As Theodor Adorno reminds us, critique is essential not only for philosophy, but also for democracy. He is aware that the “critic becomes a divisive influence, with a totalitarian phrase, a subversive” (1998/1963, 283) insofar as the status quo is being challenged and sacred political institutions might have to change. The price of critique, then, can be high, and therefore critique should be managed carefully and only cautiously deployed. Should we refrain from critique, then? Not at all, continues Adorno.

But if you think that a broad, useful distinction can be offered among different critiques, think again: “[In] the division between responsible critique, namely, that practiced by those who bear public responsibility, and irresponsible critique, namely, that practiced by those who cannot be held accountable for the consequences, critique is already neutralized.” (Ibid., 285) Adorno’s worry is not only that one forgets that “the truth content of critique alone should be that authority [that decides if it’s responsible],” but that when such a criterion is “unilaterally invoked,” critique itself can lose its power and be at the service “of those who oppose the critical spirit of a democratic society.” (Ibid.)

In a political setting, the charge of irresponsible critique shuts the conversation down and ensures political hegemony without disruptions. Modifying Adorno’s distinction between (politically) responsible and irresponsible critiques, responsible scientific critiques are constructive insofar as they attempt to improve methods of inquiry, data collection and analysis, and contribute to the accumulated knowledge of a community; irresponsible scientific critiques are those whose goal is to undermine the very quest for objective knowledge and the means by which such knowledge can be ascertained. Questions about the legitimacy of scientific authority are related to but not of exclusive importance for these critiques.

Have those of us committed to the critique of science missed the mark of the distinction between responsible and irresponsible critiques? Have we become so subversive and perhaps self-righteous that science itself has been threatened? Though Fuller is primarily concerned with the hegemony of the sociology of science studies and the movement he has championed under the banner of “social epistemology” since the 1980s, he does acknowledge the Popperians and their critique of scientific progress and even admires the Popperian contribution to the scientific enterprise.

But he is reluctant to recognize the contributions of Marxists, poststructuralists, and postmodernists who have been critically engaging the power of science since the 19th century. Among them, we find Jean-François Lyotard who, in The Postmodern Condition (1984/1979), follows Marxists and neo-Marxists who have regularly lumped science and scientific discourse with capitalism and power. This critical trajectory has been well rehearsed, so suffice it here to say that SSK, SE, and the Edinburgh “Strong Programme” are part of a long and rich critical tradition (whose origins are Marxist). Adorno’s Frankfurt School is part of this tradition, and as we think about science, which had come to dominate Western culture by the 20th century (in the place of religion, whose power had by then waned as the arbiter of truth), it was its privileged power and interlocking financial benefits that drew the ire of critics.

Were these critics “responsible” in Adorno’s political sense? Can they be held accountable for offering (scientific and not political) critiques that improve the scientific process of adjudication between criteria of empirical validity and logical consistency? Not always. Did they realize that their success could throw the baby out with the bathwater? Not always. While Fuller grants Karl Popper the upper hand (as compared to Thomas Kuhn) when indirectly addressing such questions, we must keep an eye on Fuller’s “baby.” It’s easy to overlook the slippage from the political to the scientific and vice versa: Popper’s claim that we never know the Truth doesn’t mean that his (and our) quest for discovering the Truth as such is given up; it is only made more difficult, as whatever is scientifically apprehended as truth remains putative.

Limits to Skepticism

What is precious about the baby—science in general, and scientific discourse and its community in more particular ways—is that it offered safeguards against frivolous skepticism. Robert Merton (1973/1942) famously outlined the four features of the scientific ethos, principles that characterized the ideal workings of the scientific community: universalism, communism (communalism, as per the Cold War terror), disinterestedness, and organized skepticism. It is the last principle that is relevant here, since it unequivocally demands an institutionalized mindset of putative acceptance of any hypothesis or theory that is articulated by any community member.

One detects the slippery slope that would move one from being on guard when engaged with any proposal to being so skeptical as to never accept any proposal, no matter how well documented or empirically supported. Al Gore, in his An Inconvenient Truth (2006), sounded the alarm about climate change. A dozen years later we are still plagued by climate-change deniers who refuse to look at the evidence, suggesting instead that the standards of science themselves—from the collection of data at the North Pole to computer simulations—have not been sufficiently met (“questions remain”) to warrant accepting human responsibility for the increase in the earth’s temperature. Incidentally, here is Fuller’s explanation of his own apparent doubt about climate change:

Consider someone like myself who was born in the midst of the Cold War. In my lifetime, scientific predictions surrounding global climate change has [sic.] veered from a deep frozen to an overheated version of the apocalypse, based on a combination of improved data, models and, not least, a geopolitical paradigm shift that has come to downplay the likelihood of a total nuclear war. Why, then, should I not expect a significant, if not comparable, alteration of collective scientific judgement in the rest of my lifetime? (86)

Expecting changes in the model does not entail a) that no improved model can be offered; b) that methodological changes in themselves are a bad thing (they might be, rather, improvements); or c) that one should not take action at all based on the current model because in the future the model might change.

The Royal Society of London (1660) set the benchmark of scientific credibility low when it accepted as scientific evidence any report by two independent witnesses. As the years went by, testability (“confirmation,” for the Vienna Circle, “falsification,” for Popper) and repeatability were added as requirements for a report to be considered scientific, and by now, various other conditions have been proposed. Skepticism, organized or personal, remains at the very heart of the scientific march towards certainty (or at least high probability), but when used perniciously, it has derailed reasonable attempts to use science as a means by which to protect, for example, public health.

Both Michael Bowker (2003) and Robert Proctor (1995) chronicle cases where asbestos and cigarette lobbyists and lawyers alike were able to sow enough doubt in the name of attenuated scientific data collection to ward off regulators, legislators, and the courts for decades. Rather than allow sufficient empirical evidence to link asbestos and nicotine to the failing health (and death) of workers and consumers, they weaponized “organized skepticism” to fight the sick and protect the interests of large corporations and their insurers.

Instead of buttressing scientific claims (that have passed the tests—in refereed professional conferences and publications, for example—of most institutional scientific skeptics), organized skepticism has been manipulated to ensure that no claim is ever scientific enough or has the legitimacy of the scientific community. In other words, what should have remained the reasonable cautionary tale of a disinterested and communal activity (that could then be deemed universally credible) has turned into a circus of fire-blowing clowns ready to burn down the tent. The public remains confused, not realizing that just because the stakes have risen over the decades does not mean there are no standards that ever can be met. Despite lobbyists’ and lawyers’ best efforts at derailment, courts have eventually found cigarette companies and asbestos manufacturers guilty of exposing workers and consumers to deadly hazards.

Limits to Belief

If we add to this logic of doubt, which has been responsible for discrediting science and the conditions for proposing credible claims, a bit of U.S. cultural history, we may enjoy a more comprehensive picture of the unintended consequences of certain critiques of science. Citing Kurt Andersen (2017), Robert Darnton suggests that the Enlightenment’s “rational individualism interacted with the older Puritan faith in the individual’s inner knowledge of the ways of Providence, and the result was a peculiarly American conviction about everyone’s unmediated access to reality, whether in the natural world or the spiritual world. If we believe it, it must be true.” (2018, 68)

This way of thinking—unmediated experiences and beliefs, unconfirmed observations, and disregard of others’ experiences and beliefs—continues what Richard Hofstadter (1962) dubbed “anti-intellectualism.” For Americans, this predates the republic and is characterized by a hostility towards the life of the mind (admittedly, at the time, religious texts), critical thinking (self-reflection and the rules of logic), and even literacy. The heart (our emotions) can more honestly lead us to the Promised Land, whether it is heaven on earth in the Americas or the Christian afterlife; any textual interference or reflective pondering is necessarily an impediment, one to be suspicious of and avoided.

This lethal combination of the life of the heart and righteous individualism brings about general ignorance and what psychologists call “confirmation bias” (the view that we endorse what we already believe to be true regardless of countervailing evidence). The critique of science, along this trajectory, can be but one of many so-called critiques of anything said or proven by anyone whose ideology we do not endorse. But is this even critique?

Adorno would find this a charade, a pretense that poses as a critique but in reality is a simple dismissal without intellectual engagement, a dogmatic refusal to listen and observe. He definitely would be horrified by Stephen Colbert’s oft-quoted quip on “truthiness” as “the conviction that what you feel to be true must be true.” Even those who resurrect Daniel Patrick Moynihan’s phrase, “You are entitled to your own opinion, but not to your own facts,” quietly admit that his admonishment is ignored by media more popular than informed.

On Responsible Critique

But surely there is merit to responsible critiques of science. Weren’t many of these critiques meant to dethrone the unparalleled authority claimed in the name of science, as Fuller admits all along? Wasn’t Lyotard (and Marx before him), for example, correct in pointing out the conflation of power and money in the scientific vortex that could legitimate whatever profit-maximizers desire? In other words, should scientific discourse be put on par with other discourses?  Whose credibility ought to be challenged, and whose truth claims deserve scrutiny? Can we privilege or distinguish science if it is true, as Monya Baker has reported, that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments” (2016, 1)?

Fuller remains silent about these important and responsible questions about the problematics (methodologically and financially) of reproducing scientific experiments. Baker’s report cites Nature‘s survey of 1,576 researchers and reveals “sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Ibid.) So, if science relies on reproducibility as a cornerstone of its legitimacy (and superiority over other discourses), and if the results are so dismal, should it not be discredited?

One answer, given by Hans E. Plesser, suggests that there is a confusion between the notions of repeatability (“same team, same experimental setup”), replicability (“different team, same experimental setup”), and reproducibility (“different team, different experimental setup”). If understood in these terms, it stands to reason that one may not get the same results all the time and that this fact alone does not discredit the scientific enterprise as a whole. Nuanced distinctions take us down a scientific rabbit-hole most post-truth advocates refuse to follow. These nuances are lost on a public that demands to know the “bottom line” in brief sound bites: Is science scientific enough, or is it bunk? When can we trust it?

Trump excels at this kind of rhetorical device: repeat a falsehood often enough and people will believe it; and because individual critical faculties are not a prerequisite for citizenship, post-truth means no truth, or that whatever the president says is true. Adorno’s distinction between responsible and irresponsible political critics comes into play here; but he innocently failed to anticipate the Trumpian move to conflate the political and the scientific and to pretend that there is no distinction, methodologically and institutionally, between political and scientific discourses.

With this cultural backdrop, many critiques of science have undermined its authority and thereby lent credence to any dismissal of science (legitimately by insiders and perhaps illegitimately at times by outsiders). Sociologists and postmodernists alike forgot to put warning signs on their academic and intellectual texts: Beware of hasty generalizations! Watch out for wolves in sheep’s clothing! Don’t throw the baby out with the bathwater!

One would think such advisories unnecessary. Yet without such safeguards, internal disputes and critical investigations appear to have unintentionally discredited the entire scientific enterprise in the eyes of post-truth promoters, the Trumpists whose neoliberal spectacles filter in dollar signs and filter out pollution on the horizon. The discrediting of science has become a welcome distraction that opens the way to radical free-market mentality, spanning from the exploitation of free speech to resource extraction to the debasement of political institutions, from courts of law to unfettered globalization. In this sense, internal (responsible) critiques of the scientific community and its internal politics, for example, unfortunately license external (irresponsible) critiques of science, the kind that obscure the original intent of responsible critiques. Post-truth claims at the behest of corporate interests sanction a free-for-all where the concentrated power of the few silences the concerns of the many.

Indigenous-allied protestors block the entrance to an oil facility related to the Kinder-Morgan oil pipeline in Alberta.
Image by Peg Hunter via Flickr / Creative Commons

Part Two: The Politics of Post-Truth

Fuller begins his book about the post-truth condition that permeates the British and American landscapes with a look at our ancient Greek predecessors. According to him, “Philosophers claim to be seekers of the truth but the matter is not quite so straightforward. Another way to see philosophers is as the ultimate experts in a post-truth world” (19). This means that those historically entrusted to be the guardians of truth in fact “see ‘truth’ for what it is: the name of a brand ever in need of a product which everyone is compelled to buy. This helps to explain why philosophers are most confident appealing to ‘The Truth’ when they are trying to persuade non-philosophers, be they in courtrooms or classrooms.” (Ibid.)

Instead of being the seekers of the truth, thinkers who care not about what but how we think, philosophers are ridiculed by Fuller (himself a philosopher turned sociologist turned popularizer and public relations expert) as marketing hacks in a public relations company that promotes brands. Their serious dedication to finding the criteria by which truth is ascertained is used against them: “[I]t is not simply that philosophers disagree on which propositions are ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’.” (Ibid.)

Some would argue that the criteria by which propositions are judged to be true or false are worthy of debate, rather than the cavalier dismissal of Trumpists. With criteria in place (even if only by convention), at least we know what we are arguing about, as these criteria (even if contested) offer a starting point for critical scrutiny. And this, I maintain, is a task worth performing, especially in the age of pluralism when multiple perspectives constitute our public stage.

In addition to debasing philosophers, it seems that Fuller reserves a special place in purgatory for Socrates (and Plato) for their negative labeling of the rhetorical expertise of the sophists, “the local post-truth merchants in fourth century BC Athens” (21). It becomes obvious that Fuller is “on their side” and that the presumed debate over truth and its practices is in fact nothing but “whether its access should be free or restricted.” (Ibid.) In this neoliberal reading, it is all about money: are sophists evil because they charge for their expertise? Is Socrates a martyr and saint because he refused payment for his teaching?

Fuller admits, “Indeed, I would have us see both Plato and the Sophists as post-truth merchants, concerned more with the mix of chance and skill in the construction of truth than with the truth as such.” (Ibid.) One wonders not only whether Plato receives fair treatment (reminiscent of Popper’s denigration of Plato as supporting totalitarian regimes, while sparing Socrates as a promoter of democracy), but also whether calling all parties to a dispute “post-truth merchants” obliterates relevant differences. In other words, have we indeed lost the desire to find the truth, even if it can never be the whole truth and nothing but the truth?

Political Indifference to Truth

One wonders how far this goes: political discourse without any claim to truth conditions would become nothing but a marketing campaign where money and power dictate the acceptance of the message. Perhaps the intended message here is that contemporary cynicism towards political discourse has its roots in ancient Greece. Regardless, one should worry that such cynicism indirectly sanctions fascism.

Can the poor and marginalized in our society afford this kind of cynicism? For them, unlike their privileged counterparts in the political arena, claims about discrimination and exploitation, about unfair treatment and barriers to voting, are true and evidence-based; they are not rhetorical flourishes by clever interlocutors.

Yet Fuller would have none of this. For him, political disputes are games:

[B]oth the Sophists and Plato saw politics as a game, which is to say, a field of play involving some measure of both chance and skill. However, the Sophists saw politics primarily as a game of chance whereas Plato saw it as a game of skill. Thus, the sophistically trained client deploys skill in [the] aid of maximizing chance occurrences, which may then be converted into opportunities, while the philosopher-king uses much the same skills to minimize or counteract the workings of chance. (23)

Fuller could be channeling here twentieth-century game theory and its application in the political arena, or the notion offered by Lyotard when describing the minimal contribution we can make to scientific knowledge (where we cannot change the rules of the game but perhaps find a novel “move” to make). Indeed, if politics is deemed a game of chance, then anything goes, and it really should not matter if an incompetent candidate like Trump ends up winning the American presidency.

But is it really a question of skill and chance? Or, as some political philosophers would argue, is it not a question of the best means by which to bring to fruition the best results for the general wellbeing of a community? The point of suggesting the figure of a philosopher-king, to be sure, was not his rhetorical skill in this connection, but rather his deep commitment to rule justly, to think critically about policies, and to treat constituents with respect and fairness. Plato’s Republic, however criticized, was supposed to be about justice, not about expediency; it is an exploration of the rule of law and wisdom, not a manual about manipulation. If the recent presidential election in the US taught us anything, it’s that we should be wary of political gamesmanship and focus on experience and knowledge, vision and wisdom.

Out-Gaming Expertise Itself

Fuller would have none of this, either. It seems that there is virtue in being a “post-truther,” someone who can easily switch between knowledge games, unlike the “truther” whose aim is to “strengthen the distinction by making it harder to switch between knowledge games.” (34) In the post-truth realm, then, knowledge claims are lumped into games that can be played at will, that can be substituted when convenient, without a hint of the danger such capricious game-switching might engender.

It’s one thing to challenge a scientific hypothesis about astronomy because the evidence is still unclear (as Stephen Hawking has done in regard to black holes) and quite another to compare it to astrology (and give equal hearing to horoscope and Tarot card readers as to physicists). Though we are far from the Demarcation Problem (between science and pseudo-science) of the last century, this does not mean that there is no difference at all between different discourses and their empirical bases (or that the problem itself isn’t worthy of reconsideration in the age of Fuller and Trump).

On the contrary, it’s because we assume difference between discourses (gray as they may be) that we can move on to figure out on what basis our claims can and should rest. The danger, as we see in the political logic of the Trump administration, is that friends become foes (European Union) and foes are admired (North Korea and Russia). Game-switching in this context can lead to a nuclear war.

In Fuller’s hands, though, something else is at work. Speaking of contemporary political circumstances in the UK and the US, he says: “After all, the people who tend to be demonized as ‘post-truth’ – from Brexiteers to Trumpists – have largely managed to outflank the experts at their own game, even if they have yet to succeed in dominating the entire field of play.” (39) Fuller’s celebratory tone here may be read either as a slight warning, signaled by the “yet” before the success “in dominating the entire field of play,” or as a prediction that this is precisely what is about to happen soon enough.

The neoliberal bottom line surfaces in this assessment: he who wins must be right, the rich must be smart, and, more perniciously, the appeal to truth is beside the point. More specifically, Fuller continues:

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins. They are talking about worlds that could have been and still could be—the stuff of modal power. (Ibid.)

By now one should be terrified. This is a strong endorsement of lying as a matter of course, a way to distract from the details (and empirical bases) of one “knowledge game” that may not be to one’s ideological liking in favor of another that might be deemed more suitable (for financial or other purposes).

The political stakes here are too high to ignore, especially because there are good reasons why “certain parties are advantaged over others” (say, climate scientists “relative to” climate deniers who have no scientific background or expertise). One wonders what it means to talk about “alternative facts” and “alternative science” in this context: is it a means of obfuscation? Is it yet another license granted by the “social constructivist position” not to acknowledge the legal liability of cigarette companies for the addictive power of nicotine? Or the pollution of water sources in Flint, Michigan?

What Is the Mark of an Open Society?

If we corral the broader political logic at hand to the governance of the scientific community, as Fuller wishes us to do, then we hear the following:

In the past, under the inspiration of Karl Popper, I have argued that fundamental to the governance of science as an ‘open society’ is the right to be wrong (Fuller 2000a: chap. 1). This is an extension of the classical republican ideal that one is truly free to speak their mind only if they can speak with impunity. In the Athenian and the Roman republics, this was made possible by the speakers–that is, the citizens–possessing independent means which allowed them to continue with their private lives even if they are voted down in a public meeting. The underlying intuition of this social arrangement, which is the epistemological basis of Mill’s On Liberty, is that people who are free to speak their minds as individuals are most likely to reach the truth collectively. The entangled histories of politics, economics and knowledge reveal the difficulties in trying to implement this ideal. Nevertheless, in a post-truth world, this general line of thought is not merely endorsed but intensified. (109)

To be clear, Fuller not only asks for the “right to be wrong,” but also for the legitimacy of the claim that “people who are free to speak their minds as individuals are most likely to reach the truth collectively.” The first plea is reasonable enough, as humans are fallible (yes, Popper here), and the history of ideas has proven that killing heretics is counterproductive (and immoral). If the Brexit/Trump post-truth age would only usher in a greater encouragement of speculation or conjectures (Popper again), then Fuller’s book would be well placed in the pantheon of intellectual pluralism; but if this endorsement obliterates the distinction between silly and informed conjectures, then we are in trouble and the ensuing cacophony will turn us all deaf.

The second claim is at best supported by the likes of James Surowiecki (2004) who has argued that no matter how uninformed a crowd of people is, collectively it can guess the correct weight of a cow on stage (his TED talk). As folk wisdom, this is charming; as public policy, this is dangerous. Would you like a random group of people deciding how to store nuclear waste, and where? Would you subject yourself to the judgment of just any collection of people to decide on taking out your appendix or performing triple-bypass surgery?

When we turn to Trump, his supporters certainly like that he speaks his mind, just as Fuller says individuals should be granted the right to speak their minds (even if in error). But speaking one’s mind can also be a proxy for saying whatever, without filters, without critical thinking, or without thinking at all (let alone consulting experts whose very existence seems to upset Fuller). Since when did “speaking your mind” turn into scientific discourse? It’s one thing to encourage dissent and offer reasoned doubt and explore second opinions (as health care professionals and insurers expect), but it’s quite another to share your feelings and demand that they count as scientific authority.

Finally, even if we endorse the view that we “collectively” reach the truth, should we not ask: by what criteria? according to what procedure? under what guidelines? Herd mentality, as Nietzsche already warned us, is problematic at best and immoral at worst. Trump rallies hark back to the fascist ones we recall from Europe prior to and during WWII. Few today would trust the collective judgment of those enthusiasts of the Thirties to carry the day.

Unlike Fuller’s sanguine posture, I shudder at the possibility that “in a post-truth world, this general line of thought is not merely endorsed but intensified.” This is neither because I worship experts and scorn folk knowledge nor because I have low regard for individuals and their (potentially informative) opinions. Just as we warn our students that simply having an opinion is not enough, that they need to substantiate it, offer data or logical evidence for it, and even know its origins and who promoted it before they made it their own, so I worry about uninformed (even if well-meaning) individuals (and presidents) whose gut will dictate public policy.

This way of unreasonably empowering individuals is dangerous for their own well-being (no paternalism here, just common sense) as well as for the community at large (too many untrained cooks will definitely spoil the broth). For those who doubt my concern, Trump offers ample evidence: trade wars with allies and foes that cost domestic jobs (while promising to bring jobs home), nuclear-war threats that resemble a game of chicken (as if no president before him ever faced such an option), and public policy procedures thrown into complete disarray, from immigration regulations to the relaxation of emission controls (ignoring the history of these policies and their failures).

Drought and suffering in Arbajahan, Kenya in 2006.
Photo by Brendan Cox and Oxfam International via Flickr / Creative Commons

Part Three: Post-Truth Revisited

There is something appealing, even seductive, in the provocation to doubt the truth as rendered by the (scientific) establishment, even as we worry about sowing the seeds of falsehood in the political domain. The history of science is the story of authoritative theories debunked, cherished ideas proven wrong, and claims of certainty falsified. Why not, then, jump on the “post-truth” wagon? Would we not unleash the collective imagination to improve our knowledge and the future of humanity?

One of the lessons of postmodernism (at least as told by Lyotard) is that “post-“ does not mean “after,” but rather, “concurrently,” as another way of thinking all along: just because something is labeled “post-“, as in the case of postsecularism, it doesn’t mean that one way of thinking or practicing has replaced another; it has only displaced it, and both alternatives are still there in broad daylight. Under the rubric of postsecularism, for example, we find religious practices thriving (80% of Americans believe in God, according to a 2018 Pew Research survey), while the number of unaffiliated, atheists, and agnostics is on the rise. Religionists and secularists live side by side, as they always have, more or less agonistically.

In the case of “post-truth,” it seems that one must choose between one orientation and the other, at least according to Fuller, who claims to prefer the “post-truth world” to the allegedly hierarchical and submissive world of “truth,” where the dominant establishment shoves its truths down the throats of ignorant and repressed individuals. If post-truth meant, like postsecularism, the realization that truth and provisional or putative truth coexist and are continuously being re-examined, then no conflict would be at play. If Trump’s claims were juxtaposed to those of experts in their respective domains, we would have a lively, and hopefully intelligent, debate. False claims would be debunked, reasonable doubts could be raised, and legitimate concerns might be addressed. But Trump doesn’t consult anyone except his (post-truth) gut, and that is troublesome.

A Problematic Science and Technology Studies

Fuller admits that “STS can be fairly credited with having both routinized in its own research practice and set loose on the general public–if not outright invented—at least four common post-truth tropes”:

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties. (43)

In that sense, then, Fuller agrees that the positive lessons STS wished for the practice of the scientific community may have inadvertently found their way into a post-truth world that may abuse or exploit them in unintended ways. That is, something like “consensus” is challenged by STS because of how the scientific community pretends to reach it, knowing as it does that no such thing can ever fully be reached, and that when it is reached it may have been reached for the wrong reasons (leadership pressure, pharmaceutical funding of conferences and journals). But this critique can also go too far.

Just because consensus is difficult to reach (it does not require unanimity) and is susceptible to corruption or bias doesn’t mean that anything goes. Some experimental results are more acceptable than others and some data are more informative than others, and the struggle for agreement may take its political toll on the scientific community, but this need not result in silly ideas about cigarettes being good for our health or obesity being something to encourage from early childhood.

It seems important to focus on Fuller’s conclusion because it encapsulates my concern with his version of post-truth, a condition he endorses not only in the epistemological plight of humanity but as an elixir with which to cure humanity’s ills:

While some have decried recent post-truth campaigns that resulted in victory for Brexit and Trump as ‘anti-intellectual’ populism, they are better seen as the growth pains of a maturing democratic intelligence, to which the experts will need to adjust over time. Emphasis in this book has been given to the prospect that the lines of intellectual descent that have characterised disciplinary knowledge formation in the academy might come to be seen as the last stand of a political economy based on rent-seeking. (130)

Here, we are not only afforded a moralizing sermon about (and, it must be said, from) the privileged academic position, from whose heights all other positions are dismissed as anti-intellectual populism, but we are also entreated to consider the rantings of the know-nothings of the post-truth world as the “growth pains of a maturing democratic intelligence.” Only an apologist would characterize the Trump administration as mature, democratic, or intelligent. Where’s the evidence? What would possibly warrant such generosity?

It’s one thing to challenge “disciplinary knowledge formation” within the academy, and there are no doubt cases deserving reconsideration as to the conditions under which experts should be paid and by whom (“rent-seeking”); but how can these questions about higher education and the troubled relations between the university system and the state (and with the military-industrial complex) give cover to the Trump administration? Here is Fuller’s justification:

One need not pronounce on the specific fates of, say, Brexit or Trump to see that the post-truth condition is here to stay. The post-truth disrespect for established authority is ultimately offset by its conceptual openness to previously ignored people and their ideas. They are encouraged to come to the fore and prove themselves on this expanded field of play. (Ibid.)

This, too, is a logical stretch: is disrespect for the authority of the establishment the same as, or does it logically lead to, the “conceptual” openness to previously “ignored people and their ideas”? This is not a claim on behalf of the disenfranchised. Perhaps their ideas were simply bad or outright racist or misogynist (as we see with Trump). Perhaps they were ignored because there was hope that they would change for the better, become more enlightened, not act on their white supremacist prejudices. Should we have “encouraged” explicit anti-Semitism while we were at it?

Limits to Tolerance

We tolerate ignorance because we believe in education and hope to overcome some of it; we tolerate falsehood in the name of eventual correction. But we should never tolerate offensive ideas and beliefs that are harmful to others. Once again, it is one thing to argue about black holes, and quite another to argue about whether black lives matter. It seems reasonable, as Fuller concludes, to say that “In a post-truth utopia, both truth and error are democratised.” It is also reasonable to say that “You will neither be allowed to rest on your laurels nor rest in peace. You will always be forced to have another chance.”

But the conclusion that “Perhaps this is why some people still prefer to play the game of truth, no matter who sets the rules” (130) does not follow. Those who “play the game of truth” are always vigilant about falsehoods and post-truth claims, and to say that they are simply dupes of those in power is both incorrect and dismissive. On the contrary: Socrates was searching for the truth and fought with the sophists, as Popper fought with the logical positivists and the Kuhnians, and as scientists today are searching for the truth and continue to fight superstitions and debunked pseudoscience about vaccination causing autism in young kids.

If post-truth is like postsecularism, scientific and political discourses can inform each other. When power-plays by ignoramus leaders like Trump are obvious, they could shed light on less obvious cases of big pharma leaders or those in charge of the EPA today. In these contexts, inconvenient facts and truths should prevail and the gamesmanship of post-truthers should be exposed for what motivates it.

Contact details: rsassowe@uccs.edu

* Special thanks to Dr. Denise Davis of Brown University, whose contribution to my critical thinking about this topic has been profound.

References

Theodor W. Adorno (1998/1963), Critical Models: Interventions and Catchwords. Translated by Henry W. Pickford. New York: Columbia University Press.

Kurt Andersen (2017), Fantasyland: How America Went Haywire: A 500-Year History. New York: Random House.

Monya Baker (2016), “1,500 scientists lift the lid on reproducibility,” Nature Vol. 533, Issue 7604, 5/26/16 (corrected 7/28/16).

Michael Bowker (2003), Fatal Deception: The Untold Story of Asbestos. New York: Rodale.

Robert Darnton (2018), “The Greatest Show on Earth,” New York Review of Books Vol. LXV, No. 11, 6/28/18, pp. 68-72.

Al Gore (2006), An Inconvenient Truth: The Planetary Emergency of Global Warming and What Can Be Done About It. New York: Rodale.

Richard Hofstadter (1962), Anti-Intellectualism in American Life. New York: Vintage Books.

Jean-François Lyotard (1984), The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press.

Robert K. Merton (1973/1942), “The Normative Structure of Science,” The Sociology of Science: Theoretical and Empirical Investigations. Chicago and London: The University of Chicago Press, pp. 267-278.

Hans E. Plesser, “Reproducibility vs. Replicability: A Brief History of Confused Terminology,” Frontiers in Neuroinformatics, 2017; 11: 76; online: 1/18/18.

Robert N. Proctor (1995), Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer. New York: Basic Books.

James Surowiecki (2004), The Wisdom of Crowds. New York: Anchor Books.