Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “On Political Culpability: The Unconscious?” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 26-29.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-45p

Image by Morning Calm Weekly Newspaper, U.S. Army via Flickr / Creative Commons

 

In the post-truth age where Trump’s presidency looms large because of its irresponsible conduct, domestically and abroad, it’s refreshing to have another helping in the epistemic buffet of well-meaning philosophical texts. What can academics do? How can they help, if at all?

Anna Elisabetta Galeotti, in her Political Self-Deception (2018), is convinced that her (analytic) philosophical approach to political self-deception (SD) is crucial for three reasons: first, the importance of conceptual clarity about the topic; second, the possibility of attributing responsibility to those engaged in SD; and third, the identification of circumstances that are conducive to SD. (6-7)

For her, “SD is the distortion of reality against the available evidence and according to one’s wishes.” (1) The distortion, according to Galeotti, is motivated by wishful thinking, the kind that licenses someone to ignore facts or distort them in a fashion suitable to one’s (political) needs and interests. The question of “one’s wishes,” be they conscious or not, remains open.

What Is Deception?

Galeotti surveys the different views of deception that “range from the realist position, holding that deception, secrecy, and manipulation are intrinsic to politics, to the ‘dirty hands’ position, justifying certain political lies under well-defined circumstances, to the deontological stance denouncing political deception as a serious pathology of democratic systems.” (2)

But she follows none of these views; instead, her contribution to the philosophical and psychological debates over deception, lies, self-deception, and mistakes is to argue that “political deception might partly be induced unintentionally by SD” and that it is also sometimes “the by-product of government officials’ (honest) mistakes.” (2) The consequences, though, of SD can be monumental since “the deception of the public goes hand in hand with faulty decision,” (3) and those eventually affect the country.

Her three examples are President Kennedy and Cuba (Ch. 4), President Johnson and Vietnam (Ch. 5), and President Bush and Iraq (Ch. 6). In all cases, the devastating consequences of “political deception” (and for Galeotti it is based on SD) were obviously due to “faulty” decision making processes. Why else would presidents end up in untenable political binds? Who would deliberately make mistakes whose political and human price is high?

Why Self-Deception?

So, why SD? What is it about self-deception, especially the unintended kind presented here, that differentiates it from garden-variety deceptions and mistakes? Galeotti’s preference for SD is explained in this way: SD “enables the analyst to account for (a) why the decision was bad, given that it was grounded on self-deceptive, hence false beliefs; (b) why the beliefs were not just false but self-serving, as in the result of the motivated processing of data; and (c) why the people were deceived, as the by-product of the leaders’ SD.” (4)

But how would one know that a “bad” decision is “grounded on self-decepti[on]” rather than on false information given by intelligence agents, for example, who were misled by local informants who in turn were misinformed by others, deliberately or innocently? With this question in mind, a “false belief” can be based on false information, a false interpretation of true information, wishful thinking, an unconscious self-destructive streak, or SD.

In short, one’s SD can be either externally or internally induced, and in each case, there are multiple explanations that could be deployed. Why stick with SD? What is the attraction it holds for analytical purposes?

Different answers are given to these questions at different times. In one case, Galeotti suggests the following:

“Only self-deceptive beliefs are, however, false by definition, being counterevidential [sic], prompted by an emotional reaction to data that contradicts one’s desires. If this is the specific nature of SD . . . then self-deceptive beliefs are distinctly dangerous, for no false belief can ground a wise decision.” (5)

In this answer, Galeotti claims that an “emotional reaction” to “one’s desires” is what characterizes SD and makes it “dangerous.” It is unclear why this is a more dangerous ground for false beliefs than a deliberate deceptive scheme that is self-serving; likewise, how does one truly know one’s true desires? Perhaps the logician is at a loss to counter emotive reaction with cold deduction, or perhaps there is a presumption here that logical and empirical arguments are by definition open to critique while emotions are immune to such strategies, and that therefore analytic philosophy is superior to other methods of analysis.

Defending Your Own Beliefs

If the first argument for seeing SD as an emotional “reaction” that conflicts with “one’s desires” is a form of self-defense, the second argument is more focused on the threat of the evidence one wishes to ignore or subvert. In Galeotti’s words, SD is:

“the unintended outcome of intentional steps of the agent. . . according to my invisible hand model, SD is the emotionally loaded response of a subject confronting threatening evidence relative to some crucial wish that P. . . Unable to counteract the threat, the subject . . . become prey to cognitive biases. . . unintentionally com[ing] to believe that P which is false.” (79; 234ff)

To be clear, the “invisible hand” model invoked here is related to the infamous one associated with Adam Smith and his unregulated markets where order is maintained, fairness upheld, and freedom of choice guaranteed. Just like Smith, Galeotti appeals to individual agents, in her case the political leaders, as if SD happens to them, as if their conduct leads to “unintended outcome.”

But the whole point of SD is to ward off the threat of unwelcome evidence, so that some intention is always afoot. Since agents undertake “intentional steps,” is it unreasonable for them to anticipate the consequences of their conduct? Are they still unconscious of their “cognitive biases” and their management of their reactions?

Galeotti confronts this question head on when she says: “This work is confined to analyzing the working of SD in crucial instances of governmental decision making and to drawing the normative implications related both to responsibility ascription and to devising prophylactic measures.” (14) So, the moral dimension, the question of responsibility does come into play here, unlike the neoliberal argument that pretends to follow Smith’s model of invisible hand but ends with no one being responsible for any exogenous liabilities to the environment, for example.

Moreover, Galeotti’s most intriguing claim is that her approach is intertwined with a strategic hope for “prophylactic measures” to ensure dangerous consequences are not repeated. She believes this could be achieved by paying close attention to “(a) the typical circumstances in which SD may take place; (b) the ability of external observers to identify other people’s SD, a strategy of precommitment [sic] can be devised. Precommitment is a precautionary strategy, aimed at creating constraints to prevent people from falling prey to SD.” (5)

But this strategy, as promising as it sounds, has a weakness: if people could be prevented from “falling prey to SD,” then SD is preventable, or at least it seems to be less of an emotional threat than earlier suggested. In other words, either humans cannot help falling prey to SD or they can; if they cannot, then highlighting SD’s danger is important; if they can, then the ubiquity of SD is no threat at all, since simply pointing out their SD would make them realize how to overcome it.

A Limited Hypothesis

Perhaps one clue to Galeotti’s own self-doubt (or perhaps it is a form of self-deception as well) is in the following statement: “my interpretation is a purely speculative hypothesis, as I will never be in the position to prove that SD was the case.” (82) If this is the case, why bother with SD at all? For Galeotti, the advantage of using SD as the “analytic tool” with which to view political conduct and policy decisions is twofold: allowing “proper attribution of responsibility to self-deceivers” and “the possibility of preventive measures against SD” (234).

In her concluding chapter, she offers a caveat, even a self-critique that undermines the very use of SD as an analytic tool (no self-doubt or self-deception here, after all): “Usually, the circumstances of political decision making, when momentous foreign policy choices are at issue, are blurred and confused both epistemically and motivationally.

Sorting out simple miscalculations from genuine uncertainty, and dishonesty and duplicity from SD is often a difficult task, for, as I have shown when analyzing the cases, all these elements are present and entangled.” (240) So, SD is one of many relevant variables, but being both emotional and in one’s subconscious, it remains opaque at best, and unidentifiable at worst.

In case you are confused about SD and one’s ability to isolate it as an explanatory model with which to approach post-hoc bad political choices with grave consequences, this statement might help clarify the usefulness of SD: “if SD is to play its role as a fundamental explanation, as I contend, it cannot be conceived of as deceiving oneself, but it must be understood as an unintended outcome of mental steps elsewhere directed.” (240)

So, logically speaking, SD (self-deception) is not “deceiving oneself.” So, what is it? What are “mental steps elsewhere directed”? Of course, it is quite true, as Galeotti says, that “if lessons are to be learned from past failures, the question of SD must in any case be raised. . . Political SD is a collective product” (244-5), which is even more difficult to analyze (given its “opacity”), and so how would responsibility be attributed?

Perhaps what is missing from this careful analysis is a cold calculation of who is responsible for what and under what circumstances, regardless of SD or any other kind of subconscious desires. Would a psychoanalyst help usher such an analysis?

Contact details: rsassowe@uccs.edu

References

Galeotti, Anna Elisabetta. Political Self-Deception. Cambridge: Cambridge University Press, 2018.

Author Information: Luca Tateo, Aalborg University & Federal University of Bahia, luca@hum.aau.dk.

Tateo, Luca. “Ethics, Cogenetic Logic, and the Foundation of Meaning.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 1-8.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44i

Mural entitled “Paseo de Humanidad” on the Mexican side of the US border wall in the city of Heroica Nogales, in Sonora. Art by Alberto Morackis, Alfred Quiróz and Guadalupe Serrano.
Image by Jonathan McIntosh, via Flickr / Creative Commons

 

This essay is in reply to: Miika Vähämaa (2018) Challenges to Groups as Epistemic Communities: Liminality of Common Sense and Increasing Variability of Word Meanings, Social Epistemology, 32:3, 164-174, DOI: 10.1080/02691728.2018.1458352

In his interesting essay, Vähämaa (2018) discusses two issues that I find particularly relevant. The first concerns the foundation of meaning in language, which in the era of connectivism (Siemens, 2005) and post-truth (Keyes, 2004) becomes problematic. The second issue is the appreciation of epistemic virtues in a collective context: how can the group enhance the epistemic skills of the individual?

I will try to explain why these problems are relevant and why it is worth developing Vähämaa’s (2018) reflection in the specific direction of group and person as complementary epistemic and ethical agents (Fricker, 2007). First, I will discuss the foundations of meaning in different theories of language. Then, I will discuss the problems related to the stability and liminality of meaning in the society of “popularity”. Finally, I will propose that the range of contemporary epistemic virtues should be integrated with an ethical grounding of meaning and a cogenetic foundation of meaning.

The Foundation of Meaning in Language

The theories about the origins of human language can be grouped into four main categories, based on the elements characterizing ontogenesis and glottogenesis.

Sociogenesis Hypothesis (SH): the idea that language is a conventional product that historically originates from coordinated social activities and is ontogenetically internalized through individual participation in social interactions. The characteristic authors in SH are Wundt, Wittgenstein and Vygotsky (2012).

Praxogenesis Hypothesis (PH): the idea that language historically originates from praxis and coordinated actions. Ontogenetically, language emerges from sensorimotor coordination (e.g. gaze coordination). This is, for instance, the position of Mead, the idea of linguistic primes in Smedslund (Vähämaa, 2018), and Austin’s (1975) theory of language as action.

Phylogenesis Hypothesis (PhH): the idea that humans have been provided by evolution with an innate “language device”, emerging from the evolutionary preference for forming social groups of hunters and for collective long-duration offspring care (Bouchard, 2013). Ontogenetically, the predisposition for language is wired in the brain and develops through maturation in social groups. This position is represented by evolutionary psychology and by innatism such as Chomsky’s linguistics.

Structure Hypothesis (StH): the idea that human language is a more or less logical system, in which the elements are determined by reciprocal systemic relationships, partly conventional and partly ontic (Thao, 2012). This hypothesis is not really concerned with ontogenesis but rather with the formal features of symbolic systems of distinctions. It is, for instance, the classical idea of Saussure and of structuralists like Derrida.

According to Vähämaa (2018), every theory of meaning has to deal today with the problem of a dramatic change in the way common sense knowledge is produced, circulated and modified in collective activities. Meaning needs some stability in order to be of collective utility. Moreover, meaning needs some validation to become stable.

The PhH solves this problem with a simple idea: if humans have survived and evolved, their evolutionary strategy about meaning is successful. In a natural “hostile” environment, our ancestors must have found a way to communicate such that a danger would be understood in the same way by all group members and under different conditions, including when the danger is not actually present, as in bonfire tales or myths.

The PhH becomes problematic when we consider the post-truth era. What would be the evolutionary advantage of deconstructing the environmental foundations of meaning, even in a virtual environment? For instance, what would be the evolutionary advantage of the common sense belief that global warming is not a reality, considering that this false belief could bring mankind to extinction?

StH leads to the view of meaning as a configuration of formal conditions. Thus, stability is guaranteed by structural relations of the linguistic system, rather than by the contribution of groups or individuals as epistemic agents. StH cannot account for the rapidity and liminality of meaning that Vähämaa (2018) attributes to common sense nowadays. SH and PH share the idea that meaning emerges from what people do together, and that stability is both the condition and the product of the fact that we establish contexts of meaningful actions, ways of doing things in a habitual way.

The problem today is that our accelerated Western capitalist societies have multiplied the ways of doing and the number of groups in society, decoupling the habitual from common sense meaning. New habits, new words, personal actions and meanings are built, disseminated and destroyed in a short time. So, if “Our lives, with regard to language and knowledge, are fundamentally bound to social groups” (Vähämaa, 2018, p. 169), what happens to language and to knowledge when social groups multiply, segregate and disappear in a short time?

From Common Sense to the Bubble

The grounding of meaning in the group as epistemic agent has suffered a serious blow in the era of connectivism and post-truth. The idea of connectivism is that knowledge is distributed among the different agents of a collective network (Siemens, 2005). Knowledge does not reside in the “mind” or in a “memory”, but is rather produced in bits and pieces, which the epistemic agent is required to search for and to assemble through the collective effort of the group’s members.

Thus, depending on the configuration of the network, different information will be connected, and different pictures of the world will emerge. The meaning of words will differ if, for instance, the network of information is aggregated by different groups in combination with specific algorithms. The configuration of groups, mediated by social media, as in the case of contemporary politics (Lewandowsky, Ecker & Cook, 2017), leads to the reproduction of “bubbles” of people who share the very same views and are exposed to the very same opinions, selected by an algorithm that will show only content compliant with their previous content preferences.

The result is that the group loses a great deal of its epistemic capability, which Vähämaa (2018) suggests as a foundation of meaning. The meaning of words that will be preferred in this kind of epistemic bubble is the result of two operations of selection that are based on popularity. First, the meaning will be aggregated by consensual agents, rather than dialectic ones. Meaning will always be convergent rather than controversial.

Second, between alternative meanings, the most “popular” will be chosen, rather than the most reliable. The epistemic bubble of connectivism originates from a misunderstanding. The idea is that a collectivity has more epistemic force than the individual alone, to the extent that any belief is scrutinized democratically and that, if every agent can contribute its own bit, the knowledge will be more reliable, because it is the result of constant and massive peer review. Unfortunately, events show us a different picture.

Post-truth is actually a massive action of epistemic injustice (Fricker, 2007), to the extent that the reliability of the other as epistemic agent is based on criteria of similarity, rather than on dialectic. One is reliable as long as one is located within my own bubble. Everything outside is “fake news”. The algorithmic selection of information contributes to reinforcing the polarization. Thus, no hybridization becomes possible, and common sense (Vähämaa, 2018) is reduced to the common bubble. How can the epistemic community still be a source of meaning in the connectivist era?

Meaning and Common Sense

SH and PH about language point to a very important historical source: the philosopher Giambattista Vico (Danesi, 1993; Tateo, 2015). Vico can be considered the scholar of common sense and imagination (Tateo, 2015). Knowledge is built as a product of human experience and crystallized into the language of a given civilization. Civilization is the set of interpretations and solutions that different groups have found to respond to common existential events, such as birth, death, mating, natural phenomena, etc.

According to Vico, all the human beings share a fate of mortal existence and rely on each other to get along. This is the notion of common sense: the profound sense of humanity that we all share and that constitutes the ground for human ethical choices, wisdom and collective living. Humans rely on imagination, before reason, to project themselves into others and into the world, in order to understand them both. Imagination is the first step towards the understanding of the Otherness.

When humans lose contact with this sensus communis, the shared sense of humanity, and start building their meaning on egoism or on pure rationality, civilizations slip into barbarism. Imagination thus gives access to intersubjectivity, the capability of feeling the other, while common sense constitutes the wisdom of developing ethical beliefs that will not harm the other. Vico’s ideas are echoed and made present by critical theory:

“We have no doubt (…) that freedom in society is inseparable from enlightenment thinking. We believe we have perceived with equal clarity, however, that the very concept of that thinking (…) already contains the germ of the regression which is taking place everywhere today. If enlightenment does not [engage in] reflection on this regressive moment, it seals its own fate (…) In the mysterious willingness of the technologically educated masses to fall under the spell of any despotism, in its self-destructive affinity to nationalist paranoia (…) the weakness of contemporary theoretical understanding is evident.” (Horkheimer & Adorno, 2002, xvi)

Common sense is the basis for the wisdom that allows one to question the foundational nature of the bubble. It is the basis for understanding that every meaning is not only defined in a positive way, but is also defined by its complementary opposite (Tateo, 2016).

When one uses the semantic prime “we” (Vähämaa, 2018), one immediately produces a system of meaning that implies the existence of a “non-we”: one is producing otherness. In return, the meaning of “we” can only be clearly defined through the clarification of who is “non-we”. Meaning is always cogenetic (Tateo, 2015). Without the capability to understand that by saying “we” people construct a cogenetic complex of meaning, the group is reduced to a self-confirming, self-reinforcing collective, in which the sense of being a valid epistemic agent is actually faked, because it is nothing but an act of epistemic arrogance.

How can we solve the problem of the epistemic bubble and give the relationship between group and person a real epistemic value? How can we overcome the dangerous overlap between the sense of being functional in the group and false beliefs based on popularity?

Complementarity Between Meaning and Sense

My idea is that we must look in that complex space between “meaning”, understood as a collectively shared complex of socially constructed significations, and “sense”, understood as the very personal elaboration of meaning based on the person’s uniqueness (Vygotsky, 2012; Wertsch, 2000). Meaning and sense feed into each other, like common sense and imagination. Imagination is the psychic function that enables the person to feel into the other, and thus to establish the ethical and affective ground for common sense wisdom. It is the empathic movement for which Kant would later seek a logical foundation.

“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.” (Kant 1993, p. 36. 4:429)

I would further claim that they feed into each other: the logical foundation is made possible by the synthetic power of empathic imagination. On the one hand, the collective is the origin of internalized psychic activities (SH), and thus the basis for the sense elaborated about one’s own unique life experience. On the other hand, personal sense constitutes the basis for the externalization of meaning into the arena of collective activities, constantly innovating the meaning of words.

So, personal sense can be a strong antidote to the prevailing force of the meaning produced, for instance, in the epistemic bubble. My sense of what is “ought”, “empathic”, “human” and “ethic”, in other words my wisdom, can help me develop a critical stance toward meanings that are built in a self-feeding, uncritical way.

Can the dialectic, complementary and cogenetic relationship between sense and meaning become the ground for a better epistemic performance, and for an appreciation of the liminal meaning produced in contemporary societies? In the last section, I will try to provide arguments in favor of this idea.

Ethical Grounding of Meaning

If connectivist and post-truth societies produce meanings based on a popularity check rather than on epistemic appreciation, we risk a situation in which any belief is the contingent result of a collective epistemic agent that replicates its patterns into bubbles. One will just listen to messages that confirm one’s own preferences and beliefs and reject different ones as unreliable. Inside the bubble there is no way to check the meaning, because the meaning is not cogenetic; it is consensual.

For instance, if I read and share a post on social media claiming that migrants are the main criminal population, then, whatever my initial position toward the news, there is the possibility that within my group I will start to see only posts confirming the initial fact. The fact can be proven wrong, for instance by the press, but the belief will be hard to change, as the meaning of “migrant” in my bubble is likely to continue being that of “criminal”. The collectivity will share an epistemically unjust position, to the extent that it will attribute a lessened epistemic capability to those who are not part of the group itself. How can one avoid the group scaffolding “bad” epistemic skills, rather than empowering the individual (Vähämaa, 2018)?

The solution I propose is to develop an epistemic virtue based on two main principles: the ethical grounding of meaning and cogenetic logic. The ethical grounding of meaning is directly related to the articulation between common sense and wisdom in the sense of Vico (Tateo, 2015). In a post-truth world in which we cannot appreciate the epistemic foundation of meaning, we must rely on a different epistemic virtue in order to become critical toward messages. Ethical grounding, based on the personal sense of humanity, is of course not an epistemic test of reliability, but it is an alarm bell for becoming legitimately suspicious toward meanings. The second element of the new epistemic virtue is cogenetic logic (Tateo, 2016).

Meaning is grounded in the building of every belief as a complementary system between “A” and “non-A”. This implies that any meaning is constructed through the relationship with its complementary opposite. The truth emerges in a double dialectic movement (Silva Filho, 2014): through Socratic dialogue and through cogenetic logic. In conclusion, let me try to provide a practical example of this epistemic virtue.

The way to begin discriminating potentially fake news or tendentious interpretations of facts would be essentially based on an ethical foundation. As in Vico’s wisdom of common sense, I would base my epistemic scrutiny on the imaginative work that allows me to access the other, and on the cogenetic logic that assumes every meaning is defined by its relationship with its opposite.

Let’s imagine that we are exposed to a post on social media, in which someone states that a caravan of migrants, which is travelling from Honduras across Central America toward the USA border, is actually made of criminals sent by hostile foreign governments to destabilize the country right before elections. The same post claims that it is a conspiracy and that all the press coverage is fake news.

Finally, the post presents some “debunking” pictures showing athletic young Latino men with their faces covered by scarves, to demonstrate that the caravan is not made up of families with children but of “soldiers” in good shape, who don’t look poor and desperate as the “mainstream” media claim. I do not know whether such a post has ever been made; I have simply assembled elements of very common discourses circulating in social media.

The task is now to assess the nature of this message, its meaning and its reliability. I could rely on the group as a ground for assessing statements, to scrutinize their truth and justification. However, due to the “bubble” effect, I may fall into a simple tautological confirmation, due to the configuration of the network of my relations. I would probably find only posts confirming the statements and delegitimizing the opposite positions. In this case, the fact that the group will empower my epistemic confidence is a very dangerous element.

I could extend my search to alternative positions in order to establish a dialogue. However, I might not be able, alone, to find information that can help me assess the statement with respect to its degree of bias. How can I exert my skepticism in a context of post-truth? I propose some initial epistemic moves, based on a common sense approach to meaning-making.

1) I must be skeptical of every message that uses violent, aggressive, discriminatory language; such messages are “fake” by default.

2) I must be skeptical of every message that treats whole social groups as criminals or attacks them, even on the basis of real isolated events, because this interpretation is biased by default.

3) I must be skeptical of every message that attacks or targets persons for their characteristics rather than discussing ideas or behaviors.

Appreciating the hypothetical post about the caravan by the three rules mentioned above, one will immediately see that it violates all of them. Thus, no matter what information is collected by my epistemic bubble, I have justified reasons to be skeptical toward it. The foundation of the meaning of the message will be neither in the group nor in the person. It will be based on the ethical position of common sense’s wisdom.

Contact details: luca@hum.aau.dk

References

Austin, J. L. (1975). How to do things with words. Oxford: Oxford University Press.

Bouchard, D. (2013). The nature and origin of language. Oxford: Oxford University Press.

Danesi, M. (1993). Vico, metaphor, and the origin of language. Bloomington: Indiana University Press.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Horkheimer, M., & Adorno, T. W. (2002). Dialectic of Enlightenment. Trans. Edmund Jephcott. Stanford: Stanford University Press.

Kant, I. (1993) [1785]. Grounding for the Metaphysics of Morals. Translated by Ellington, James W. (3rd ed.). Indianapolis and Cambridge: Hackett.

Keyes, R. (2004). The Post-Truth Era: Dishonesty and Deception in Contemporary Life. New York: St. Martin’s.

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353-369.

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1) http://www.itdl.org/Journal/Jan_05/article01.htm

Silva Filho, W. J. (2014). Davidson: Dialog, dialectic, interpretation. Utopía y praxis latinoamericana, 7(19).

Tateo, L. (2015). Giambattista Vico and the psychological imagination. Culture & Psychology, 21(2), 145-161.

Tateo, L. (2016). Toward a cogenetic cultural psychology. Culture & Psychology, 22(3), 433-447.

Thao, T. D. (2012). Investigations into the origin of language and consciousness. New York: Springer.

Vähämaa, M. (2018). Challenges to groups as epistemic communities: Liminality of common sense and increasing variability of word meanings. Social Epistemology, 32(3), 164-174. DOI: 10.1080/02691728.2018.1458352

Vygotsky, L. S. (2012). Thought and language. Cambridge, MA: MIT press.

Wertsck, J. V. (2000). Vygotsky’s Two Minds on the Nature of Meaning. In C. D. Lee & P. Smagorinsky (eds), Vygotskian perspectives on literacy research: Constructing meaning through collaborative inquiry (pp. 19-30). Cambridge: Cambridge University Press.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “Post-Truths and Inconvenient Facts.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 47-60.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40g

Can one truly refuse to believe facts?
Image by Oxfam International via Flickr / Creative Commons

 

If nothing else, Steve Fuller has his ear to the pulse of popular culture and the academics who engage in its twists and turns. Starting with Brexit and continuing into the Trump-era abyss, “post-truth” was dubbed by the OED as its word of the year in 2016. Fuller has mustered his collected publications to recast the debate over post-truth and frame it within STS in general and his own contributions to social epistemology in particular.

This could have been a public mea culpa of sorts: we, the community of sociologists (along with some straggling philosophers, anthropologists, and perhaps some poststructuralists), may seem, to anyone not reading our critiques carefully, partially responsible for legitimating the dismissal of empirical data, evidence-based statements, and the means by which scientific claims can be deemed not only credible but true. Instead, we are dazzled by a range of historically anchored topics that explain how we got to Brexit and Trump, yet Fuller’s analyses of them don’t ring alarm bells. There is almost a hidden glee that indeed the privileged scientific establishment, insular scientific discourse, and some of its experts who pontificate authoritative consensus claims are all bound to be undone by a rebellion of mavericks and iconoclasts that includes intelligent design promoters and neoliberal freedom fighters.

In what follows, I do not intend to summarize the book, as it is short and entertaining enough for anyone to read on their own. Instead, I wish to outline three interrelated points that one might argue need not be argued but, apparently, do: 1) certain critiques of science have contributed to the Trumpist mindset; 2) the politics of Trumpism is too dangerous to be sanguine about; 3) the post-truth condition is troublesome and insidious. Though Fuller deals with some of these issues, I hope to add some constructive clarification to them.

Part One: Critiques of Science

As Theodor Adorno reminds us, critique is essential not only for philosophy, but also for democracy. He is aware that the “critic becomes a divisive influence, with a totalitarian phrase, a subversive” (1998/1963, 283) insofar as the status quo is being challenged and sacred political institutions might have to change. The price of critique, then, can be high, and therefore critique should be managed carefully and only cautiously deployed. Should we refrain from critique, then? Not at all, continues Adorno.

But if you think that a broad, useful distinction can be offered among different critiques, think again: “[In] the division between responsible critique, namely, that practiced by those who bear public responsibility, and irresponsible critique, namely, that practiced by those who cannot be held accountable for the consequences, critique is already neutralized.” (Ibid. 285) Adorno’s worry is not only that one forgets that “the truth content of critique alone should be that authority [that decides if it’s responsible],” but that when such a criterion is “unilaterally invoked,” critique itself can lose its power and be at the service “of those who oppose the critical spirit of a democratic society.” (Ibid)

In a political setting, the charge of irresponsible critique shuts the conversation down and ensures political hegemony without disruptions. Modifying Adorno’s distinction between (politically) responsible and irresponsible critiques, responsible scientific critiques are constructive insofar as they attempt to improve methods of inquiry, data collection and analysis, and contribute to the accumulated knowledge of a community; irresponsible scientific critiques are those whose goal is to undermine the very quest for objective knowledge and the means by which such knowledge can be ascertained. Questions about the legitimacy of scientific authority are related to but not of exclusive importance for these critiques.

Have those of us committed to the critique of science missed the mark of the distinction between responsible and irresponsible critiques? Have we become so subversive and perhaps self-righteous that science itself has been threatened? Though Fuller is primarily concerned with the hegemony of the sociology of science studies and the movement he has championed under the banner of “social epistemology” since the 1980s, he does acknowledge the Popperians and their critique of scientific progress and even admires the Popperian contribution to the scientific enterprise.

But he is reluctant to recognize the contributions of Marxists, poststructuralists, and postmodernists who have been critically engaging the power of science since the 19th century. Among them, we find Jean-François Lyotard who, in The Postmodern Condition (1984/1979), follows Marxists and neo-Marxists who have regularly lumped science and scientific discourse with capitalism and power. This critical trajectory has been well rehearsed, so suffice it here to say, SSK, SE, and the Edinburgh “Strong Programme” are part of a long and rich critical tradition (whose origins are Marxist). Adorno’s Frankfurt School is part of this tradition, and as we think about science, which had come to dominate Western culture by the 20th century (in the place of religion, whose power had by then waned as the arbiter of truth), it was its privileged power and interlocking financial benefits that drew the ire of critics.

Were these critics “responsible” in Adorno’s political sense? Can they be held accountable for offering (scientific and not political) critiques that improve the scientific process of adjudication between criteria of empirical validity and logical consistency? Not always. Did they realize that their success could throw the baby out with the bathwater? Not always. While Fuller grants Karl Popper the upper hand (as compared to Thomas Kuhn) when indirectly addressing such questions, we must keep an eye on Fuller’s “baby.” It’s easy to overlook the slippage from the political to the scientific and vice versa: Popper’s claim that we never know the Truth doesn’t mean that his (and our) quest for discovering the Truth as such is abandoned; it is only made more difficult, since whatever is scientifically apprehended as truth remains putative.

Limits to Skepticism

What is precious about the baby—science in general, and scientific discourse and its community in more particular ways—is that it offered safeguards against frivolous skepticism. Robert Merton (1973/1942) famously outlined the four features of the scientific ethos, principles that characterized the ideal workings of the scientific community: universalism, communism (communalism, as per the Cold War terror), disinterestedness, and organized skepticism. It is the last principle that is relevant here, since it unequivocally demands an institutionalized mindset of putative acceptance of any hypothesis or theory that is articulated by any community member.

One detects the slippery slope that would move one from being on guard when engaged with any proposal to being so skeptical as to never accept any proposal no matter how well documented or empirically supported. Al Gore, in his An Inconvenient Truth (2006), sounded the alarm about climate change. A dozen years later we are still plagued by climate-change deniers who refuse to look at the evidence, suggesting instead that the standards of science themselves—from the collection of data in the North Pole to computer simulations—have not been sufficiently fulfilled (“questions remain”) to accept human responsibility for the increase of the earth’s temperature. Incidentally, here is Fuller’s explanation of his own apparent doubt about climate change:

Consider someone like myself who was born in the midst of the Cold War. In my lifetime, scientific predictions surrounding global climate change has [sic.] veered from a deep frozen to an overheated version of the apocalypse, based on a combination of improved data, models and, not least, a geopolitical paradigm shift that has come to downplay the likelihood of a total nuclear war. Why, then, should I not expect a significant, if not comparable, alteration of collective scientific judgement in the rest of my lifetime? (86)

Expecting changes in the model does not entail a) that no improved model can be offered; b) that methodological changes in themselves are a bad thing (they might be, rather, improvements); or c) that one should not take action at all based on the current model because in the future the model might change.

The Royal Society of London (1660) set the benchmark of scientific credibility low when it accepted as scientific evidence any report by two independent witnesses. As the years went by, testability (“confirmation,” for the Vienna Circle, “falsification,” for Popper) and repeatability were added as requirements for a report to be considered scientific, and by now, various other conditions have been proposed. Skepticism, organized or personal, remains at the very heart of the scientific march towards certainty (or at least high probability), but when used perniciously, it has derailed reasonable attempts to use science as a means by which to protect, for example, public health.

Both Michael Bowker (2003) and Robert Proctor (1995) chronicle cases where asbestos and cigarette lobbyists and lawyers alike were able to sow enough doubt in the name of attenuated scientific data collection to ward off regulators, legislators, and the courts for decades. Instead of finding sufficient empirical evidence to attribute the failing health condition (and death) of workers and consumers to asbestos and nicotine, “organized skepticism” was weaponized to fight the sick and protect the interests of large corporations and their insurers.

Instead of buttressing scientific claims (that have passed the tests—in refereed professional conferences and publications, for example—of most institutional scientific skeptics), organized skepticism has been manipulated to ensure that no claim is ever scientific enough or has the legitimacy of the scientific community. In other words, what should have remained the reasonable cautionary tale of a disinterested and communal activity (that could then be deemed universally credible) has turned into a circus of fire-blowing clowns ready to burn down the tent. The public remains confused, not realizing that just because the stakes have risen over the decades does not mean there are no standards that ever can be met. Despite lobbyists’ and lawyers’ best efforts of derailment, courts have eventually found cigarette companies and asbestos manufacturers guilty of exposing workers and consumers to deathly hazards.

Limits to Belief

If we add to this logic of doubt, which has been responsible for discrediting science and the conditions for proposing credible claims, a bit of U.S. cultural history, we may enjoy a more comprehensive picture of the unintended consequences of certain critiques of science. Citing Kurt Andersen (2017), Robert Darnton suggests that the Enlightenment’s “rational individualism interacted with the older Puritan faith in the individual’s inner knowledge of the ways of Providence, and the result was a peculiarly American conviction about everyone’s unmediated access to reality, whether in the natural world or the spiritual world. If we believe it, it must be true.” (2018, 68)

This way of thinking—unmediated experiences and beliefs, unconfirmed observations, and disregard of others’ experiences and beliefs—continues what Richard Hofstadter (1962) dubbed “anti-intellectualism.” For Americans, this predates the republic and is characterized by a hostility towards the life of the mind (admittedly, at the time, religious texts), critical thinking (self-reflection and the rules of logic), and even literacy. The heart (our emotions) can more honestly lead us to the Promised Land, whether it is heaven on earth in the Americas or the Christian afterlife; any textual interference or reflective pondering is necessarily an impediment, one to be suspicious of and avoided.

This lethal combination of the life of the heart and righteous individualism brings about general ignorance and what psychologists call “confirmation bias” (the view that we endorse what we already believe to be true regardless of countervailing evidence). The critique of science, along this trajectory, can be but one of many so-called critiques of anything said or proven by anyone whose ideology we do not endorse. But is this even critique?

Adorno would find this a charade, a pretense that poses as a critique but in reality is a simple dismissal without intellectual engagement, a dogmatic refusal to listen and observe. He definitely would be horrified by Stephen Colbert’s oft-quoted quip on “truthiness” as “the conviction that what you feel to be true must be true.” Even those who resurrect Daniel Patrick Moynihan’s phrase, “You are entitled to your own opinion, but not to your own facts,” quietly admit that his admonishment is ignored by media more popular than informed.

On Responsible Critique

But surely there is merit to responsible critiques of science. Weren’t many of these critiques meant to dethrone the unparalleled authority claimed in the name of science, as Fuller admits all along? Wasn’t Lyotard (and Marx before him), for example, correct in pointing out the conflation of power and money in the scientific vortex that could legitimate whatever profit-maximizers desire? In other words, should scientific discourse be put on par with other discourses?  Whose credibility ought to be challenged, and whose truth claims deserve scrutiny? Can we privilege or distinguish science if it is true, as Monya Baker has reported, that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments” (2016, 1)?

Fuller remains silent about these important and responsible questions about the problematics (methodologically and financially) of reproducing scientific experiments. Baker’s report cites Nature‘s survey of 1,576 researchers and reveals “sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Ibid.) So, if science relies on reproducibility as a cornerstone of its legitimacy (and superiority over other discourses), and if the results are so dismal, should it not be discredited?

One answer, given by Hans E. Plesser, suggests that there is a confusion between the notions of repeatability (“same team, same experimental setup”), replicability (“different team, same experimental setup”), and reproducibility (“different team, different experimental setup”). If understood in these terms, it stands to reason that one may not get the same results all the time and that this fact alone does not discredit the scientific enterprise as a whole. Nuanced distinctions take us down a scientific rabbit-hole most post-truth advocates refuse to follow. These nuances are lost on a public that demands to know the “bottom line” in brief sound bites: Is science scientific enough, or is it bunk? When can we trust it?

Trump excels at this kind of rhetorical device: repeat a falsehood often enough and people will believe it; and because individual critical faculties are not a prerequisite for citizenship, post-truth means no truth, or whatever the president says is true. Adorno’s distinction of the responsible from the irresponsible political critics comes into play here; but he innocently failed to anticipate the Trumpian move to conflate the political and scientific and pretend as if there is no distinction—methodologically and institutionally—between political and scientific discourses.

With this cultural backdrop, many critiques of science have undermined its authority and thereby lent credence to any dismissal of science (legitimately by insiders and perhaps illegitimately at times by outsiders). Sociologists and postmodernists alike forgot to put warning signs on their academic and intellectual texts: Beware of hasty generalizations! Watch out for wolves in sheep clothes! Don’t throw the baby out with the bathwater!

One would think such advisories unnecessary. Yet without such safeguards, internal disputes and critical investigations appear to have unintentionally discredited the entire scientific enterprise in the eyes of post-truth promoters, the Trumpists whose neoliberal spectacles filter in dollar signs and filter out pollution on the horizon. The discrediting of science has become a welcome distraction that opens the way to radical free-market mentality, spanning from the exploitation of free speech to resource extraction to the debasement of political institutions, from courts of law to unfettered globalization. In this sense, internal (responsible) critiques of the scientific community and its internal politics, for example, unfortunately license external (irresponsible) critiques of science, the kind that obscure the original intent of responsible critiques. Post-truth claims at the behest of corporate interests sanction a free for all where the concentrated power of the few silences the concerns of the many.

Indigenous-allied protestors block the entrance to an oil facility related to the Kinder-Morgan oil pipeline in Alberta.
Image by Peg Hunter via Flickr / Creative Commons

 

Part Two: The Politics of Post-Truth

Fuller begins his book about the post-truth condition that permeates the British and American landscapes with a look at our ancient Greek predecessors. According to him, “Philosophers claim to be seekers of the truth but the matter is not quite so straightforward. Another way to see philosophers is as the ultimate experts in a post-truth world” (19). This means that those historically entrusted to be the guardians of truth in fact “see ‘truth’ for what it is: the name of a brand ever in need of a product which everyone is compelled to buy. This helps to explain why philosophers are most confident appealing to ‘The Truth’ when they are trying to persuade non-philosophers, be they in courtrooms or classrooms.” (Ibid.)

Instead of being the seekers of the truth, thinkers who care not about what but how we think, philosophers are ridiculed by Fuller (himself a philosopher turned sociologist turned popularizer and public relations expert) as marketing hacks in a public relations company that promotes brands. Their serious dedication to finding the criteria by which truth is ascertained is used against them: “[I]t is not simply that philosophers disagree on which propositions are ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’.” (Ibid.)

Some would argue that the criteria by which propositions are judged to be true or false are worthy of debate, rather than the cavalier dismissal of Trumpists. With criteria in place (even if only by convention), at least we know what we are arguing about, as these criteria (even if contested) offer a starting point for critical scrutiny. And this, I maintain, is a task worth performing, especially in the age of pluralism when multiple perspectives constitute our public stage.

In addition to debasing philosophers, it seems that Fuller reserves a special place in purgatory for Socrates (and Plato) for labeling the rhetorical expertise of the sophists—“the local post-truth merchants in fourth century BC Athens”—negatively. (21) It becomes obvious that Fuller is “on their side” and that the presumed debate over truth and its practices is in fact nothing but “whether its access should be free or restricted.” (Ibid.) In this neoliberal reading, it is all about money: are sophists evil because they charge for their expertise? Is Socrates a martyr and saint because he refused payment for his teaching?

Fuller admits, “Indeed, I would have us see both Plato and the Sophists as post-truth merchants, concerned more with the mix of chance and skill in the construction of truth than with the truth as such.” (Ibid.) One wonders not only if Plato receives fair treatment (reminiscent of Popper’s denigration of Plato as supporting totalitarian regimes, while sparing Socrates as a promoter of democracy), but whether calling all parties to a dispute “post-truth merchants” obliterates relevant differences. In other words, have we indeed lost the desire to find the truth, even if it can never be the whole truth and nothing but the truth?

Political Indifference to Truth

One wonders how far this goes: political discourse without any claim to truth conditions would become nothing but a marketing campaign where money and power dictate the acceptance of the message. Perhaps the intended message here is that contemporary cynicism towards political discourse has its roots in ancient Greece. Regardless, one should worry that such cynicism indirectly sanctions fascism.

Can the poor and marginalized in our society afford this kind of cynicism? For them, unlike their privileged counterparts in the political arena, claims about discrimination and exploitation, about unfair treatment and barriers to voting are true and evidence based; they are not rhetorical flourishes by clever interlocutors.

Yet Fuller would have none of this. For him, political disputes are games:

[B]oth the Sophists and Plato saw politics as a game, which is to say, a field of play involving some measure of both chance and skill. However, the Sophists saw politics primarily as a game of chance whereas Plato saw it as a game of skill. Thus, the sophistically trained client deploys skill in [the] aid of maximizing chance occurrences, which may then be converted into opportunities, while the philosopher-king uses much the same skills to minimize or counteract the workings of chance. (23)

Fuller could be channeling here twentieth-century game theory and its application in the political arena, or the notion offered by Lyotard when describing the minimal contribution we can make to scientific knowledge (where we cannot change the rules of the game but perhaps find a novel “move” to make). Indeed, if politics is deemed a game of chance, then anything goes, and it really should not matter if an incompetent candidate like Trump ends up winning the American presidency.

But is it really a question of skill and chance? Or, as some political philosophers would argue, is it not a question of the best means by which to bring to fruition the best results for the general wellbeing of a community? The point of suggesting the figure of a philosopher-king, to be sure, was not his rhetorical skill in this connection, but rather his deep commitment to rule justly, to think critically about policies, and to treat constituents with respect and fairness. Plato’s Republic, however criticized, was supposed to be about justice, not about expediency; it is an exploration of the rule of law and wisdom, not a manual about manipulation. If the recent presidential election in the US taught us anything, it’s that we should be wary of political gamesmanship and focus on experience and knowledge, vision and wisdom.

Out-Gaming Expertise Itself

Fuller would have none of this, either. It seems that there is virtue in being a “post-truther,” someone who can easily switch between knowledge games, unlike the “truther” whose aim is to “strengthen the distinction by making it harder to switch between knowledge games.” (34) In the post-truth realm, then, knowledge claims are lumped into games that can be played at will, that can be substituted when convenient, without a hint of the danger such capricious game-switching might engender.

It’s one thing to challenge a scientific hypothesis about astronomy because the evidence is still unclear (as Stephen Hawking has done in regard to Black Holes) and quite another to compare it to astrology (and give equal hearings to horoscope and Tarot card readers as to physicists). Though we are far from the Demarcation Problem (between science and pseudo-science) of the last century, this does not mean that there is no difference at all between different discourses and their empirical bases (or that the problem itself isn’t worthy of reconsideration in the age of Fuller and Trump).

On the contrary, it’s because we assume difference between discourses (gray as they may be) that we can move on to figure out on what basis our claims can and should rest. The danger, as we see in the political logic of the Trump administration, is that friends become foes (European Union) and foes are admired (North Korea and Russia). Game-switching in this context can lead to a nuclear war.

In Fuller’s hands, though, something else is at work. Speaking of contemporary political circumstances in the UK and the US, he says: “After all, the people who tend to be demonized as ‘post-truth’ – from Brexiteers to Trumpists – have largely managed to outflank the experts at their own game, even if they have yet to succeed in dominating the entire field of play.” (39) Fuller’s celebratory tone leaves it open whether the “yet” before the success “in dominating the entire field of play” carries a slight warning or a prediction that this is precisely what is about to happen soon enough.

The neoliberal bottom-line surfaces in this assessment: he who wins must be right, the rich must be smart, and more perniciously, the appeal to truth is beside the point. More specifically, Fuller continues:

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins. They are talking about worlds that could have been and still could be—the stuff of modal power. (Ibid.)

By now one should be terrified. This is a strong endorsement of lying as a matter of course, as a way to distract from the details (and empirical bases) of one “knowledge game”—because it may not be to one’s ideological liking—in favor of another that might be deemed more suitable (for financial or other purposes).

The political stakes here are too high to ignore, especially because there are good reasons why “certain parties are advantaged over others” (say, climate scientists “relative to” climate deniers who have no scientific background or expertise). One wonders what it means to talk about “alternative facts” and “alternative science” in this context: is it a means of obfuscation? Is it yet another license granted by the “social constructivist position” not to acknowledge the legal liability of cigarette companies for the addictive power of nicotine? Or the pollution of water sources in Flint, Michigan?

What Is the Mark of an Open Society?

If we corral the broader political logic at hand to the governance of the scientific community, as Fuller wishes us to do, then we hear the following:

In the past, under the inspiration of Karl Popper, I have argued that fundamental to the governance of science as an ‘open society’ is the right to be wrong (Fuller 2000a: chap. 1). This is an extension of the classical republican ideal that one is truly free to speak their mind only if they can speak with impunity. In the Athenian and the Roman republics, this was made possible by the speakers–that is, the citizens–possessing independent means which allowed them to continue with their private lives even if they are voted down in a public meeting. The underlying intuition of this social arrangement, which is the epistemological basis of Mill’s On Liberty, is that people who are free to speak their minds as individuals are most likely to reach the truth collectively. The entangled histories of politics, economics and knowledge reveal the difficulties in trying to implement this ideal. Nevertheless, in a post-truth world, this general line of thought is not merely endorsed but intensified. (109)

To be clear, Fuller not only asks for the “right to be wrong,” but also for the legitimacy of the claim that “people who are free to speak their minds as individuals are most likely to reach the truth collectively.” The first plea is reasonable enough, as humans are fallible (yes, Popper here), and the history of ideas has proven that killing heretics is counterproductive (and immoral). If the Brexit/Trump post-truth age would only usher in greater encouragement for speculation or conjectures (Popper again), then Fuller’s book would be well-placed in the pantheon of intellectual pluralism; but if this endorsement obliterates the distinction between the silly and the informed conjecture, then we are in trouble and the ensuing cacophony will turn us all deaf.

The second claim is at best supported by the likes of James Surowiecki (2004) who has argued that no matter how uninformed a crowd of people is, collectively it can guess the correct weight of a cow on stage (his TED talk). As folk wisdom, this is charming; as public policy, this is dangerous. Would you like a random group of people deciding how to store nuclear waste, and where? Would you subject yourself to the judgment of just any collection of people to decide on taking out your appendix or performing triple-bypass surgery?

When we turn to Trump, his supporters certainly like that he speaks his mind, just as Fuller says individuals should be granted the right to speak their minds (even if in error). But speaking one’s mind can also be a proxy for saying whatever, without filters, without critical thinking, or without thinking at all (let alone consulting experts whose very existence seems to upset Fuller). Since when did “speaking your mind” turn into scientific discourse? It’s one thing to encourage dissent and offer reasoned doubt and explore second opinions (as health care professionals and insurers expect), but it’s quite another to share your feelings and demand that they count as scientific authority.

Finally, even if we endorse the view that we “collectively” reach the truth, should we not ask: by what criteria? according to what procedure? under what guidelines? Herd mentality, as Nietzsche already warned us, is problematic at best and immoral at worst. Trump rallies harken back to the fascist ones we recall from Europe prior to and during WWII. Few today would entrust the collective judgment of those enthusiasts of the Thirties to carry the day.

Unlike Fuller’s sanguine posture, I shudder at the possibility that “in a post-truth world, this general line of thought is not merely endorsed but intensified.” This is neither because I worship experts and scorn folk knowledge nor because I have low regard for individuals and their (potentially informative) opinions. Just as we warn our students that simply having an opinion is not enough, that they need to substantiate it, offer data or logical evidence for it, and even know its origins and who promoted it before they made it their own, so I worry about uninformed (even if well-meaning) individuals (and presidents) whose gut will dictate public policy.

This way of unreasonably empowering individuals is dangerous for their own well-being (no paternalism here, just common sense) as well as for the community at large (too many untrained cooks will definitely spoil the broth). For those who doubt my concern, Trump offers ample evidence: trade wars with allies and foes that cost domestic jobs (despite promising to bring jobs home), nuclear-war threats that resemble a game of chicken (as if no president before him ever faced such an option), and the throwing of public policy procedures into disarray, from immigration regulations to the relaxation of emission controls (ignoring the history of these policies and their failures).

Drought and suffering in Arbajahan, Kenya in 2006.
Photo by Brendan Cox and Oxfam International via Flickr / Creative Commons


Part Three: Post-Truth Revisited

There is something appealing, even seductive, in the provocation to doubt the truth as rendered by the (scientific) establishment, even as we worry about sowing the seeds of falsehood in the political domain. The history of science is the story of authoritative theories debunked, cherished ideas proven wrong, and claims of certainty falsified. Why not, then, jump on the “post-truth” wagon? Would we not unleash the collective imagination to improve our knowledge and the future of humanity?

One of the lessons of postmodernism (at least as told by Lyotard) is that “post-” does not mean “after,” but rather “concurrently,” as another way of thinking all along: just because something is labeled “post-,” as in the case of postsecularism, it doesn’t mean that one way of thinking or practicing has replaced another; it has only displaced it, and both alternatives are still there in broad daylight. Under the rubric of postsecularism, for example, we find religious practices thriving (80% of Americans believe in God, according to a 2018 Pew Research survey), while the number of unaffiliated, atheists, and agnostics is on the rise. Religionists and secularists live side by side, as they always have, more or less agonistically.

In the case of “post-truth,” it seems that one must choose between one orientation and the other, at least for Fuller, who claims to prefer the “post-truth world” to the allegedly hierarchical and submissive world of “truth,” where the dominant establishment shoves its truths down the throats of ignorant and repressed individuals. If post-truth meant, like postsecularism, the realization that truth and provisional or putative truth coexist and are continuously being re-examined, then no conflict would be at play. If Trump’s claims were juxtaposed with those of experts in their respective domains, we would have a lively, and hopefully intelligent, debate. False claims would be debunked, reasonable doubts could be raised, and legitimate concerns might be addressed. But Trump doesn’t consult anyone except his (post-truth) gut, and that is troublesome.

A Problematic Science and Technology Studies

Fuller admits that “STS can be fairly credited with having both routinized in its own research practice and set loose on the general public–if not outright invented—at least four common post-truth tropes”:

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties. (43)

In that sense, then, Fuller agrees that the positive lessons STS wished for the practice of the scientific community may have inadvertently found their way into a post-truth world that may abuse or exploit them in unintended ways. That is, STS challenges something like “consensus” because of how the scientific community pretends to reach it, knowing as it does that no such thing can ever be fully reached, and that when it is reached, it may have been reached for the wrong reasons (leadership pressure, pharmaceutical funding of conferences and journals). But this can also go too far.

Just because consensus is difficult to reach (consensus does not mean unanimity) and is susceptible to corruption or bias doesn’t mean that anything goes. Some experimental results are more acceptable than others, some data are more informative than others, and the struggle for agreement may take its political toll on the scientific community, but this need not result in silly ideas, such as that cigarettes are good for our health or that obesity should be encouraged from early childhood.

It seems important to focus on Fuller’s conclusion because it encapsulates my concern with his version of post-truth, a condition he endorses not only as a description of humanity’s epistemological plight but as an elixir with which to cure humanity’s ills:

While some have decried recent post-truth campaigns that resulted in victory for Brexit and Trump as ‘anti-intellectual’ populism, they are better seen as the growth pains of a maturing democratic intelligence, to which the experts will need to adjust over time. Emphasis in this book has been given to the prospect that the lines of intellectual descent that have characterised disciplinary knowledge formation in the academy might come to be seen as the last stand of a political economy based on rent-seeking. (130)

Here, we are not only afforded a moralizing sermon about (and it must be said, from) the academic privileged position, from whose heights all other positions are dismissed as anti-intellectual populism, but we are also entreated to consider the rantings of the know-nothings of the post-truth world as the “growth pains of a maturing democratic intelligence.” Only an apologist would characterize the Trump administration as mature, democratic, or intelligent. Where’s the evidence? What would possibly warrant such generosity?

It’s one thing to challenge “disciplinary knowledge formation” within the academy, and there are no doubt cases deserving reconsideration as to the conditions under which experts should be paid and by whom (“rent-seeking”); but how can these questions about higher education and the troubled relations between the university system and the state (and with the military-industrial complex) give cover to the Trump administration? Here is Fuller’s justification:

One need not pronounce on the specific fates of, say, Brexit or Trump to see that the post-truth condition is here to stay. The post-truth disrespect for established authority is ultimately offset by its conceptual openness to previously ignored people and their ideas. They are encouraged to come to the fore and prove themselves on this expanded field of play. (Ibid)

This, too, is a logical stretch: is disrespect for the authority of the establishment the same as, or does it logically lead to, the “conceptual” openness to previously “ignored people and their ideas”? This is not a claim on behalf of the disenfranchised. Perhaps their ideas were simply bad or outright racist or misogynist (as we see with Trump). Perhaps they were ignored because there was hope that they would change for the better, become more enlightened, not act on their white supremacist prejudices. Should we have “encouraged” explicit anti-Semitism while we were at it?

Limits to Tolerance

We tolerate ignorance because we believe in education and hope to overcome some of it; we tolerate falsehood in the name of eventual correction. But we should never tolerate offensive ideas and beliefs that are harmful to others. Once again, it is one thing to argue about black holes, and quite another to argue about whether black lives matter. It seems reasonable, as Fuller concludes, to say that “In a post-truth utopia, both truth and error are democratised.” It is also reasonable to say that “You will neither be allowed to rest on your laurels nor rest in peace. You will always be forced to have another chance.”

But the conclusion that “Perhaps this is why some people still prefer to play the game of truth, no matter who sets the rules” (130) does not follow. Those who “play the game of truth” are always vigilant about falsehoods and post-truth claims, and to say that they are simply dupes of those in power is both incorrect and dismissive. On the contrary: Socrates was searching for the truth and fought with the sophists, as Popper fought with the logical positivists and the Kuhnians, and as scientists today are searching for the truth and continue to fight superstitions and debunked pseudoscience about vaccination causing autism in young kids.

If post-truth is like postsecularism, scientific and political discourses can inform each other. When power plays by ignorant leaders like Trump are obvious, they can shed light on less obvious cases, such as those of big pharma leaders or those in charge of the EPA today. In these contexts, inconvenient facts and truths should prevail, and the gamesmanship of post-truthers should be exposed for what motivates it.

Contact details: rsassowe@uccs.edu

* Special thanks to Dr. Denise Davis of Brown University, whose contribution to my critical thinking about this topic has been profound.

References

Theodor W. Adorno (1998/1963), Critical Models: Interventions and Catchwords. Translated by Henry W. Pickford. New York: Columbia University Press

Kurt Andersen (2017), Fantasyland: How America Went Haywire: A 500-Year History. New York: Random House.

Monya Baker, “1,500 scientists lift the lid on reproducibility,” Nature Vol. 533, Issue 7604, 5/26/16 (corrected 7/28/16)

Michael Bowker (2003), Fatal Deception: The Untold Story of Asbestos. New York: Rodale.

Robert Darnton, “The Greatest Show on Earth,” New York Review of Books Vol. LXV, No. 11, 6/28/18, pp. 68-72.

Al Gore (2006), An Inconvenient Truth: The Planetary Emergency of Global Warming and What Can Be Done About It. New York: Rodale.

Richard Hofstadter (1962), Anti-Intellectualism in American Life. New York: Vintage Books.

Jean-François Lyotard (1984), The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press.

Robert K. Merton (1973/1942), “The Normative Structure of Science,” The Sociology of Science: Theoretical and Empirical Investigations. Chicago and London: The University of Chicago Press, pp. 267-278.

Hans E. Plesser, “Reproducibility vs. Replicability: A Brief History of Confused Terminology,” Frontiers in Neuroinformatics, 2017; 11: 76; online: 1/18/18.

Robert N. Proctor (1995), Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer. New York: Basic Books.

James Surowiecki (2004), The Wisdom of Crowds. New York: Anchor Books.

Author Information: Claus-Christian Carbon, University of Bamberg, ccc@experimental-psychology.com

Carbon, Claus-Christian. “A Conspiracy Theory is Not a Theory About a Conspiracy.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 22-25.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yb

See also:

  • Dentith, Matthew R. X. “Expertise and Conspiracy Theories.” Social Epistemology 32, no. 3 (2018), 196-208.

The power, creation, imagery, and proliferation of conspiracy theories are fascinating avenues to explore in the construction of public knowledge and the manipulation of the public for nefarious purposes. Their role in constituting our pop-cultural imaginary and their use as central images in political propaganda are fertile ground for research.
Image by Neil Moralee via Flickr / Creative Commons


The simplest and most natural definition of a conspiracy theory is a theory about a conspiracy. Although this definition seems appealing due to its simplicity and straightforwardness, the problem is that most narratives about conspiracies do not fulfill the necessary requirements of being a theory. In everyday speech, mere descriptions, explanations, or even beliefs are often termed as “theories”—such repeated usage of this technical term is not useful in the context of scientific activities.

Here, a theory does not aim to explain one specific event in time, e.g. the moon landing of 1969 or the assassination of President Kennedy in 1963, but aims at explaining a phenomenon on a very general level; e.g. that things with mass as such gravitate toward one another—independently of the specific natures of such entities. Such an epistemological status is rarely achieved by conspiracy theories, especially the ones about specific events in time. Even the more general claim that so-called chemtrails (i.e. long-lasting condensation trails) are initiated by omnipotent organizations across the planet, across time zones and altitudes, is at most a hypothesis – a rather narrow one – that specifically addresses one phenomenon but lacks the capability to make predictions about other phenomena.

Narratives that Shape Our Minds

So-called conspiracy theories have had a great impact on human history, on the social interaction between groups, on attitudes towards minorities, and on trust in state institutions. There is very good reason to include “conspiracy theories” in the canon of influential narratives, and so it is only logical to direct a lot of scientific effort into explaining and understanding how they operate, how people come to believe in them, and how humans pile up knowledge on the basis of these narratives.

A quick look at publications indexed in Clarivate Analytics’ Web of Science reveals 605 records with “conspiracy theories” as the topic (as of 7 May 2018). These contributions appear mostly in psychology (n=91) and political science (n=70) articles, with a steep increase from about 2013 on, probably due to a special issue (“Research Topic”) in the journal Frontiers in Psychology organized in 2012 and 2013 by Viren Swami and Christopher Charles French.

As we have repeatedly argued (e.g., Raab, Carbon, & Muth, 2017), conspiracy theories are a very common phenomenon. Most people believe in at least some of them (Goertzel, 1994), which already indicates that believers in them do not belong to a minority group, but that it is more or less the conditio humana to include such narratives in the everyday belief system.

So first of all, we can state that most such beliefs are neither pathological nor rare (see Raab, Ortlieb, Guthmann, Auer, & Carbon, 2013), but are largely caused by “good”[1] narratives triggered by context factors (Sapountzis & Condor, 2013) such as a distrusted society. The wide acceptance of many conspiracy theories can be further explained by adaptation effects that bias the standard beliefs (Raab, Auer, Ortlieb, & Carbon, 2013). This view is not undisputed, as many authors identify specific pathological personality traits, such as paranoia (Grzesiak-Feldman & Ejsmont, 2008; Pipes, 1997), which cause, enable, or at least proliferate the belief in conspiracy theories.

In fact, in science we mostly encounter the pathological and pejorative view of conspiracy theories and their believers. This negative connotation, and hence the prejudice toward conspiracy theories, makes it hard to solidly test the facts, ideas, or relationships proposed by such explanatory structures (Rankin, 2017). Especially for conspiracy theories of the so-called “type I”, in which authorities (“the system”) are blamed for conspiracies (Wagner-Egger & Bangerter, 2007), such a prejudice can potentially jeopardize the democratic system (Bale, 2007).

Some of the conspiracies described in conspiracy theories, namely those said to take place at top state levels, could indeed threaten people’s freedom, democracy, and even people’s lives, especially if they turned out to be “true” (e.g. the case of the whistleblower and previously alleged conspiracist Edward Snowden; see Van Puyvelde, Coulthart, & Hossain, 2017).

Understanding What a Theory Genuinely Is

In the present paper, I will focus on another, yet highly important, point which is hardly addressed at all: Is the term “conspiracy theory” an adequate term at all? In fact, the suggestion that a conspiracy theory is a “theory about a conspiracy” (Dentith, 2014, p. 30) is indeed the simplest and seemingly most straightforward definition of “conspiracy theory”. Although appealing and allegedly logical, the term as such is ill-defined. Actually, a “conspiracy theory” refers to a narrative which attributes an event to a group of conspirators. As such, it is clearly justified to associate such a narrative with the term “conspiracy”, but does a conspiracy theory have the epistemological status of a theory?

The simplest definition of a “theory” is that it represents a bundle of hypotheses which can explain a wide range of phenomena. Theories have to integrate the contained hypotheses in a concise, coherent, and systematic way. They have to go beyond the mere piling up of several statements or unlinked hypotheses. The application of a theory allows events or entities which are not explicitly described in the sum of its hypotheses to be generalized over and hence predicted.

For instance, one of the most influential physical theories, the theory of special relativity (German original description “Zur Elektrodynamik bewegter Körper”), contains two hypotheses (Einstein, 1905) on whose basis in addition to already existing theories, we can predict important issues which are not explicitly stated in the theory. Most are well aware that mass and energy are equivalent. Whether we are analyzing the energy of a tossed ball or a static car, we can use the very same theory. Whether the ball is red or whether it is a blue ball thrown by Napoleon Bonaparte does not matter—we just need to refer to the mass of the ball, in fact we are only interested in the mass as such; the ball does not play a role anymore. Other theories show similar predictive power: for instance, they can predict (more or less precisely) events in the future, the location of various types of material in a magnetic field or the trajectory of objects of different speed due to gravitational power.

Most conspiracy theories, however, refer to one single historical event. Looking through the “most enduring conspiracy theories” compiled in 2009 by TIME magazine on the 40th anniversary of the moon landing, it is instantly clear that they have explanatory power for just the specific events on which they are based, e.g. the “JFK assassination” in 1963, the “9/11 cover-up” in 2001, the “moon landings were faked” idea from 1969 or the “Paul is dead” storyline about Paul McCartney’s alleged secret death in 1966. In fact, such theories are just singular explanations, mostly ignoring counter-facts, alternative explanations and already given replies (Votsis, 2004).

But what, then, is the epistemological status of such narratives? Clearly, they aim to explain – and sometimes the explanations are indeed compelling, even coherent. What they mostly cannot demonstrate, though, is the ability to predict other events in other contexts. If these narratives belong to this class of explanatory stories, we should be less liberal in calling them “theories”. Unfortunately, it was Karl Popper himself who coined the term “conspiracy theory” in the 1940s (Popper, 1949)—the same Popper who advocated very strict criteria for scientific theories and in doing so became one of the most influential philosophers of science (Suppe, 1977). This imprecise terminology diluted the genuine meaning of (scientific) theories.

Stay Rigorous

From a language pragmatics perspective, it seems odd to abandon the term conspiracy theory as it is a widely introduced and frequently used term in everyday language around the globe. Substitutions like conspiracy narratives, conspiracy stories or conspiracy explanations would fit much better, but acceptance of such terms might be quite low. Nevertheless, we should at least bear in mind that most narratives of this kind cannot qualify as theories and so cannot lead to a wider research program; although their contents and implications are often far-reaching, potentially important for society and hence, in some cases, also worthy of checking.

Contact details: ccc@experimental-psychology.com

References

Bale, J. M. (2007). Political paranoia v. political realism: on distinguishing between bogus conspiracy theories and genuine conspiratorial politics. Patterns of Prejudice, 41(1), 45-60. doi:10.1080/00313220601118751

Dentith, M. R. X. (2014). The philosophy of conspiracy theories. New York: Palgrave.

Einstein, A. (1905). Zur Elektrodynamik bewegter Körper [On the electrodynamics of moving bodies]. Annalen der Physik und Chemie, 17, 891-921.

Goertzel, T. (1994). Belief in conspiracy theories. Political Psychology, 15(4), 731-742.

Grzesiak-Feldman, M., & Ejsmont, A. (2008). Paranoia and conspiracy thinking of Jews, Arabs, Germans and Russians in a Polish sample. Psychological Reports, 102(3), 884.

Pipes, D. (1997). Conspiracy: How the paranoid style flourishes and where it comes from. New York: Simon & Schuster.

Popper, K. R. (1949). Prediction and prophecy and their significance for social theory. Paper presented at the Proceedings of the Tenth International Congress of Philosophy, Amsterdam.

Raab, M. H., Auer, N., Ortlieb, S. A., & Carbon, C. C. (2013). The Sarrazin effect: The presence of absurd statements in conspiracy theories makes canonical information less plausible. Frontiers in Personality Science and Individual Differences, 4(453), 1-8.

Raab, M. H., Carbon, C. C., & Muth, C. (2017). Am Anfang war die Verschwörungstheorie [In the beginning, there was the conspiracy theory]. Berlin: Springer.

Raab, M. H., Ortlieb, S. A., Guthmann, K., Auer, N., & Carbon, C. C. (2013). Thirty shades of truth: conspiracy theories as stories of individuation, not of pathological delusion. Frontiers in Personality Science and Individual Differences, 4(406).

Rankin, J. E. (2017). The conspiracy theory meme as a tool of cultural hegemony: A critical discourse analysis. (PhD), Fielding Graduate University, Santa Barbara, CA.

Sapountzis, A., & Condor, S. (2013). Conspiracy accounts as intergroup theories: Challenging dominant understandings of social power and political legitimacy. Political Psychology. doi:10.1111/pops.12015

Suppe, F. (Ed.) (1977). The structure of scientific theories (2nd ed.). Urbana: University of Illinois Press.

Van Puyvelde, D., Coulthart, S., & Hossain, M. S. (2017). Beyond the buzzword: Big data and national security decision-making. International Affairs, 93(6), 1397-1416. doi:10.1093/ia/iix184

Votsis, I. (2004). The epistemological status of scientific theories: An investigation of the structural realist account. (PhD), London School of Economics and Political Science, London.

Wagner-Egger, P., & Bangerter, A. (2007). The truth lies elsewhere: Correlates of belief in conspiracy theories. Revue Internationale De Psychologie Sociale-International Review of Social Psychology, 20(4), 31-61.

[1] It is important to stress that a “good narrative” in this context means “an appealing story” in which people are interested; by no means does the author want to allow confusion by suggesting the meaning as being “positive”, “proper”, “adequate” or “true”.

Author Information: Paul Faulkner, University of Sheffield, paul.faulkner@sheffield.ac.uk

Faulkner, Paul. “Fake Barns, Fake News.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 16-21.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Y4

Image by Kathryn via Flickr / Creative Commons


The Twitter feed of Donald Trump regularly employs the hashtag #FakeNews, and refers to mainstream news outlets — The New York Times, CNN etc. — as #FakeNews media. Here is an example from May 28, 2017.

Whenever you see the words ‘sources say’ in the fake news media, and they don’t mention names …

… it is very possible that those sources don’t exist but are made up by the fake news writers. #FakeNews is the enemy!

It is my opinion that many of the leaks coming out of the White House are fabricated lies made up by the #FakeNews media.[1]

Lies and Falsehoods

Now it is undoubted that both fake news items and fake news media exist. A famous example of the former is the BBC Panorama broadcast about spaghetti growers on April Fool’s Day, 1957.[2] A more recent, and notorious, example of the latter is the website ChristianTimesNewspaper.com, set up by Cameron Harris to capitalise on Donald Trump’s support during the election campaign (see Shane 2017).

This website published exclusively fake news items; items such as “Hillary Clinton Blames Racism for Cincinnati Gorilla’s Death”, “NYPD Looking to Press Charges Against Bill Clinton for Underage Sex Ring”, and “Protestors Beat Homeless Veteran to Death in Philadelphia”. And it found commercial success with the headline: “BREAKING: ‘Tens of thousands’ of fraudulent Clinton votes found in Ohio warehouse”. This story was eventually shared with six million people and gained widespread traction, which persisted even after it was shown to be fake.

Fake news items and fake news media exist. However, this paper is not interested in this fact so much as in the fact that President Trump regularly calls real news items fake, and calls the established news media the fake news media. These aspersions are intended to discredit news items and media. And they have had some remarkable success in doing so: Trump’s support has shown a good resistance to the negative press Trump has received in the mainstream media (Johnson 2017).

Moreover, there is some epistemological logic to this: these aspersions insinuate a skeptical argument, and, irrespective of its philosophical merits, this skeptical argument is easy to latch onto and hard to dispel. An unexpected consequence of agreeing with Trump’s aspersions is that these aspersions can themselves be epistemologically rationalized. This paper seeks to develop these claims.

An Illustration from the Heartlands

To start, consider what is required for knowledge. While there is substantial disagreement about the nature of knowledge — finding sufficient conditions is difficult — there is substantial agreement on what is required for knowledge. In order to know: (1) you have to have got it right; (2) it cannot be that you are likely to have got it wrong; and (3) you cannot think that you are likely to have got it wrong. Consider these three necessary conditions on knowledge.

You have to have got it right. This is the most straightforward requirement: knowledge is factive; ‘S knows that p’ entails ‘p’. You cannot know falsehoods, only mistakenly think that you know them. So if you see what looks to you to be a barn on the hill and believe that there is a barn on the hill, you fail to know that there is a barn on the hill if what you are looking at is in fact a barn façade — a fake barn.

It cannot be that you are likely to have got it wrong. This idea is variously expressed in the claims that there is a reliability (Goldman 1979), sensitivity (Nozick 1981), safety (Sosa 2007), or anti-luck (Zagzebski 1994) condition on knowing. That there is such a condition has been acknowledged by epistemologists of an internalist persuasion (Alston 1985; Peacocke 1986). And it is illustrated by the subject’s failure to know in the fake barn case (Goldman 1976). This case runs as follows.

Image by Sonja via Flickr / Creative commons


Henry is driving through the countryside, sees a barn on the hill, and forms the belief that there is a barn on the hill. Ordinarily, seeing that there is a barn on the hill would enable Henry to know that there is a barn on the hill. But the countryside Henry is driving through is peculiar in that there is a proliferation of barn façades — fake barns — and Henry, from the perspective of the highway, cannot tell a genuine barn from a fake barn.

It follows that he would equally form the belief that there is a barn on the hill if he were looking at a fake barn. So his belief that there is a barn on the hill is as likely to be wrong as right. And since it is likely that he has got it wrong, he doesn’t know that there is a barn on the hill. (And he doesn’t know this even though he is looking at a barn on the hill!)

You cannot think that you are likely to have got it wrong. This condition can equally be illustrated by the fake barns case. Suppose Henry learns, say from a guidebook to this part of the countryside, that fake barns are common in this area. In this case, he would no longer believe, on seeing a barn on the hill, that there was a barn on the hill. Rather, he would retreat to the more cautious belief that there was something that looked like a barn on the hill, which might be a barn or might be a barn façade. Or at least this is the epistemically correct response to this revelation.

And were Henry to persist in his belief that there is a barn on the hill, there would be something epistemically wrong with this belief; it would be unreasonable, or unjustified. Such a belief, it is then commonly held, could not amount to knowledge, (Sosa 2007). Notice: the truth of Henry’s worry about the existence of fake barns doesn’t matter here. Even if the guidebook is a tissue of falsehoods and there are no fake barns, once Henry believes that fake barns abound, it ceases to be reasonable to believe that a seen barn on the hill is in fact a barn on the hill.

Truth’s Resilience: A Mansion on a Hill

The fake barns case centres on a case of acquiring knowledge by perception: getting to know that there is a barn on the hill by seeing that there is a barn on the hill. Or, more generally: getting to know that p by seeing that p. The issue of fake news centres on our capacity to acquire knowledge from testimony: getting to know that p by being told that p. Ordinarily, testimony, like perception, is a way of acquiring knowledge about the world: just as seeing that p is ordinarily a way of knowing that p, so too is being told that p. And like perception, this capacity for acquiring knowledge can be disrupted by fakery.

This is because the requirements on knowledge stated above are general requirements — they are not specific to the perceptual case. Applying these requirements to the issue of fake news then reveals the following.

You have to have got it right. From this it follows that there is no knowledge to be got from the fake news item. One cannot get to know that the Swiss spaghetti harvesters had a poor year in 1957, or that Randall Prince stumbled across the ballot boxes. If it is fake news that p, one cannot get to know that p, any more than one can get to know that there is a barn on a hill when the only thing on the hill is a fake. One can get to know other things: that Panorama said that such and such; or that the Christian Times Newspaper said that such and such. But one cannot get to know the content said.

It cannot be that you are likely to have got it wrong. To see what follows from this, suppose that President Trump is correct and the mainstream news media is really the fake news media. On this supposition, most of the news items published by this news media are fake news items. The epistemic position of a consumer of news media is then parallel to Henry’s epistemic position in driving through fake barn country. Even if Henry is looking at a (genuine) barn on the hill, he is not in a position to know that there is a barn on the hill given that he is in fake barn country and, as such, is as likely wrong as right with respect to his belief that there is a barn on the hill.

Similarly, even if the news item that p is genuine and not fake, a news consumer is not in a position to get to know that p insofar as fakes abound and their belief that p is as likely to be wrong as right. This parallel assumes that the epistemic subject cannot tell real from fake. This assumption is built into the fake barn case: from the road Henry cannot discriminate real from fake barns. And it follows in the fake news case from the supposition that President Trump is correct in his aspersions.

That is, if it is really true that The New York Times and CNN are fake news media, as supposed, then the ordinary news consumer is wrong to discriminate between these news media and the Christian Times Newspaper, say. And it thereby shows that the ordinary news consumer possesses the same insensitivity to fake news items that Henry possesses to fake barns. So if President Trump is correct, there is no knowledge to be had from the mainstream news media. Of course, he is not correct: these are aspersions, not statements of fact. However, even aspersions can be epistemically undermining, as can be seen next.

You cannot think that you are likely to have got it wrong. Thus, in the fake barns case, if Henry believes that fake barns proliferate, he cannot know there is a barn on the hill on the basis of seeing one. The truth of Henry’s belief is immaterial to this conclusion. Now let ‘Trump’s supporters’ refer to those who accept Trump’s aspersions of the mainstream news media. Trump’s supporters thereby believe that mainstream news items concerning Trump are fake news items, and believe more generally that these news media are fake news media (at least when it comes to Trump-related news items).

It follows that a Trump supporter cannot acquire knowledge from the mainstream news media when the news is about Trump. And it also follows that Trump supporters are being quite epistemically reasonable in their rejection of mainstream news stories about Trump. (One might counter, ‘at least insofar as their starting point is epistemically reasonable’; but it will turn out below that an epistemological rationalization can be given of this starting point.)

Image by Sonja via Flickr / Creative Commons


Always Already Inescapably Trapped

Moreover, arguably it is not just the reasonableness of accepting mainstream news stories about Trump that is undermined, for Trump’s aspersions insinuate the following skeptical argument. Suppose again that Trump’s aspersions of the mainstream news media are correct, and call this the fake news hypothesis. Given the fake news hypothesis, it follows that we lack the capacity to discriminate fake news items from real news items. Given the fake news hypothesis combined with this discriminative incapacity, the mainstream news media is not a source of knowledge about Trump; that is, it is not a source of knowledge about Trump even if its news items are genuine and presented as such.

At this point, skeptical logic kicks in. To illustrate this, consider the skeptical hypothesis that one is a brain-in-a-vat. Were one a brain-in-a-vat, perception would not be a source of knowledge. So insofar as one thinks that perception is a source of knowledge, one needs a reason to reject the skeptical hypothesis. But any reason one ordinarily has, one lacks under the supposition that the skeptical hypothesis is true. Thus, merely entertaining the skeptical hypothesis as true threatens to dislodge one’s claim to perceptual knowledge.

Similarly, the fake news hypothesis entails that the mainstream news media is not a source of knowledge about Trump. Since this conclusion is epistemically unpalatable, one needs a reason to reject the fake news hypothesis. Specifically, one needs a reason for thinking that one can discriminate real Trump-related news items from fake ones. But the reasons one ordinarily has for this judgement are undermined by the supposition that the fake news hypothesis is true.

Thus, merely entertaining this hypothesis as true threatens to dislodge one’s claim to mainstream news-based knowledge about Trump. Three things follow. First, Trump supporters’ endorsement of the fake news hypothesis does not merely make it reasonable to reject mainstream media claims about Trump—by the fake barns logic—this endorsement further supports a quite general epistemic distrust of the mainstream news media—by this skeptical reasoning. (It is not just that the mainstream news media conveys #FakeNews, it is the #FakeNews Media.)

Second, through presenting the fake news hypothesis, Trump’s aspersions of mainstream media encourage us to entertain a hypothesis that insinuates a skeptical argument with this radical conclusion. And if any conclusion can be drawn from philosophical debate on skepticism, it is that it is hard to refute skeptical reasoning once one is in the grip of it. Third, what is thereby threatened is both our capacity to acquire Trump-related knowledge that would ground political criticism, and our epistemic reliance on the institution that provides a platform for political criticism. Given these epistemic rewards, Trump’s aspersions of the mainstream news media have a clear political motivation.

Aspersions on the Knowledge of the People

However, I’d like to end by considering their epistemic motivation. Aren’t groundless accusations of fakery straightforwardly epistemically unreasonable? Doesn’t the fake news hypothesis have as much to recommend it as the skeptical hypothesis that one is a brain-in-a-vat? That is to say, doesn’t it have very little to recommend it? Putting aside defences of the epistemic rationality of skepticism, the answer is still equivocal. From one perspective: yes, these declarations of fakery have little epistemic support.

This is the perspective of the enquirer. Supposing a given news item addresses the question of whether p, then where the news item declares p, Trump declares not-p. The epistemic credentials of these declarations then come down to which of them tracks the evidence, and while each case would need to be considered individually, it would be reasonable to speculate that the canons of mainstream journalism are epistemically superior.

However, from another perspective: no, these declarations of fakery are epistemically motivated. This is the perspective of the believer. For suppose that one is a Trump supporter, as Trump clearly is, and so believes the fake news hypothesis. Given this hypothesis, the truth of a mainstream news item about Trump is immaterial to the epistemic standing of a news consumer. Even if the news item is true, the news consumer can no more learn from it than Henry can get to know that there is a barn on the hill by looking at one.

But if the truth of a Trump-related news item is immaterial to the epistemic standing of a news consumer, then it seems that epistemically, when it comes to Trump-related news, the truth simply doesn’t matter. But to the extent that the truth doesn’t matter, there really is no distinction to be drawn between the mainstream media and the fake news media when it comes to Trump-related news items. Thus, there is a sense in which the fake news hypothesis is epistemically self-supporting.

Contact details: paul.faulkner@sheffield.ac.uk

References

Alston, W. 1985. “Concepts of Justification”. The Monist 68 (1).

Johnson, J. and Weigel, D. 2017. “Trump supporters see a successful president — and are frustrated with critics who don’t”. The Washington Post. Available from http://wapo.st/2lkwi96.

Goldman, Alvin. 1976. “Discrimination and Perceptual Knowledge”. Journal of Philosophy 73:771-791.

Goldman, Alvin. 1979. “What Is Justified Belief?”. In Justification and Knowledge, edited by G. S. Pappas. Dordrecht: D. Reidel.

Nozick, R. 1981. Philosophical Explanations. Cambridge, MA.: Harvard University Press.

Peacocke, C. 1986. Thoughts: An Essay on Content. Oxford: Basil Blackwell.

Shane, Scott. 2017. “From Headline to Photograph, a Fake News Masterpiece”. The New York Times. Available from https://nyti.ms/2jyOcpR.

Sosa, Ernest. 2007. A Virtue Epistemology: Apt Belief and Reflective Knowledge, Volume 1. Oxford: Clarendon Press.

Zagzebski, L. 1994. “The Inescapability of Gettier Problems”. The Philosophical Quarterly 44 (174):65-73.

[1] See <https://twitter.com/realDonaldTrump>.

[2] See <http://news.bbc.co.uk/onthisday/hi/dates/stories/april/1/newsid_2819000/2819261.stm>.

Author Information: Gregory Nelson, Northern Arizona University, nelsong@vt.edu

Nelson, Gregory. “Putting The Deceptive Activist into Conversation: A Review and a Response to Rappert.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 33-35.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Oe

Image credit: Irene Publishing

The Deceptive Activist
Brian Martin
Irene Publishing (Creative Commons Attribution 2.0)
168 pp.
http://www.bmartin.cc/pubs/17da/index.html

Brian Martin’s The Deceptive Activist offers a critical and timely commentary on the role and use of lying and deception in the realm of politics. According to Martin, lying and deception are as constitutive of social interactions as the technologies of truth-telling. Lying and truth-telling are two sides of the same coin of communication. Instead of depreciating lying and deception as things to avoid on Kantian moral grounds, Martin makes the case that lying and deceit are quotidian, fundamental, and natural to human communication.

Martin wants readers to think strategically about the role of lying and deception, using context-dependent analysis of how deception can be beneficial in certain circumstances. Martin “…aims in this book to highlight the tensions around activism, openness and honesty.”[1] The central argument of the book is that lying and deception are critical and routinely deployed tools that activists use to pursue social change. Instead of debating the moral status of deception in a zero-sum game, he asks readers to think about the role of deception by strategically analyzing the means of lying and deceit vis-à-vis the end goal of effecting political change through non-violence and harm reduction.

A Proper Forum

In Brian Rappert’s review of Brian Martin’s The Deceptive Activist, Rappert raises the critical question of the proper forum for a discussion of a book about deception and the use of deception in society. The importance of Rappert’s call for such a forum cannot be overstated. The use of deception is a slippery slope, as it requires an evaluation of the means deployed and the ends desired. History is rife with examples of attempts to pursue noble ends using means that are in the end revealed as ethically compromised and corrupting of the whole project. Rappert’s review of The Deceptive Activist lays the ground for such a discussion to emerge. Certainly a book review cannot supply all of the careful, meticulous, and robust debate needed to formulate such a discussion of lying and deception in more neutral and strategic terms; however, we can begin to use Martin’s work as an opportunity to acknowledge the pervasive role of deception even in the circles of activists who promote justice, peace, compassion, and empathy.

It would be beneficial to develop an edited volume on lying and deception in society. Science and Technology Studies offers us the ability to conceptualize lying and deception as social and political technologies deployed in the wielding of power. The nuance that Martin’s account brings is the readiness to discuss these technologies as useful tools in activist endeavors to pursue their ideals of change and justice. Martin gives readers frequent examples of how powerful actors use deception to control narratives of their activities in order to positively influence the perception of their image. For Martin the crucial work “…should be to work out when deception is necessary or valuable.”[2] He proposes criteria for evaluating when deception should be deployed, based on “harm, fairness, participation, and prefiguration.”[3] These criteria are applicable to activist decisions about when to keep a secret, leak information, plan an action, communicate confidentially, infiltrate the opposition, deploy masks at a protest, or circulate disinformation about a political opponent.

However, in a world in which deception is normalized, his criteria run the risk of ignoring how deceit, when mobilized by powerful actors, can threaten the less powerful. That the means of evaluating deception should be developed by small groups of activists, without a way to condemn the use of deceit by the powerful to harm the less powerful, leaves the reader wanting more. Martin’s criteria were developed specifically to evaluate when deception might be justified by activist groups who stand in asymmetrical power relations to the wielders of state and corporate power. The tension that emerges from Martin’s book is between the use of deception by small groups and its use by large, highly centralized, and powerful state authorities. Martin explains, “By being at the apex of a bureaucratic organization or prestige system, authorities have more power and a greater ability to prevent any adverse reactions due to deceptions that serve their interests.”[4]

Deception and Defactualization

Martin attempts to negotiate around this problem of recognizing deception as an important tool in activist struggles while also condemning history’s greatest abuses of deception by defining assessment criteria to evaluate the context and nuance of when deception should be used according to an ethic of minimal harm. Martin suggests “… assessments are dependent on the context. Still, there are considerable differences in the possible harms involved.” The way out of the ethical tensions that arise when those seeking to do good use the means of deception is to turn to assessing “situations according to the features of effective nonviolent action.”[5] I am not convinced that this is enough to effectively deal with the dilemmas that arise when the power of deception is harnessed even in search of what are seemingly good and just ends. After all, do we want to live in a world in which the ends justify the means, or the means become the ends in themselves? I can think of plenty of examples in which this type of thinking bleeds.

Martin’s work calls us to reconsider the critiques of deception developed by Hannah Arendt in Crises of the Republic. Arendt writes, “In the realm of politics, where secrecy and deliberate deception have always played a significant role, self-deception is the danger par excellence; the self-deceived deceiver loses all contact with not only his audience, but also the real world, which still will catch up with him, because he can remove his mind from it but not his body.”[6] The dangerous step in the use of the means and power of deception in the pursuit of just ends lies in the corruption of those ends through defactualization.

Defactualization is a term used by Arendt for the process by which the self-deceived lose the ability to distinguish between fact and fiction. The defactualization of the world, created by the self-deceiver, engulfs him because he can no longer see reality as it stands. The self-deceiver accommodates the facts to suit his or her assumptions: the process of defactualization. The actor becomes blind through his lies and can no longer distinguish truth from falsehood. Martin does not leave a critique of self-deception by the wayside, but his brief treatment of it at the end of his work forces us to find the space in which we can have a more robust and developed conversation, per Rappert’s concern.

In the post-truth world, The Deceptive Activist is an immensely powerful work that helps to propel us to critically and strategically examine deception, in our own practices, in the era of the grand master of deception: Trump. Daily we are bombarded by various deceptions through the President’s Twitter. Exposing the number of Trump’s lies from inauguration crowd size to healthcare to climate change to taxes is a tiresome and arduous task. When one lie is exposed another is already communicated. The extensive amount of lies leveraged on a daily basis deflates the power of activists to expose and reveal the lies.

In the post-truth era the spectacle of exposing lies and deceptions has become so routine that it loses meaning and becomes part of the static of public discourse on contemporary events. There is no more shock value in the exposure of lies. Lying is normalized to the point of meaninglessness. While Martin’s work provides crucial analysis of how lying and deception are fundamental to everyday interactions, the acceptance of this reality should be constantly questioned and critically analyzed. The Deceptive Activist carefully paints a spectrum of how lying is used in everyday human relationships to reflect on the need for activists to practice critical self-analysis of the methods of deception they often deploy in their agendas to pursue change in society. Martin concludes by discussing what so concerned Hannah Arendt over 50 years ago: self-deception. This even more dangerous form of deception should be questioned. In the Trumpian age we must find the space to have discussions on deception, lying, and defactualization while resisting the temptation to self-deceive.

References

Arendt, Hannah. Crises of the Republic; Lying in Politics, Civil Disobedience, On Violence, Thoughts on Politics, and Revolution. 1st ed. New York: Harcourt Brace Jovanovich, 1972.

Martin, Brian. The Deceptive Activist. Sparsnas, Sweden: Irene Publishing, 2017.

Rappert, Brian. “Brian Martin’s The Deceptive Activist: A Review.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 52-55.

[1] Brian Martin, The Deceptive Activist (Sparsnas, Sweden: Irene Publishing, 2017), 3.

[2] Ibid., 156.

[3] Ibid., 153.

[4] Ibid., 25.

[5] Ibid., 144.

[6] Hannah Arendt, Crises of the Republic; Lying in Politics, Civil Disobedience, On Violence, Thoughts on Politics, and Revolution, 1st ed. (New York: Harcourt Brace Jovanovich, 1972), 36.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “Veritism as Fake Philosophy: Reply to Baker and Oreskes.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 47-51.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3M3

Image credit: elycefeliz, via flickr

John Stuart Mill and Karl Popper would be surprised to learn from Baker and Oreskes (2017) that freedom is a ‘non-cognitive’ value. Insofar as freedom—both the freedom to assert and the freedom to deny—is a necessary feature of any genuine process of inquiry, one might have thought that it was one of the foundational values of knowledge. But of course, Baker and Oreskes are using ‘cognitive’ in a more technical sense, one introduced by the logical positivists that remains largely intact in contemporary analytic epistemology and philosophy of science. It was also prevalent in post-war history and sociology of science prior to the rise of STS. This conception of the ‘cognitive’ trades on a clear distinction between what lies ‘inside’ and ‘outside’ a conceptual framework—in this case, the conceptual framework of science. But there’s a sting in the tail.

An Epistemic Game

Baker and Oreskes don’t seem to realize that this very conception of the ‘cognitive’ is in the post-truth mould that I defend. After all, for the positivists, ‘truth’ is a second order concept that lacks any determinate meaning except relative to the language in terms of which knowledge claims can be expressed. It was in this spirit that Rudolf Carnap thought that Thomas Kuhn’s ‘paradigm’ had put pragmatic flesh on the positivists’ logical bones (Reisch 1991). (It is worth emphasizing that Carnap passed this judgement before Kuhn’s fans turned him into the torchbearer for ‘post-positivist’ philosophy of science.) At the same time, this orientation led the positivists to promote—and try to construct—a universal language of science into which all knowledge claims could be translated and evaluated.

All of this shows that the positivists weren’t ‘veritists’ because, unlike Baker and Oreskes, they didn’t presuppose the existence of some univocal understanding of truth that all sincere inquirers will ultimately reach. Rather, truth is just a general property of the language that one decides to use—or the game one decides to play. In that case ‘truth’ corresponds to satisfying ‘truth conditions’ as specified by the rules of a given language, just as ‘goal’ corresponds to satisfying the rules of play in a given game.

To be sure, the positivists complicated matters because they also took seriously that science aspires to command universal assent for its knowledge claims, in which case science’s language needs to be set up in a way that enables everyone to transact their knowledge claims inside it; hence, the need to ‘reduce’ such claims to their calculable and measurable components. This effectively put the positivists in partial opposition to all the existing sciences of their day, each with its own parochial framework governed by the rules of its distinctive language game. The need to overcome this tendency explains the project of an ‘International Encyclopedia of Unified Science’.

In short, logical positivism was about designing an epistemic game—which they called ‘science’—that anyone could play and potentially win.

Given some of the things that Baker and Oreskes impute to me, they may be surprised to learn that I actually think that the logical positivists—as well as Mill and Popper—were on the right track. Indeed, I have always believed this. But these views have nothing to do with ‘veritism’, which I continue to put in scare quotes because, in the spirit of our times, it’s a bit of ‘fake philosophy’. It may work to shore up philosophical authority in public but fails to capture the conflicting definitions and criteria that philosophers themselves have offered not only for ‘truth’ but also for such related terms as ‘evidence’ and ‘validation’. All of these key epistemological terms are essentially contested concepts within philosophy. It is not simply that philosophers disagree on what is, say, ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’. (I summarize the issue here.)

Philosophical Fakeness

Richard Rorty became such a hate figure among analytic philosophers because he called out the ‘veritists’ on their fakeness. Yes, philosophers can tell you what truth is, but just as long as you accept a lot of contentious assumptions—and hope those capable of contending those assumptions aren’t in the room when you’re speaking!  Put another way, Rorty refused to adopt a ‘double truth’ doctrine for philosophy, whereby amongst themselves philosophers adopt a semi-detached attitude towards various conflicting conceptions of truth while at the same time presenting a united front to non-philosophers, lest these masses start to believe some disreputable things.

The philosophical ‘fakeness’ of veritism is exemplified in the following sentence, which appears in Baker and Oreskes’ (2017, 69) latest response:

On the contrary, truth (along with evidence, facts, and other words science studies scholars tend to relegate to scare quotes) is a far more plausible choice for one of a potential plurality of regulative ideals for an enterprise that, after all, does have an obviously cognitive function.

The sentence prima facie commits the category mistake of presuming that ‘truth’ is one more—albeit preferred—possible regulative ideal of science alongside, say, instrumental effectiveness, cultural appropriateness, etc. However, ‘truth’ in the logical positivist sense is a feature of all regulative ideals of science, each of which should be understood as specifying a language game that is governed by its own validation procedures—the rules of the game, if you will—in terms of which one theory is determined (or ‘verified’) to be, say, more effective than another or more appropriate than another.

Notice I said ‘prima facie.’ My guess is that when Baker and Oreskes say ‘truth’ is a regulative ideal of science, they are simply referring to a social arrangement whereby the self-organizing scientific community is the final arbiter on all knowledge claims accepted by society at large. As they point out, the scientific community can get things wrong—but things become wrong only when the scientific community says so, and they become fixed only when the scientific community says so. In short, under the guise of ‘truth’, Baker and Oreskes are advocating what I have called ‘cognitive authoritarianism’ (Fuller 1988, chapter 12).

Before ending with a brief discussion of what I think may be true about ‘veritism’, it is difficult not to notice the moralism associated with Baker and Oreskes’ invocation of ‘truth’. This carries over to such other pseudo-epistemic concepts as ‘trust’ and ‘reliability’, which are seen as marks of the scientific character, whereby ‘scientific’ attaches both to a body of knowledge and the people who produce that knowledge. I say ‘pseudo’ because there is no agreed measure of these qualities.

Regarding Trust

‘Trust’ is a quality whose presence is felt mainly as a double absence, namely, a studied refusal to examine knowledge claims for oneself which is subsequently judged to have had non-negative consequences.  (I have called trust a ‘phlogistemic’ concept for this reason, as it resembles the pseudo-element phlogiston, Fuller 1996). Indeed, in opposition to this general sensibility, I have gone so far as to argue that universities should be in the business of ‘epistemic trust-busting’. Here is my original assertion:

In short, universities function as knowledge trust-busters whose own corporate capacities of “creative destruction” prevent new knowledge from turning into intellectual property (Fuller 2002, 47; italics in original).

By ‘corporate capacities’, I meant the various means at the university’s disposal to ensure that the people in a position to take forward new knowledge are not simply part of the class of those who created it in the first place. More concretely, of course I have in mind ordinary teaching that aims to express even the most sophisticated concepts in terms ordinary students can understand and use. But also I mean to include ‘affirmative action’ policies that are specifically designed to incorporate a broader range of people than might otherwise attend the university. Taken together, these counteract the ‘neo-feudalism’ to which academic knowledge production is prone—‘rent-seeking’, if you will—which Baker and Oreskes appear unable to recognize.

As for ‘reliability’, it is a term whose meaning depends on specifying the conditions—say, in the design of an experiment—under which a pattern of behaviour is expected to occur. Outside of such tightly defined conditions, which is where most ‘scientific controversies’ happen, it is not clear how cases should be classified and counted, and hence what ‘reliable’ means. Indeed, STS has not only drawn attention to this fact but it has gone further—say, in the work of Harry Collins—to question whether even lab-based reliability is possible without some sort of collusion between researchers. In other words, the social accomplishment of ‘reliable knowledge’ is at least partly an expression of solidarity among members of the scientific community—a closing of the ranks, to put it less charitably.

An especially good example of the foregoing is what has been dubbed ‘Climategate’, which involved the release of e-mails from the UK’s main climate science research group in response to a journalist’s Freedom of Information request. While no wrongdoing was formally established, the e-mails did reveal the extent to which scientists from across the world effectively conspired to present the data for climate change in ways that obscured interpretive ambiguities, thereby pre-empting possible appropriations by so-called ‘climate change sceptics’. To be sure, from the symmetrical normative stance of classic STS, Climategate simply reveals the micro-processes by which a scientific consensus is normally and literally ‘manufactured’. Nevertheless, I doubt that Baker and Oreskes would turn to Climategate as their paradigm case of a ‘scientific consensus’. But why not?

The reason is that they refuse to acknowledge the labour that is involved in securing collective assent over any significant knowledge claim. As I observed in my original response (2017) to Baker and Oreskes, one might be forgiven for concluding from reading the likes of Merton, Habermas and others who see consensus formation as essential to science that an analogue of the ‘invisible hand’ is at play. On their telling, informed people draw the same conclusions from the same evidence. The actual social interaction of the scientists carries little cognitive weight in its own right. Instead it simply reinforces what any rational individual is capable of inferring for him- or herself in the same situation. At most, other people provide additional data points but they don’t alter the rules of right reasoning. Ironically, considering Baker and Oreskes’ allergic reaction to any talk of science as a market, this image of Homo scientificus to which they attach themselves seems rather like what they don’t like about Homo oeconomicus.

Climbing the Mountain

The contrasting view of consensus formation, which I uphold, is more explicitly ‘rhetorical’. It appeals to a mix of strategic and epistemic considerations in a setting where the actual interaction between the parties sets the parameters that defines the scope of any possible consensus. Although Kuhn also valorized consensus as the glue that holds together normal science puzzle-solving, to his credit he clearly saw its rhetorical and even coercive character, from pedagogy to peer review. For this reason, Kuhn is the one who STSers still usually cite as a precursor on this matter. Unlike Baker and Oreskes, he didn’t resort to the fake philosophy of ‘veritism’ to cover up the fact that truth is ultimately a social achievement.

Finally, I suggested that there may be a way of redeeming ‘veritism’ from its current status of fake philosophy. Just because ‘truth’ is what W.B. Gallie originally called an ‘essentially contested concept’, it doesn’t follow that it is a mere chimera. But how to resolve truth’s palpable diversity of conceptions into a unified vision of reality? The clue to redemption is provided by Charles Sanders Peirce, whose idea of truth as the final scientific consensus informs Baker and Oreskes’ normative orientation. Peirce equated truth with the ultimate theory of everything, which amounts to putting everything in its place, thereby resolving all the internal disagreements of perception and understanding that are a normal feature of any active inquiry. It’s the moment when the blind men in the Hindu adage discover the elephant they’ve been groping and (Popper’s metaphor) the climbers coming from different directions reach the same mountain top.[1]

Peirce’s vision was informed by his understanding of John Duns Scotus, the early fourteenth-century scholastic who provided a deep metaphysical understanding of Augustine’s Platonic reading of the Biblical Fall of humanity. Our ‘fallen’ state consists in the dismemberment of our divine nature, something that is regularly on display in the variability of humans with regard to the virtues, all of which God displays to their greatest extent. For example, the most knowledgeable humans are not necessarily the most benevolent. The journey back to God is basically one of putting these pieces—the virtues—back together again into a coherent whole.

At the level of organized inquiry, we find a similar fragmentation of effort, as the language game of each science exaggerates certain modes of access to reality at the expense of others. To be sure, Kuhn and STS accept, if not outright valorise, disciplinary specialisation as a mark of the increasing ‘complexification’ of the knowledge system. Not surprisingly, perhaps, they also downplay the significance of ‘truth’ in the capital ‘T’ sense that Baker and Oreskes valorise. One obvious solution would be for defenders of ‘veritism’ to embrace an updated version of the ‘unified science’ project championed by the logical positivists, which aimed to integrate all forms of knowledge in terms of some common currency of intellectual exchange. (My earlier comments against ‘neo-feudal’ tendencies in academia should be seen in this light.) This would be the analogue of the original theological project of humanity reconstituting its divine nature, which Peirce secularised as the consensus theory of truth. Further considerations along these lines may be found here.

References

Baker, Erik and Naomi Oreskes. “Science as a Game, Marketplace or Both: A Reply to Steve Fuller.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 65-69.

Fuller, Steve. Social Epistemology. Bloomington, IN: Indiana University Press, 1988.

Fuller, Steve. “Recent Work in Social Epistemology.” American Philosophical Quarterly 33 (1996): 149-66.

Fuller, Steve. Knowledge Management Foundations. Woburn, MA: Butterworth-Heinemann, 2002.

Fuller, Steve. “What are You Playing At? On the Use and Abuse of Games in STS.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 39-49.

Reisch, George A. “Did Kuhn Kill Logical Positivism?” Philosophy of Science 58, no. 2 (1991): 264-277.

[1] One might also add the French word for ‘groping’, tâtonnement, common to Turgot’s and Walras’ understanding of how ‘general equilibrium’ is reached in the economy, as well as Teilhard de Chardin’s conception of how God comes to be fully realized in the cosmos.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “What are You Playing At? On the Use and Abuse of Games in STS.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 39-49.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3JC

Image credit: PGuiri, via flickr

What follows is an omnibus reply to various pieces that have been recently written in response to Fuller (2017), where I endorsed the post-truth idea of science as a game—an idea that I take to have been a core tenet of science and technology studies (STS) from its inception. The article is organized along conceptual lines, taking on Phillips (2017), Sismondo (2017) and Baker and Oreskes (2017) in roughly that order, which in turn corresponds to the degree of sympathy (from more to less) that the authors have with my thesis.

What It Means to Take Games Seriously

Amanda Phillips (2017) has written a piece that attempts to engage with the issues I raised when I encouraged STS to own the post-truth condition, which I take to imply that science in some deep sense is a ‘game’. What she writes is interesting but a bit odd, since in the end she basically proposes STS’s current modus operandi as if it were a new idea. But we’ve already seen Phillips’ future, and it doesn’t work. And she’s far from alone, as we shall see.

On the game metaphor itself, some things need to be said. First of all, I take it that Phillips largely agrees with me that the game metaphor is appropriate to science as it is actually conducted. Her disagreement is mainly with my apparent recommendation that STS follow suit. She raises the example of the mortar kick’s introduction into US football: a play that stays within the rules but threatens player safety. This leads her to conclude that the mortar kick debases, or at least jeopardizes, the spirit of the game. I may well agree with her on this point, which she wishes to present as akin to a normative stance appropriate to STS. However, I cannot tell for sure, given only the evidence she provides. I’d also like to see whether she would have disallowed past innovations that changed the play of the game—and, if so, which ones. In other words, I need a clearer sense of what she takes to be the ‘spirit of the game’, which involves inter alia judgements about tolerable risks over a period of time.

To be sure, judicial decisions normally have this character. Sometimes judges issue ‘landmark decisions’ which may invalidate previous judges’ rulings but, in any case, set a precedent on the basis of which future decisions should be made. Bringing it back to the case at hand, Phillips might say that football has been violating its spirit for a long time and that not only should the mortar kick be prohibited but so too some other earlier innovations. (In US Constitutional law, this would be like the history of judicial interpretation of citizen rights following the passage of the Fourteenth Amendment, at least starting with Brown v. Board of Education.) Of course, Phillips might instead give a more limited ruling that simply claims that the mortar kick is a step too far in the evolution of the game, which so far has stayed within its spirit. Or, she might simply judge the mortar kick to be within the spirit of the game, full stop. The arguments used to justify any of these decisions would be an exercise in elucidating what the ‘spirit of the game’ means.

I do not wish to be persnickety but to raise a point about what it means to think about science as a game. It means, at the very least, that science is prima facie an autonomous activity in the sense of having clear boundaries. Just as one knows when one is playing or not playing football, one knows when one is or is not doing science.  Of course, the impact that has on the rest of society is an open question. For example, once dedicated schools and degree programmes were developed to train people in ‘science’ (and here I mean the term in its academically broadest sense, Wissenschaft), especially once they acquired the backing and funding of nation-states, science became the source of ultimate epistemic authority in virtually all policy arenas. This was something that really only began to happen in earnest in the second half of the nineteenth century.

Similarly, one could imagine a future history of football, perhaps inspired by the modern Olympics, in which larger political units acquire an interest in developing the game as a way of resolving their own standing problems that might otherwise be handled with violence, sometimes on a mass scale. In effect, the Olympics would be a regularly scheduled, sublimated version of a world war. In that possible world, football—as one of the represented sports—would come to perform the functions for which armed conflict is now used. Here sports might take inspiration from the various science ‘races’ in which the Cold War was conducted. The race to the Moon, notably, was a highly successful version of this strategy in real life, as it did manage to avert a global nuclear war. Its intellectual residue is something that we still call ‘game theory’.

But Phillips’ own argument doesn’t plumb the depths of the game metaphor in this way. Instead she has recourse to something she calls, inspired by Latour (2004), a ‘collective multiplicity of critical thought’. She also claims that STS hasn’t followed Latour on this point. As a matter of fact, STS has followed Latour almost religiously on this point, which has resulted in a diffusion of critical impact. The field basically amplifies consensus where it exists, showing how it has been maintained, and amplifies dissent where it exists, similarly showing how it has been maintained. In short, STS is simply the empirical shadow of the fields it studies. That’s really all that Latour ever meant by ‘following the actors’.

People forget that this is a man who follows Michel Serres in seeing the parasite as a role model for life (Serres and Latour 1995; cf. Fuller 2000: chap. 7). If STS seems ‘critical’, that’s only an unintended consequence of the many policy issues involving science and technology which remain genuinely unresolved. STS adds nothing to settle the normative standing of these matters. It simply elaborates them and in the process perhaps reminds people of what they might otherwise wish to forget or sideline. It is not a worthless activity, but to accord it ‘critical’ status in any meaningful sense would be to do it too much justice, as Latour (2004) himself realizes.

Have STSers Always Been Cheese-Eating Surrender Monkeys?

Notwithstanding the French accent and the Inspector Clouseau demeanour, Latour’s modus operandi is reminiscent of ordinary language philosophy, that intellectual residue of British imperialism, which in the mid-twentieth century led many intelligent people to claim that the sophisticated English practiced in Oxbridge common rooms cut the world at the joints. Although Ernest Gellner (1959) provided the consummate take-down of the movement—to much fanfare in the media at the time—ordinary language philosophy persisted well into the 1980s, along the way influencing the style of ethnomethodology that filtered into STS. (Cue the corpus of Michael Lynch.)

Ontology was effectively reduced to a reification of the things that the people in the room were talking about and the relations predicated of them. And where the likes of JL Austin and PF Strawson spoke of ‘grammatical usage’, Latour and his followers refer to ‘semiotic network’, largely to avoid the anthropomorphism from which the ordinary language philosophers had suffered—alongside their ethnocentrism. Nevertheless, both the ordinary language folks and Latour think they’re doing an empirically informed metaphysics, even though they’re really just eavesdropping on themselves and the people in whose company they’ve been recently kept. Latour (1992) is the classic expression of STS self-eavesdropping, as our man Bruno meditates on the doorstop, the seatbelt, the key and other mundane technologies with which he can never quite come to terms, which results in his life becoming one big ethnomethodological ‘breaching experiment’.

All of this is a striking retreat from STS’s original commitment to the Edinburgh School’s ‘symmetry principle’, which was presented as an intervention in epistemology rather than ontology. In this guise STS was seen as threatening rather than merely complementing the established normative order because the symmetry principle, notwithstanding its vaunted neutrality, amounted to a kind of judgemental relativism, whereby ‘winning’ in science was downgraded to a contingent achievement, which could have been—and might still be—reversed under different circumstances. This was the spirit in which Shapin and Schaffer (1985) appeared to be such a radical book: It had left the impression that the truth is no more than the binding outcome of a trial of people and things: that is, a ‘game’ in its full and demystified sense.

While I have always found this position problematic as an end in itself, it is nonetheless a great opening move to acquire an alternative normative horizon from that offered by the scientific establishment, since it basically amounts to an ‘equal time’ doctrine in an arena where opponents are too easily mischaracterised and marginalised, if not outright silenced by being ‘consigned to the dustbin of history’. Indeed, as Kuhn had recognized, the harder the science, the clearer the distinction between the discipline and its history.

However, this normative animus began to disappear from STS once Latour’s actor-network theory became the dominant school around the time of the Science Wars in the mid-1990s. It didn’t take long before STS had become supine to the establishment, exemplified by Latour’s (2004) uncritical acceptance of the phrase ‘artificially maintained controversies’, which no doubt meets with the approval of Eric Baker and Naomi Oreskes (Baker and Oreskes 2017). For my own part, when I first read Latour (2004), I was reminded of Donald Rumsfeld’s phrase from the same period, albeit in the context of France’s refusal to support the Iraq War: ‘cheese-eating surrender monkey’.

Nevertheless, Latour’s surrender has stood STS in good stead, rendering it a reliable reflector of all that it observes. But make no mistake: Despite the radical sounding rhetoric of ‘missing masses’ and ‘parliament of things’, STS in the Latourian moment follows closely in the footsteps of ordinary language philosophy, which enthusiastically subscribed to the Wittgensteinian slogan of ‘leaving the world alone’. The difference is that whereas the likes of Austin and Strawson argued that our normal ways of speaking contain many more insights into metaphysics than philosophers had previously recognized, Latour et al. show that taking seriously what appears before our eyes makes the social world much more complicated than sociologists had previously acknowledged. But the lesson is the same in both cases: Carry on treating the world as you find it as ultimate reality—simply be more sensitive to its nuances.

It is worth observing that ordinary language philosophy and actor-network theory, notwithstanding their own idiosyncrasies and pretensions, share a disdain for a kind of philosophy or sociology, respectively, that adopts a ‘second order’ perspective on its subject matter. In other words, they were opposed to what Strawson called ‘revisionary metaphysics’, an omnibus phrase that was designed to cover both German idealism and logical positivism, the two movements that did the most to re-establish the epistemic authority of academics in the modern era. Similarly, Latour’s hostility to a science of sociology in the spirit of Emile Durkheim is captured in the name he chose for his chair at Sciences Po, Gabriel Tarde, the magistrate who moved into academia and challenged Durkheim’s ontologically closed sense of sociology every step of the way. In both cases, the moves are advertised as democratising but in practice they’re parochialising, since those hidden nuances and missing masses are supposedly provided by acts of direct acquaintance.

Cue Sismondo (2017), who as editor of the journal Social Studies of Science operates in a ‘Latour Lite’ mode: that is, all of the method but none of the metaphysics. First, he understands ‘post-truth’ in the narrowest possible context, namely, as proposed by those who gave the phenomenon its name—and negative spin—to make it Oxford Dictionaries’ 2016 word of the year. Of course, that’s in keeping with the Latourian dictum of ‘follow the actors’. But it is also to accept the actors’ categories uncritically, even if it means turning a blind eye to STS’s own role in promoting the epistemic culture responsible for ‘post-truth’, regardless of the normative value that one ultimately places on the word.

Interestingly, Sismondo is attacked on largely the same grounds by someone with whom I normally disagree, namely, Harry Collins (Collins, Evans, Weinel 2017). Collins and I agree that STS naturally lends itself to a post-truth epistemology, a fact that the field avoids at its peril. However, I believe that STS should own post-truth as a feature of the world that our field has helped to bring about—to be sure, not ex nihilo but by creatively deploying social and epistemological constructivism in an increasingly democratised context. In contrast, while Collins concedes that STS methods can be used even by our political enemies, he calls on STS to follow his own example by using its methods to demonstrate that ‘expert knowledge’ makes an empirical difference to the improvement of judgement in a variety of arenas. As for the politically objectionable uses of STS methods, here Collins and I agree that they are worth opposing but an adequate politics requires a different kind of work from STS research.

In response to all this, Sismondo retreats to STS’s official self-understanding as a field immersed in the detailed practices of all that it studies—as opposed to those post-truth charlatans who simply spin words to create confusion. But the distinction is facile and perhaps disingenuous. The clearest manifestation that STS attends to the details of technoscientific practice is the complexity—or, less charitably put, complication—of its own language. The social world comes to be populated by so many entities, properties and relations simply because STS research is largely in the business of naming and classifying things, with an empiricist’s bias towards treating things that appear different as really different. It is this discursive strategy that results in the richer ontology that one typically finds in STS articles, which in turn is supposed to leave the reader with the sense that the STS researcher has a deeper and more careful understanding of what s/he has studied. But in the end, it is just a discursive strategy, not a mathematical proof. There is a serious debate to be had about whether the field’s dedication to detail—‘ontological inventory work’—is truly illuminating or obfuscating. However, it does serve to establish a kind of ‘expertise’ for STS.

Why Science Has Never Had Need for Consensus—But Got It Anyway

My double question to anyone who wishes to claim a ‘scientific consensus’ on anything is on whose authority and on what basis such a statement is made. Even that great defender of science, Karl Popper, regarded scientific facts as no more than conventions, agreed mainly to mark temporary settlements in an ongoing journey. Seen with a rhetorician’s eye, a ‘scientific consensus’ is demanded only when scientific authorities feel that they are under threat in a way that cannot be dismissed by the usual peer review processes. ‘Science’ after all advertises itself as the freest inquiry possible, which suggests a tolerance for many cross-cutting and even contradictory research directions, all compatible with the current evidence and always under review in light of further evidence. And to a large extent, science does demonstrate this spontaneous embrace of pluralism, albeit with the exact options on the table subject to change. To be sure, some options are pursued more vigorously than others at any given moment. Scientometrics can be used to chart the trends, which may make the ‘science watcher’ seem like a stock market analyst. But this is more ‘wisdom of crowds’ stuff than a ‘scientific consensus’, which is meant to sound more authoritative and certainly less transient.

Indeed, invocations of a ‘scientific consensus’ become most insistent on matters which have two characteristics, which are perhaps necessarily intertwined but, in any case, take science outside of its juridical comfort zone of peer review: (1) they are inherently interdisciplinary; (2) they are policy-relevant. Think climate change, evolution, anything to do with health. A ‘scientific consensus’ is invoked on just these matters because they escape the ‘normal science’ terms in which peer review operates. To a defender of the orthodoxy, the dissenters appear to be ‘changing the rules of science’ simply in order to make their case seem more plausible. However, from the standpoint of the dissenter, the orthodoxy is artificially restricting inquiry in cases where reality doesn’t fit its disciplinary template, and so perhaps a change in the rules of science is not so out of order.

Here it is worth observing that defenders of the ‘scientific consensus’ tend to operate on the assumption that to give the dissenters any credence would be tantamount to unleashing mass irrationality in society. Fortified by the fledgling (if not pseudo-) science of ‘memetics’, they believe that an anti-scientific latency lurks in the social unconscious. It is a susceptibility typically fuelled by religious sentiments, which the dissenters threaten to awaken, thereby reversing all that modernity has achieved.

I can’t deny that there are hints of such intent in the ranks of dissenters. One notorious example is the Discovery Institute’s ‘Wedge document’, which projected the erosion of ‘methodological naturalism’ as the ‘thin edge of the wedge’ to return the US to its Christian origins. Nevertheless, the paranoia of the orthodoxy underestimates the ability of modernity—including modern science—to absorb and incorporate the dissenters, and come out stronger for it. The very fact that intelligent design theory has translated creationism into the currency of science by leaving out the Bible entirely from its argumentation strategy should be seen as evidence for this point. And now Darwinists need to try harder to defeat it, which we see in their increasingly sophisticated refutations, which often end up with Darwinists effectively conceding points and simply admitting that they have their own way of making their opponents’ points, without having to invoke an ‘intelligent designer’.

In short, my main objection to the concept of a ‘scientific consensus’ is that it is epistemologically oversold. It is clearly meant to carry more normative force than whatever happens to be the cutting edge of scientific fashion this week. Yet, what is the life expectancy of the theories around which scientists congregate at any given time?  For example, if the latest theory says that the planet is due for climate meltdown within fifty years, what happens if the climate theories themselves tend to go into meltdown after about fifteen years? To be sure, ‘meltdown’ is perhaps too strong a word. The data are likely to remain intact and even be enriched, but their overall significance may be subject to radical change. Moreover, this fact may go largely unnoticed by the general public, as long as the scientists who agreed to the last consensus are also the ones who agree to the next consensus. In that case, they can keep straight their collective story of how and why the change occurred—an orderly transition in the manner of dynastic succession.

What holds this story together—and is the main symptom of epistemic overselling of scientific consensus—is a completely gratuitous appeal to the ‘truth’ or ‘truth-seeking’ (aka ‘veritism’) as somehow underwriting this consensus. Baker and Oreskes’ (2017) argument is propelled by this trope. Yet, interestingly early on even they refer to ‘attempts to build public consensus about facts or values’ (my emphasis). This turn of phrase comports well with the normal constructivist sense of what consensus is. Indeed, there is nothing wrong with trying to align public opinion with certain facts and values, even on the grand scale suggested by the idea of a ‘scientific consensus’. This is the stuff of politics as usual. However, whatever consensus is thereby forged—by whatever means and across whatever range of opinion—has no ‘natural’ legitimacy. Moreover, it neither corresponds to some pre-existent ideal of truth nor is composed of some invariant ‘truth stuff’ (cf. Fuller 1988: chap. 6). It is a social construction, full stop. If the consensus is maintained over time and space, it will not be due to its having been blessed and/or guided by ‘Truth’; rather it will be the result of the usual social processes and associated forms of resource mobilization—that is, a variety of external factors which at crucial moments impinge on the play of any game.

The idea that consensus enjoys some epistemologically more luminous status in science than in other parts of society (where it might be simply dismissed as ‘groupthink’) is an artefact of the routine rewriting of history that scientists do to rally their troops. As Kuhn long ago observed, scientists exaggerate the degree of doctrinal agreement to give forward momentum to an activity that is ultimately held together simply by common patterns of disciplinary acculturation and day-to-day work practices. Nevertheless, Kuhn’s work helped to generate the myth of consensus. Indeed, in my Cambridge days studying with Mary Hesse (circa 1980), the idea that an ultimate consensus on the right representation of reality might serve as a transcendental condition for the possibility of scientific inquiry was highly touted, courtesy of the then fashionable philosopher Jürgen Habermas, who flattered his Anglophone fans by citing Charles Sanders Peirce as his source for the idea. Yet even back then I was of a different mindset.

Under the influence of Foucault, Derrida and social constructivism (which were circulating in more underground fashion), as well as what I had already learned about the history of science (mainly as a student of Loren Graham at Columbia), I deemed the idea of a scientific consensus to reflect a secular ‘god of the gaps’ style of wishful thinking. Indeed I devoted a chapter of my Ph.D. to the ‘elusiveness’ of consensus in science, which was the only part of the thesis that I incorporated in Social Epistemology (Fuller 1988: chap. 9). It is thus very disappointing to see Baker and Oreskes continuing to peddle Habermas’ brand of consensus mythology, even though for many of us it had fallen stillborn from the presses more than three decades ago.

A Gaming Science Is a Free Science

Baker and Oreskes (2017) are correct to pick up on the analogy drawn by David Bloor between social constructivism’s scepticism with regard to transcendent conceptions of truth and value and the scepticism that the Austrian school of economics (and most economists generally) shows to the idea of a ‘just price’, understood as some normative ideal that real prices should be aiming toward. Indeed, there is more than an analogy here. Alfred Schutz, teacher of Peter Berger and Thomas Luckmann of The Social Construction of Reality fame, was himself a member of the Mises Circle in Vienna, having been trained by Mises in the law faculty. Market transactions provided the original template for the idea of ‘social construction’, a point that is already clear in Adam Smith.

However, in criticizing Bloor’s analogy, Baker and Oreskes miss a trick: When the Austrians and other economists talk about the normative standing of real prices, their understanding of the market is somewhat idealized; hence, one needs a phrase like ‘free market’ to capture it. This point is worth bearing in mind because it amounts to a competing normative agenda to the one that Baker and Oreskes are promoting. With the slow ascendancy of neo-liberalism over the second half of the twentieth century, that normative agenda became clear—namely, to make markets free so that real prices can prevail.

Here one needs to imagine that in such a ‘free market’ there is a direct correspondence between increasing the number of suppliers in the market and the greater degree of freedom afforded to buyers, as that not only drives the price down but also forces buyers to refine their choice. This is the educative function performed by markets, an integral social innovation in terms of the Enlightenment mission advanced by Smith, Condorcet and others in the eighteenth century (Rothschild 2002). Markets were thus promoted as efficient mechanisms that encourage learning, with the ‘hand’ of the ‘invisible hand’ best understood as that of an instructor. In this context, ‘real prices’ are simply the actual empirical outcomes of markets under ‘free’ conditions. Contra Baker and Oreskes, they don’t correspond to some a priori transcendental realm of ‘just prices’.

However, markets are not ‘free’ in the requisite sense as long as the state strategically blocks certain spontaneous transactions, say, by placing tariffs on suppliers other than the officially licensed ones or by allowing a subset of market agents to organize in ways that enable them to charge tariffs to outsiders who want access. In other words, the free market is not simply about lower taxes and fewer regulations. It is also about removing subsidies and preventing cartels. It is worth recalling that Adam Smith wrote The Wealth of Nations as an attack on ‘mercantilism’, an economic system not unlike the ‘socialist’ ones that neo-liberalism has tried to overturn with its appeal to the ‘free market’. In fact, one of the early neo-liberals (aka ‘ordo-liberals’), Alexander Rüstow, coined the phrase ‘liberal interventionism’ in the 1930s for the strong role that he saw for the state in freeing the marketplace, say, by breaking up state-protected monopolies (Jackson 2009).

Capitalists defend private ownership only as part of the commodification of capital, which in turn, allows trade to occur. Capitalists are not committed to an especially land-oriented approach to private property, as in feudalism, which through, say, inheritance laws restricts the flow of capital in order to stabilise the social order. To be sure, capitalism requires that traders know who owns what at any given time, which in turn supports clear ownership signals. However, capitalism flourishes only if the traders are inclined to part with what they already own to acquire something else. After all, wealth cannot grow if capital doesn’t circulate. The state thus serves capitalism by removing the barriers that lead people to accept too easily their current status as an adaptive response to situations that they regard as unchangeable. Thus, liberalism, the movement most closely aligned with the emerging capitalist sensibility, was originally called ‘radical’—from the Latin for ‘root’—as it promised to organize society according to humanity’s fundamental nature, the full expression of which was impeded by existing regimes, which failed to allow everyone what by the twentieth century would be called ‘equal opportunity’ in life (Halevy 1928).

I offer this more rounded picture of the normative agenda of free market thinkers because Baker and Oreskes engage in a rhetorical sleight of hand associated with the capitalists’ original foes, the mercantilists. It involves presuming that the public interest is best served by state authorised producers (of whatever). Indeed, when one speaks of the early modern period in Europe as the ‘Age of Absolutism’, this elision of the state and the public is an important part of what is meant. True to its Latin roots, the ‘state’ is the anchor of stability, the stationary frame of reference through which everything else is defined. Here one immediately thinks of Newton, but metaphysically more relevant was Hobbes whose absolutist conception of the state aimed to incarnate the Abrahamic deity in human form, the literal body of which is the body politic.

Setting aside the theology, mercantilism in practice aimed to reinvent and rationalize the feudal order for the emerging modern age, one in which ‘industry’ was increasingly understood as not a means to an end but an end in itself—specifically, not simply a means to extract the fruits of nature but an expression of human flourishing. Thus, political boundaries on maps started to be read as the skins of superorganisms, which by the nineteenth century came to be known as ‘nation-states’. In that case, the ruler’s job was not simply to keep the peace over what had been largely self-managed tracts of land, but rather to ‘organize’ them so that they functioned as a single productive unit, what we now call the ‘economy’, whose first theorization was as ‘physiocracy’. The original mercantilist policy involved royal licenses that assigned exclusive rights to a ‘domain’ understood in a sense that was not restricted to tracts of land, but extended to wealth production streams in general. To be sure, over time these rights were attenuated into privileges and subsidies, which allowed for some competition but typically on an unequal basis.

In contrast, capitalism’s ‘liberal’ sensibility was about repurposing the state’s power to prevent the rise of new ‘path dependencies’ in the form of, say, a monopoly in trade based on an original royal license renewed in perpetuity, which would only serve to reduce the opportunities of successive generations. It was an explicitly anti-feudal policy. The final frontier to this policy sensibility is academia, which has long been acknowledged to be structured in terms of what Robert Merton called the principle of ‘cumulative advantage’, the sources of which are manifold and, to a large extent, mutually reinforcing. To list just a few: (1) state licenses issued to knowledge producers, starting with the Charter of the Royal Society of London, which provided a perpetually protected space for a self-organizing community to do as they will within originally agreed constraints; (2) Kuhn-style paradigm-driven normal science, which yields to a successor paradigm only out of internal collapse, not external competition; (3) the anchoring effect of early academic training on subsequent career advancement, ranging from jobs to grants; (4) the evaluation of academic work in terms of a peer review system whose remit extends beyond catching errors to judging relevance to preferred research agendas; (5) the division of knowledge into ‘fields’ and ‘domains’, which supports a florid cartographic discourse of ‘boundary work’ and ‘boundary maintenance’.

The list could go on, but the point is clear to anyone with eyes to see: Even in these neo-liberal times, academia continues to present its opposition to neo-liberalism in the sort of neo-feudal terms that would have pleased a mercantilist. Lineage is everything, whatever the source of ancestral entitlement. Merton’s own attitude towards academia’s multiple manifestations of ‘cumulative advantage’ seemed to be one of ambivalence, though as a sociologist he probably wasn’t sufficiently critical of the pseudo-liberal spin put on cumulative advantage as the expression of the knowledge system’s ‘invisible hand’ at work—which seems to be Baker and Oreskes’ default position as defenders of the scientific status quo. However, their own Harvard colleague, Alex Csiszar (2017), has recently shown that Merton recognized that the introduction of scientometrics in the 1960s—in the form of the Science Citation Index—made academia susceptible to a tendency that he had already identified in bureaucracies, ‘goal displacement’, whereby once a qualitative goal is operationalized in terms of a quantitative indicator, there is an incentive to work toward the indicator, regardless of its actual significance for achieving the original goal. Thus, the cumulative effect is that high citation counts become surrogates for ‘truth’ or some other indicator-transcendent goal. In this real sense, what is at best the wisdom of the scientific crowd is routinely mistaken for an epistemically luminous scientific consensus.

As I pointed out in Fuller (2017), which initiated this recent discussion of ‘science as game’, a great virtue of the game idea is its focus on the reversibility of fortunes, as each match matters, not only to the objective standing of the rival teams but also to their subjective sense of momentum. Yet, from their remarks about intelligent design theory, Baker and Oreskes appear to believe that the science game ends sooner than it really does: After one or even a series of losses, a team should simply pack it in and declare defeat. Here it is worth recalling that the existence of atoms and the relational character of space-time—two theses associated with Einstein’s revolution in physics—were controversial if not deemed defunct for most of the nineteenth century, notwithstanding the problems that were acknowledged to exist in fully redeeming the promises of the Newtonian paradigm. Indeed, for much of his career, Ernst Mach was seen as a crank who focussed too much on the lost futures of past science, yet after the revolutions in relativity and quantum mechanics his reputation flipped and he became known for his prescience. Thus, the Vienna Circle that spawned the logical positivists was named in Mach’s honour.

Similarly, intelligent design may well be one of those ‘controversial if not defunct’ views that will be integral to the next revolution in biology, since even biologists whom Baker and Oreskes probably respect admit that there are serious explanatory gaps in the Neo-Darwinian synthesis.[1] That intelligent design advocates have improved the scientific character of their arguments from their creationist origins—which I am happy to admit—is not something for the movement’s opponents to begrudge. Rather, it shows that they learn from their mistakes, as any good team does when faced with a string of losses. Thus, one should expect an improvement in their performance. Admittedly, these matters become complicated in the US context, since the Constitution’s separation of church and state has been interpreted in recent times to imply the prohibition of any teaching material that is motivated by specifically religious interests, as if the Founding Fathers were keen on institutionalising the genetic fallacy! Nevertheless, this blinkered interpretation has enabled the likes of Baker and Oreskes to continue arguing with earlier versions of ‘intelligent design creationism’, very much like generals whose expertise lies in having fought the previous war. But luckily, an increasingly informed public is not so easily fooled by such epistemically rearguard actions.

References

Baker, Erik and Naomi Oreskes. “It’s No Game: Post-Truth and the Obligations of Science Studies.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 1-10.

Collins, Harry, Robert Evans, and Martin Weinel. “STS as Science or Politics?” Social Studies of Science 47, no. 4 (2017): 580–586.

Csiszar, Alex. “From the Bureaucratic Virtuoso to Scientific Misconduct: Robert K. Merton, Eugene Garfield, and Goal Displacement in Science.” Paper delivered to the annual meeting of the History of Science Society. Toronto: 9-12 November 2017.

Fuller, Steve. Social Epistemology. Bloomington IN: Indiana University Press, 1988.

Fuller, Steve. Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press, 2000.

Fuller, Steve. “Is STS All Talk and No Walk?” EASST Review 36 no. 1 (2017): https://easst.net/article/is-sts-all-talk-and-no-walk/.

Gellner, Ernest. Words and Things. London: Routledge, 1959.

Halevy, Elie. The Growth of Philosophic Radicalism. London: Faber and Faber, 1928.

Jackson, Ben. “At the Origins of Neo-Liberalism: The Free Economy and the Strong State, 1930-47.” Historical Journal 53, no. 1 (2010): 129-51.

Latour, Bruno. “Where are the Missing Masses? The Sociology of a Few Mundane Artefacts.” In Shaping Technology/Building Society, edited by Wiebe E. Bijker and John Law, 225-258. Cambridge, MA: MIT Press, 1992.

Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30, no. 2 (2004): 225–248.

Phillips, Amanda. “Playing the Game in a Post-Truth Era.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 54-56.

Rothschild, Emma. Economic Sentiments. Cambridge MA: Harvard University Press, 2002.

Serres, Michel, and Bruno Latour. Conversations on Science, Culture, and Time. Ann Arbor: University of Michigan Press, 1995.

Shapin, Steven, and Simon Schaffer. Leviathan and the Air-Pump. Princeton: Princeton University Press, 1985.

Sismondo, Sergio. “Not a Very Slippery Slope: A Reply to Fuller.” EASST Review 36, no. 2 (2017): https://easst.net/article/not-a-very-slippery-slope-a-reply-to-fuller/.

[1] Surprisingly for people who claim to be historians of science, Baker and Oreskes appear to have fallen for the canard that only Creationists mention Darwin’s name when referring to contemporary evolutionary theory. In fact, it is common practice among historians and philosophers of science to invoke Darwin to refer to his specifically purposeless conception of evolution, which remains the default metaphysical position of contemporary biologists—albeit one maintained with increasing conceptual and empirical difficulty. Here it is worth observing that such leading lights of the Discovery Institute as Stephen Meyer and Paul Nelson were trained in the history and philosophy of science, as was I.

Author Information: Erik Baker and Naomi Oreskes, Harvard University, ebaker@g.harvard.edu, oreskes@fas.harvard.edu

Baker, Erik and Naomi Oreskes. “It’s No Game: Post-Truth and the Obligations of Science Studies.”[1] Social Epistemology Review and Reply Collective 6, no. 8 (2017): 1-10.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3FB

Image credit: Walt Stoneburner, via flickr

In late April, 2017, the voice of a once-eminent institution of American democracy issued a public statement that embodied the evacuation of norms of truth and mutual understanding from American political discourse that since the 2016 presidential election has come to be known as “post-truth.” We aren’t talking about Donald Trump, whose habitual disregard of factual knowledge is troubling, to be sure, and whose advisor, Kellyanne Conway, made “alternative facts” part of the lexicon. Rather, we’re referring to the justification issued by New York Times opinion page editor James Bennet in defense of his decision to hire columnist Bret Stephens, a self-styled “climate agnostic” who has spread the talking points of the fossil fuel industry-funded campaign to cast doubt on the scientific consensus on climate change and the integrity of climate scientists.[2] The notion of truth made no appearance in Bennet’s statement. “If all of our columnists and all of our contributors and all of our editorials agreed all the time,” he explained, “we wouldn’t be promoting the free exchange of ideas, and we wouldn’t be serving our readers very well.”[3] The intellectual merits of Stephens’ position are evidently not the point. What counts is only the ability to grease the gears of the “free exchange of ideas.”

Bennet’s defense exemplifies the ideology of the “marketplace of ideas,” particularly in its recent, neoliberal incarnation. Since the 1970s, it has become commonplace throughout much of Europe and America to evince suspicion of attempts to build public consensus about facts or values, regardless of motivation, and to maintain that the role of public-sphere institutions—including newspapers and universities—is simply to place as many private opinions as possible into competition (“free exchange”) with one another.[4] If it is meaningful to talk about a “post-truth” moment, this ideological development is surely among its salient facets. After all, “truth” has not become any more or less problematic as an evaluative concept in private life, with its countless everyday claims about the world. Only public truth claims, especially those with potential to form a basis for collective action, now seem newly troublesome. To the extent that the rise of “post-truth” holds out lessons for science studies, it is not because the discipline has singlehandedly swung a wrecking ball through conventional epistemic wisdom (as some practitioners would perhaps like to imagine[5]), but because the broader rise of marketplace-of-ideas thinking has infected even some of its most subversive-minded work.

Science as Game

In this commentary, we address and critique a concept commonly employed in theoretical science studies that is relevant to the contemporary situation: science as game. While we appreciate both the theoretical and empirical considerations that gave rise to this framework, we suggest that characterizing science as a game is epistemically and politically problematic. Like the notion of a broader marketplace of ideas, it denies the public character of factual knowledge about a commonly accessible world. More importantly, it trivializes the significance of the attempt to obtain information about that world that is as right as possible at a given place and time, and can be used to address and redress significant social issues. The result is the worst of both worlds, permitting neither criticism of scientific claims with any real teeth, nor the possibility of collective action built on public knowledge.[6] To break this stalemate, science studies must become more comfortable using concepts like truth, facts, and reality outside of the scare quotes to which they are currently relegated, and accepting that the evaluation of knowledge claims must necessarily entail normative judgments.[7]

Philosophical talk of “games” leads directly to thoughts of Wittgenstein, and to the scholar most responsible for introducing Wittgenstein to science studies, David Bloor. While we have great respect for Bloor’s work, we suggest that there are uncomfortable similarities between the concept of science as a game in science studies and the neoliberal worldview. In his 1997 Wittgenstein, Rules and Institutions, Bloor argues for an analogy between his interpretation of the later Wittgenstein’s theory of meaning (central to Bloor’s influential writing on science) and the theory of prices of the neoliberal pioneer Ludwig von Mises. “The notion of the ‘real meaning’ of a concept or a sign deserves the same scorn as economists reserve for the outdated and unscientific notion of the ‘real’ or ‘just’ price of a commodity,” Bloor writes. “The only real price is the price paid in the course of real transactions as they proceed von Fall zu Fall. There is no standard outside these transactions.”[8] This analogy is the core of the marketplace of ideas concept, as it would later be developed by followers of von Mises, particularly Friedrich von Hayek. Just as there is no external standard of value in the world of commodities, there is no external standard of truth, such as conformity to an empirically accessible reality, in the world of science.[9] It is “scientism” (a term that von Hayek popularized) to invoke support for scientific knowledge claims outside of the transactions of the marketplace of ideas. Just as, for von Hayek and von Mises, the notion of economic justice falls in the face of the wisdom of the marketplace, so too does the notion of truth, at least as a regulative ideal to which any individual or finite group of people can sensibly aspire.

Contra Bloor (and von Hayek), we believe that it is imperative to think outside the sphere of market-like interactions in assessing both commodity prices and conclusions about scientific concepts. The prices of everything from healthcare and housing to food, education and even labor are hot-button political and social issues precisely because they affect people’s lives, sometimes dramatically, and because markets do not, in fact, always value these goods and services appropriately. Markets can be distorted and manipulated. People may lack the information necessary to judge value (something Adam Smith himself worried about). Prices may be inflated (or deflated) for reasons that bear little relation to what people value. And, most obviously in the case of environmental issues, the true cost of economic activity may not be reflected in market prices, because pollution, health costs, and other adverse effects are externalized. There is a reason why Nicholas Stern, former chief economist of the World Bank, has called climate change the “greatest market failure ever seen.”[10] Markets can and do fail. Prices do not always reflect value. Perhaps most important, markets refuse justice and fairness as categories of analysis. As Thomas Piketty has recently emphasized, capitalism typically leads to great inequalities of wealth, and this can only be critiqued by invoking normative standards beyond the values of the marketplace.[11]

External normative standards are indispensable in a world where the outcome of the interactions within scientific communities matter immensely to people outside those communities. This requirement functions both in the defense of science, where appropriate, and the critique of it.[12] The history of scientific racism and sexism, for example, speaks to the inappropriateness of public deference to all scientific claims, and the necessity of principled critique.[13] Yet, the indispensability of scientific knowledge to political action in contemporary societies also demands the development of standards that justify public acceptance of certain scientific claims as definitive enough to ground collective projects, such as the existence of a community-wide consensus or multiple independent lines of evidence for the same conclusion.[14] (Indeed, we regard the suggestion of standards for the organization of scientific communities by Helen Longino as one of the most important contributions of the field of social epistemology.[15])

Although we reject any general equivalency between markets and scientific communities, we agree they are indeed alike in one key way: they both need regulation. As Jürgen Habermas once wrote in critique of Wittgenstein, “language games only work because they presuppose idealizations that transcend any particular language game; as a necessary condition of possibly reaching understanding, these idealizations give rise to the perspective of an agreement that is open to criticism on the basis of validity claims.”[16] Collective problem-solving requires that these sorts of external standards be brought to bear. The example of climate change illustrates our disagreement with Bloor (and von Mises) on both counts in one fell swoop. Though neither of us is a working economist, we nonetheless maintain that it is rational—on higher-order grounds external to the social “game” of the particular disciplines—for governments to impose a price on carbon (i.e., a carbon tax or emissions trading system), in part because we accept that the natural science consensus on climate change accurately describes the physical world we inhabit, and the social scientific consensus that a carbon pricing system could help remedy the market failure that is climate change.[17]

Quietism and Critique

We don’t want to unfairly single out Bloor. The science-as-game view—and its uncomfortable resonances with marketplace-of-ideas ideology—crops up in the work of many prominent science studies scholars, even some who have quarreled publicly with Bloor and the strong programme. Bruno Latour, for example, one of Bloor’s sharpest critics, draws Hayekian conclusions from different methodological premises. While Bloor invokes social forces to explain the outcome of scientific games,[18] Latour rejects the very idea of social forces. Rather, he claims, as Margaret Thatcher famously insisted, that “there is no such thing as ‘the social’ or ‘a society.’”[19] But whereas Thatcher at least acknowledged the existence of family, for Latour there are only monadic actants, competing “agonistically” with each other until order spontaneously emerges from the chaos, just as in a game of Go (an illustration of which graces the cover of his seminal first book Laboratory Life, with Steve Woolgar).[20] Social structures, evaluative norms, even “publics,” in his more recent work, are all chimeras, devoid of real meaning until this networked process has come to fulfillment.

If that view might seem to make collective action for wide-reaching social change difficult to conceive, Latour agrees: “Seen as networks, … the modern world … permits scarcely anything more than small extensions of practices, slight accelerations in the circulation of knowledge, a tiny extension of societies, miniscule increases in the number of actors, small modifications of old beliefs.”[21] Rather than planning political projects with any real vision or bite—or concluding that a particular status quo might be problematic, much less illegitimate—one should simply be patient, play the never-ending networked game, and see what happens.[22] But a choice for quietism is a choice nonetheless—“we are condemned to act,” as Immanuel Wallerstein once put it—one that supports and sustains the status quo.[23] Moreover, a sense of humility or fallibility by no means requires us to exaggerate the inevitability of the status quo or yield to the power of inertia.[24]

Latour has at least come clean about his rejection of any aspiration to “critique.”[25] But others who haven’t thrown in the towel have still been led into a similar morass by their commitment to a marketlike or playful view of science. The problem is that, if normative judgments external to the game are illegitimate, analysts are barred from making any arguments for or against particular views or practices. Only criticism of their premature exclusion from the marketplace is permitted. This standpoint interprets Bloor’s famous call for symmetry not so much as a methodological principle in intellectual analysis, but as a demand for the abandonment of all forms of epistemic and normative judgment, leading to the bizarre sight of scholars championing a widely-criticized “scientific” or intellectual cause while coyly refusing to endorse its conclusions themselves. Thus we find Bruno Latour praising the anti-environmentalist Breakthrough Institute while maintaining that he “disagrees with them all the time”; Sheila Jasanoff defending the use of made-to-order “litigation science” in courtrooms on the grounds of a scrupulous “impartiality” that rejects scholarly assessments of intellectual integrity or empirical adequacy in favor of letting “the parties themselves do more of the work of demarcation”; and Steve Fuller defending creationists’ insistence that their views should be taught in American science classrooms while remaining ostensibly “neutral” on the scientific question at issue.[26]

Fuller’s defense of creationism, in particular, shows the way that calls for “impartiality” are often in reality de facto side-taking: Fuller takes rhetorical tropes directly out of the creationist playbook, including his tendentious and anachronistic labelling of modern evolutionary biologists as “Darwinists.” Moreover, despite his explicit endorsement of the game view of science, Fuller refuses to accept defeat for the intelligent design project, either within the putative game of science, or in the American court system, which has repeatedly found the teaching of creationism to be unconstitutional. Moreover, Fuller’s insistence that creationism somehow has still not received a “fair run for its money” reveals that even he cannot avoid importing external standards (in this case fairness) to evaluate scientific results! After all, who ever said that science was fair?

In short, science studies scholars’ ascetic refusal of standards of good and bad science in favor of emergent judgments immanent to the “games” they analyze has vitiated critical analysis in favor of a weakened proceduralism that has struggled to resist the recent advance of neoliberal and conservative causes in the sciences. It has led to a situation where creationism is defended as an equally legitimate form of science, where the claims of think tanks that promulgate disinformation are equated with the claims of academic scientific research institutions, and where corporations that have knowingly suppressed information pertinent to public health and safety are viewed as morally and epistemically equivalent to the plaintiffs who are fighting them. As for Fuller, leaving the question of standards unexamined and/or implicit, and relying instead on the rhetoric of the “game,” enables him to avoid the challenge of defending a demonstrably indefensible position on its actual merits.

Where the Chips Fall

In diverse cases, key evaluative terms—legitimacy, disinformation, precedent, evidence, adequacy, reproducibility, natural (vis-à-vis supernatural), and yes, truth—have been so relativized and drained of meaning that it starts to seem like a category error even to attempt to refute equivalency claims. One might argue that this is alright: as scholars, we let the chips fall where they may. The problem, however, is that they do not fall evenly. The winner of this particular “game” is almost always status quo power: the conservative billionaires, fossil fuel companies, lead and benzene and tobacco manufacturers and others who have bankrolled think tanks and “litigation science” at the cost of biodiversity, human health and even human lives.[27] Scientists paid by the lead industry to defend their toxic product are not just innocently trying to have their day in court; they are trying to evade legal responsibility for the damage done by their products. The fossil fuel industry is not trying to advance our understanding of the climate system; they are trying to block political action that would decrease societal dependence on their products. But there is no way to make—much less defend—such claims without a robust concept of evidence.

Conversely, the communities, already victimized by decades of poverty and racial discrimination, who rely on reliable science in their fight for their children’s safety are not unjustly trying to short-circuit a process of “demarcation” better left to the adversarial court system.[28] It is a sad irony that STS, which often sees itself as championing the subaltern, has now in many cases become the intellectual defender of those who would crush the aspirations of ordinary people.

Abandoning the game view of science won’t require science studies scholars to reinvent the wheel, much less re-embrace Comtean triumphalism. On the contrary, there are a wide variety of perspectives from the history of epistemology, philosophy of science, and feminist, anti-racist, and anti-colonialist theory that permit critique that can be both epistemic and moral. One obvious source, championed by intellectual historians such as James Kloppenberg and philosophers such as Hilary Putnam and Jürgen Habermas, is the early American pragmatism of John Dewey and William James, a politically constructive alternative to both naïve foundationalism and the textualist rejection of the concept of truth found in the work of more recent “neo-pragmatists” like Richard Rorty.[29] Nancy Cartwright, Thomas Uebel, and John O’Neill have similarly reminded us of the intellectual and political potential in the (widely misinterpreted, when not ignored) “left Vienna Circle” philosophy of Otto Neurath.[30]

In a slightly different vein, Charles Mills, inspired in part by the social science of W.E.B. Du Bois, has insisted on the importance of a “veritistic” epistemological stance in characterizing the ignorance produced by white supremacy.[31] Alison Wylie has emphasized the extent to which many feminist critics of science “are by no means prepared to concede that their accounts are just equal but different alternatives to those they challenge,” but in fact often claim that “research informed by a feminist angle of vision … is simply better in quite conventional terms.”[32] Steven Epstein’s work on AIDS activism demonstrates that social movements issuing dramatic challenges to biomedical and scientific establishments can make good use of unabashed claims to genuine knowledge and “lay” expertise. Epstein’s work also serves as a reminder that moral neutrality is not the only, much less the best, route to rigorous scholarship.[33] Science studies scholars could also benefit from looking outside their immediate disciplinary surroundings to debates about poststructuralism in the analysis of (post)colonialism initiated by scholars like Benita Parry and Masao Miyoshi, as well as the emerging literature in philosophy and sociology about the relationship of the work of Michel Foucault to neoliberalism.[34]

For our own part, we have been critically exploring the implications of the institutional and financial organization of science during the Cold War and the recent neoliberal intensification of privatization in American society.[35] We think that this work suggests a further descriptive inadequacy in the science-as-game view, in addition to the normative inadequacies we have already described. In particular, it drives home the extent to which the structure of science is not constant. From the longitudinal perspective available to history, as opposed to a sociological or ethnographic snapshot, it is possible to resolve the powerful societal forces—government, industry, and so on—driving changes in the way science operates, and to understand the way those scientific changes relate to broader political-economic imperatives and transformations. Rather than throwing up one’s hands and insisting that incommensurable particularity is all there is, science studies scholars might instead take a theoretical position that will allow us to characterize and respond to the dramatic transformations of academic work that are happening right now, and from which the humanities are by no means exempt.[36]

Academics must not treat themselves as isolated from broader patterns of social change, or worse, deny that change is a meaningful concept outside of the domain of microcosmic fluctuations in social arrangements. Powerful reactionary forces can reshape society and science (and reshape society through science) in accordance with their values; progressive movements in and outside of science have the potential to do the same. We are concerned that the “game” view of science traps us instead inside a Parmenidean field of homogenous particularity, an endless succession of games that may be full of enough sound and fury to interest scholars but still signify nothing overall.

Far from rendering science studies Whiggish or simply otiose, we believe that a willingness to discriminate, outside of scare quotes, between knowledge and ignorance or truth and falsity is vital for a scholarly agenda that respects one of the insights that scholars like Jasanoff have repeatedly and compellingly championed: in contemporary democratic polities, science matters. In a world where physicists state that genetic inferiority is the cause of poverty among black Americans, where lead paint manufacturers insist that their product does no harm to infants and children, and where actresses encourage parents not to vaccinate their children against infectious diseases, an inability to discriminate between information and disinformation—between sense and nonsense (as the logical positivists so memorably put it)—is not simply an intellectual failure. It is a political and moral failure as well.

The Brundtland Commission famously defined “sustainable development” as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” Like the approach we are advocating here, this definition treats the empirical and the normative as enfolded in one another. It sees them not as constructions that emerge stochastically in the fullness of time, but as questions that urgently demand robust answers in the present. One reason science matters so much in the present moment is its role in determining which activities are sustainable, and which are not. But if scientists are to make such judgments, then we, as science studies scholars, must be able to judge the scientists—positively as well as critically. Lives are at stake. We are not here merely to stand on the sidelines insisting that all we can do is ensure that all voices are heard, no matter how silly, stupid, or nefarious.

[1] We would like to thank Robert Proctor, Mott Greene, and Karim Bschir for reading drafts and providing helpful feedback on this piece.

[2] For an analysis of Stephens’ column, see Robert Proctor and Steve Lyons, “Soft Climate Denial at The New York Times,” Scientific American, May 8, 2017; for the history of the campaign to cast doubt on climate change science, see Naomi Oreskes and Erik M. Conway, Merchants of Doubt (Bloomsbury Press, 2010); for information on the funding of this campaign, see in particular Robert J. Brulle, “Institutionalizing delay: foundation funding and the creation of U.S. climate change counter-movement organizations,” Climatic Change 122 (4), 681–694, 2013.

[3] Accessible at https://twitter.com/ErikWemple/status/858737313601507329.

[4] For the recency of the concept, see Stanley Ingber, “The Marketplace of Ideas: A Legitimizing Myth,” Duke Law Journal, February 1984. The significance of the epistemological valorization of the marketplace of ideas to the broader neoliberal project has been increasingly well-understood by historians of neoliberalism; it is an emphasis, for instance, of the approach taken by the contributors to Philip Mirowski and Dieter Plehwe, eds., The Road from Mont Pèlerin (Harvard, 2009), especially Mirowski’s “Postface.”

[5] Bruno Latour, “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern,” Critical Inquiry vol. 30 (Winter 2004).

[6] See for instance John Ziman, Public Knowledge: An Essay Concerning the Social Dimension of Science (Cambridge University Press, 1968); as well as the many more recent perspectives we hold up below as exemplary of alternative approaches.

[7] Naomi Oreskes and Erik M. Conway. “Perspectives on global warming: A Book Symposium with Steven Yearley, David Mercer, and Andy Pitman.” Metascience vol. 21, pp. 531-559, 2012.

[8] David Bloor, Wittgenstein, Rules and Institutions (Routledge, 1997), pp. 76-77.

[9] As suggested by Helen Longino in The Fate of Knowledge (Princeton University Press, 2002) as an alternative to the more vexed notion of “correspondence,” fraught with metaphysical difficulties Longino hopes to skirt. In Austrian economics, this rejection of the search for empirical, factual knowledge initially took the form, in von Mises’ thought, of the ostensibly purely deductive reasoning he called “praxeology,” which was supposed to analytically uncover the immanent principles governing the economic game. Von Hayek went further, arguing that economics at its most rigorous merely theoretically explicates the limits of positive knowledge about empirical social realities. See, for instance, Friedrich von Hayek, “On Coping with Ignorance,” Ludwig von Mises Lecture, 1978.

[10] Nicholas H. Stern, The Economics of Climate Change: The Stern Review (Cambridge University Press, 2007).

[11] Thomas Piketty, Capital in the Twenty-First Century (Harvard/Belknap, 2013). In addition to critiquing market outcomes, philosophers have also invoked concepts of justice and fairness to challenge the extension of markets to new domains; see for example Michael Sandel, What Money Can’t Buy: The Moral Limits of Markets (Farrar, Straus, and Giroux, 2013) and Harvey Cox, The Market as God (Harvard University Press, 2016). This is also a theme in the Papal Encyclical on Climate Change and Inequality, Laudato Si. https://laudatosi.com/watch

[12] For more on this point, see Naomi Oreskes, “Systematicity is Necessary but Not Sufficient: On the Problem of Facsimile Science,” in press, Synthese.

[13] See among others Helen Longino, Science as Social Knowledge (Princeton University Press, 1990); Londa Schiebinger, Has Feminism Changed Science? (Harvard University Press, 1999); Sandra Harding, Science and Social Inequality: Feminist and Postcolonial Issues (University of Illinois Press, 2006); Donna Haraway, Primate Visions: Gender, Race, and Nature in the World of Modern Science (Routledge, 1989); Evelynn Hammonds and Rebecca Herzig, The Nature of Difference: Sciences of Race in the United States from Jefferson to Genomics (MIT Press, 2008).

[14] Naomi Oreskes, “Trust in Science?” Tanner Lecture on Human Values, Princeton University, November 30, 2016; Naomi Oreskes, “The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong?” in Joseph F. C. DiMento and Pamela Doughman, eds., Climate Change: What It Means for Us, Our Children, and Our Grandchildren (MIT Press, 2007), pp. 65-99.

[15] Helen Longino, Science as Social Knowledge (Princeton University Press, 1990), and The Fate of Knowledge (Princeton University Press, 2002).

[16] Jürgen Habermas, The Philosophical Discourse of Modernity (MIT Press, 1984), p. 199.

[17] See, for instance, Naomi Oreskes, “Without government, the market will not solve climate change: Why a meaningful carbon tax may be our only hope,” Scientific American (December 22, 2015), Naomi Oreskes and Jeremy Jones, “Want to protect the climate? Time for carbon pricing,” Boston Globe (May 3, 2017).

[18] Along with a purportedly empirical component that, as Latour has compellingly argued, is “canceled out” of the final analysis because of its common presence to both parties in a dispute. See Bruno Latour, “For Bloor and Beyond: a Reply to David Bloor’s Anti-Latour,” Studies in History and Philosophy of Science, vol. 30 (1), pp. 113-129, March 1998.

[19] Bruno Latour, Reassembling the Social: An Introduction to Actor-Network Theory (Oxford University Press, 2007), p. 5; this theme is an emphasis of his entire oeuvre. On Thatcher, see http://briandeer.com/social/thatcher-society.htm and James Meek, Private Island (Verso, 2014).

[20] Bruno Latour and Steve Woolgar, Laboratory Life: The Construction of Scientific Facts (Routledge, 1979/1986); Bruno Latour, Science in Action (Harvard University Press, 1987). In Laboratory Life this emergence of order from chaos is explicitly analyzed as the outcome of a kind of free market in scientific “credit.” Spontaneous order is one of the foundational themes of Hayekian thought, and the game of Go is an often-employed analogy there as well. See, for instance, Peter Boettke, “The Theory of Spontaneous Order and Cultural Evolution in the Social Theory of F.A. Hayek,” Cultural Dynamics, vol. 3 (1), pp. 61-83, 1990; Gustav von Hertzen, The Spirit of the Game (CE Fritzes AB, 1993), especially chapter 4.

[21] Bruno Latour, We Have Never Been Modern (Harvard University Press, 1993), pp. 47-48; for his revision of the notion of the public, see for example Latour’s Politics of Nature (Harvard University Press, 2004). For a more in-depth discussion of Latour vis-à-vis neoliberalism, see Philip Mirowski, “What Is Science Critique? Part 1: Lessig, Latour,” keynote address to Workshop on the Changing Political Economy of Research and Innovation, UCSD, March 2015.

[22] Our criticism here is not merely hypothetical. Latour’s long-time collaborator Michel Callon and the legal scholar David S. Caudill, for example, have both used Latourian actor-network theory to argue that critics of the privatization of science such as Philip Mirowski are mistaken and analysts should embrace, or at least concede the inevitability of, “hybrid” science that responds strongly to commercial interests. See Michel Callon, “From Science as an Economic Activity to Socioeconomics of Scientific Research,” in Philip Mirowski and Esther-Mirjam Sent, eds. Science Bought and Sold (University of Chicago Press, 2002); and David S. Caudill, “Law, Science, and the Economy: One Domain?” UC Irvine Law Review vol. 5 (393), pp. 393-412, 2015.

[23] Immanuel Wallerstein, The Essential Wallerstein (The New Press, 2000), p. 432.

[24] Naomi Oreskes, “On the ‘reality’ and reality of anthropogenic climate change,” Climatic Change vol. 119, pp. 559-560, 2013, especially p. 560 n. 4. Many philosophers have made this point. Hilary Putnam, for example, has argued that fallibilism actually demands a critical attitude, one that seeks to modify beliefs for which there is sufficient evidence to believe that they are mistaken, while also remaining willing to make genuine knowledge claims on the basis of admittedly less-than-perfect evidence. See his Realism with a Human Face (Harvard University Press, 1990), and Pragmatism: An Open Question (Oxford, 1995) in particular.

[25] Bruno Latour, “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern,” Critical Inquiry vol. 30 (Winter 2004).

[26] “Bruno Latour: Modernity is a Politically Dangerous Goal,” November 2014 interview with Latour by Patricia Junge, Colombina Schaeffer and Leonardo Valenzuela of Verdeseo; Zoë Corbyn, “Steve Fuller: Designer trouble,” The Guardian (January 31, 2006); Sheila Jasanoff, “Representation and Re-Presentation in Litigation Science,” Environmental Health Perspectives 116(1), pp. 123–129, January 2008. Fuller also has a professional relationship with the Breakthrough Institute, but the Institute seems somewhat fonder, in their publicity materials, of their connection with Latour.

[27] Even creationism, it’s worth remembering, is a big-money movement. The Discovery Institute, perhaps the most prominent “intelligent design” advocacy organization, is bankrolled largely by wealthy Republican donors, and was co-founded by notorious Reaganite supply-side economics guru and telecom deregulation champion George Gilder. See Jodi Wilgoren, “Politicized Scholars Put Evolution on the Defensive,” New York Times, August 21, 2005. Similarly, so-called grassroots anti-tax organizations often had links to the tobacco industry. See http://www.sourcewatch.org/index.php/Americans_for_Tax_Reform_and_Big_Tobacco. The corporate exploitation of ambiguity about the contours of disinformation can, of course, also take more anodyne forms, as in manipulative use of phrases like “natural flavoring” on food packaging. We thank Mott Greene for this example.

[28] David Rosner and Gerald Markowitz, Lead Wars: The Politics of Science and the Fate of America’s Children (University of California Press, 2013). See also Gerald Markowitz and David Rosner, Deceit and Denial: The Deadly Politics of Industrial Pollution (University of California Press, 2nd edition 2013); and Stanton Glantz, ed., The Cigarette Papers (University of California Press, 1998).

[29] See James Kloppenburg, “Pragmatism: An Old Name for Some New Ways of Thinking?,” The Journal of American History, Vol. 83 (1), pp. 100-138, June 1996, which argues that Rorty misrepresents in many ways the core insights of the early pragmatists. See also Jürgen Habermas, Theory of Communicative Action (Beacon Press, vol. 1 1984, vol. 2 1987); Hilary Putnam, Reason, Truth, and History (Cambridge University Press, 1981); see also William Rehg’s development of Habermas’s ideas on science in Cogent Science in Context: The Science Wars, Argumentation Theory, and Habermas (MIT Press, 2009).

[30] Nancy Cartwright, Jordi Cat, Lola Fleck, and Thomas Uebel, Otto Neurath: Philosophy between Science and Politics (Cambridge University Press, 1996); Thomas Uebel, “Political philosophy of science in logical empiricism: the left Vienna Circle,” Studies in History and Philosophy of Science, vol. 36, pp. 754-773, 2005; John O’Neill, “Unified science as political philosophy: positivism, pluralism and liberalism,” Studies in History and Philosophy of Science, vol. 34, pp. 575-596, 2003.

[31] Charles Mills, “White Ignorance,” in Robert Proctor and Londa Schiebinger, eds., Agnotology: The Making and Unmaking of Ignorance (Stanford University Press, 2008); see also his recent Black Rights/White Wrongs (Oxford University Press, 2017).

[32] Alison Wylie, Thinking from Things: Essays in the Philosophy of Archaeology (University of California Press, 2002), p. 190. Helen Longino (Science as Social Knowledge, 1990) and Sarah Richardson (Sex Itself, University of Chicago Press, 2013) have made similar arguments about research in endocrinology and genetics.

[33] Steven Epstein, Impure Science (University of California Press, 1996); see especially pp. 13-14.

[34] See for instance Benita Parry, Postcolonial Studies: A Materialist Critique (Routledge, 2004); Masao Miyoshi, “Ivory Tower in Escrow,” boundary 2, vol. 27 (1), pp. 7-50, Spring 2000. On Foucault, see recently Daniel Zamora and Michael C. Behrent, eds., Foucault and Neoliberalism (Polity Press, 2016); but note also the seeds of this critique in earlier works such as Jürgen Habermas, The Philosophical Discourse of Modernity (MIT Press, 1984) and Nancy Fraser, “Michel Foucault: A ‘Young Conservative’?”, Ethics vol 96 (1), pp. 165-184, 1985, and “Foucault on Modern Power: Empirical Insights and Normative Confusions,” Praxis International, vol. 3, pp. 272-287, 1981.

[35] Naomi Oreskes and John Krige, eds., Science and Technology in the Global Cold War (MIT Press, 2015); Naomi Oreskes, Science on a Mission: American Oceanography in the Cold War (University of Chicago Press, forthcoming); Erik Baker, “The Ultimate Think Tank: Money and Science at the Santa Fe Institute,” manuscript in preparation.

[36] See, for instance, Philip Mirowski, Science-Mart (Harvard University Press, 2010); Wendy Brown, Undoing the Demos: Neoliberalism’s Stealth Revolution (MIT Press, 2015); Henry Giroux, Neoliberalism’s War on Higher Education (Haymarket Books, 2014); Sophia McClennen, “Neoliberalism and the Crisis of Intellectual Engagement,” Works and Days, vols. 26-27, 2008-2009.