
Author Information: Steve Fuller, University of Warwick, s.w.fuller@warwick.ac.uk.

Fuller, Steve. “‘China’ As the West’s Other in World Philosophy.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 1-11.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42x

A man practices Taijiquan at the Kongzi Temple in Nanjing.
Image by Slices of Light via Flickr / Creative Commons


This essay was previously published in the Summer 2018 issue of the Journal of World Philosophies.

Bryan Van Norden’s Taking Back Philosophy: A Multicultural Manifesto draws on his expertise in Chinese philosophy to launch a comprehensive and often scathing critique of contemporary Anglo-American philosophy. I focus on the sense in which “China” figures as a “non-Western culture” in Van Norden’s argument. Here I identify an equivocation between what I call a “functional” and a “substantive” account of culture.

I argue that Van Norden, like perhaps most others who have discussed Chinese philosophy, presupposes a “functional” conception, whereby the relevant sense in which “China” matters is exactly as “non-Western,” which ends up incorporating some exogenous influences such as Indian Buddhism but not any of the Western philosophies that made major inroads in the twentieth century. I explore the implications of the functional/substantive distinction for the understanding of cross-cultural philosophy generally.

Dragging the West Into the World

I first ran across Bryan Van Norden’s understanding of philosophy in a very provocative piece entitled “Why the Western Philosophical Canon Is Xenophobic and Racist,”[1] which trailed the book now under review. I was especially eager to review it because I had recently participated in a symposium in the Journal of World Philosophies that discussed Chinese philosophy—Van Norden’s own area of expertise—as a basis for launching a general understanding of world philosophy.[2]

However, as it turns out, most of the book is preoccupied with various denigrations of philosophy in contemporary America, from both inside and outside the discipline. The only thing I will say about this aspect of the book is that, even granting the legitimacy of Van Norden’s complaints, I don’t think that arguments around some “ontological” conception of what philosophy “really is” will resolve the matter because these can always be dismissed as self-serving and question-begging.

What could make a difference is showing that a broader philosophical palette would actually make philosophy graduates more employable in an increasingly globalized world. Those like Van Norden who oppose the “Anglo-analytic hegemony” in contemporary philosophy need to argue explicitly that it results in philosophy punching below its weight in terms of potential impact. That philosophy departments of the most analytic sort continue to survive and even flourish, and that their students continue to be employed, should be presented as setting a very low standard of achievement.

After all, philosophy departments tend to recruit students with better than average qualifications, while the costs for maintaining those departments remain relatively low. In contrast, another recent book that raises similar concerns to Van Norden’s, Socrates Tenured (Frodeman and Briggle 2016),[3] is more successful in pointing to extramural strategies for philosophy to pursue a more ambitious vision of general societal relevance.

Challenging How We Understand Culture Itself

But at its best, Taking Back Philosophy forces us to ask: what exactly does “culture” mean in “multicultural” or “cross-cultural” philosophy? For Van Norden, the culture he calls “China” is the exemplar of a non-Western philosophical culture. It refers primarily—if not exclusively—to those strands of Chinese thought associated with its ancient traditions. To be sure, this arguably covers everything that Chinese scholars and intellectuals wrote about prior to the late nineteenth century, when Western ideas started to be regularly discussed. It would then seem to suggest that “China” refers to the totality of its indigenous thought and culture.

But this is not quite right, since Van Norden certainly includes the various intellectually productive engagements that Buddhism as an alien (Indian) philosophy has had with the native Confucian and especially Daoist world-views. Yet he does not seem to want to include the twentieth-century encounters between Confucianism and, say, European liberalism and American pragmatism in the Republican period or Marxism in the Communist period. Here he differs from Leigh Jenco (2010),[4] who draws on the Republican Chinese encounter with various Western philosophies to ground a more general cross-cultural understanding of philosophy.

It would appear that Van Norden is operating with a functional rather than substantive conception of “China” as a philosophical culture. In other words, he is less concerned with all the philosophy that has happened within China than with simply the philosophy in China that makes it “non-Western.” Now some may conclude that this makes Van Norden as ethnocentric as the philosophers he criticizes.

I am happy to let readers judge for themselves on that score. However, functional conceptions of culture are quite pervasive, especially in the worlds of politics and business, where culture is treated as a strategic resource that provides a geographic region with what the classical political economist David Ricardo famously called “comparative advantage” in trade.

But equally, Benedict Anderson’s (1983) influential account of nationalism as the construction of “imagined communities” in the context of extricating local collective identities from otherwise homogenizing imperial tendencies would fall in this category. Basically your culture is what you do that nobody else does—or at least does not do as well as you. However, your culture is not the totality of all that you do, perhaps not even what you do most of the time.

To be sure, this is not the classical anthropological conception of culture, which is “substantive” in the sense of providing a systematic inventory of what people living in a given region actually think and do, regardless of any overlap with what others outside the culture think and do. Indeed, anthropologists in the nineteenth and most of the twentieth centuries expected that most of the items in the inventory would come from the outside, the so-called doctrine of “diffusionism.”

Thus, they have tended to stress the idiosyncratic mix of elements that go into the formation of any culture over any dominant principle. This helps explain why nowadays every culture seems to be depicted as a “hybrid.” I would include Jenco’s conception of Chinese culture in this “substantive” conception.

However, what distinguished, say, Victorians like Edward Tylor from today’s “hybrid anthropologists” was that the overlap of elements across cultures was used by the former as a basis for cross-cultural comparisons, albeit often to the detriment of the non-Western cultures involved. This fuelled ambitions that anthropology could be made into a “science” sporting general laws of progress, etc.

My point here is not to replay the history of the struggle for anthropology’s soul, which continues to this day, but simply to highlight a common assumption of the contesting parties—namely, that a “culture” is defined exclusively in terms of matters happening inside a given geographical region, in which case things happening outside the region must be somehow represented inside the region in order to count as part of a given culture. In contrast, the “functional” conception defines “culture” in purely relational terms, perhaps even with primary reference to what is presumed to lie outside a given culture.

Matters of Substance and Function

Both the substantive and the functional conception derive from the modern core understanding of culture, as articulated by Johann Gottfried Herder and the German Idealists, which assumed that each culture possesses an “essence” or “spirit.” On the substantive conception, which was Herder’s own, each culture is distinguished by virtue of having come from a given region, as per the etymological root of “culture” in “agriculture.” In that sense, a culture’s “essence” or “spirit” is like a seed that can develop in various ways depending on the soil in which it is planted.

Indeed, Herder’s teacher, Kant, had already used the German Keime (“seeds”) in a book of lectures whose title is often credited with having coined “anthropology” (Wilson 2014).[5] This is the sense of culture that morphs into racialist ideologies. While such racialism can be found in Kant, it is worth stressing that his conception of race does not depend on the sense of genetic fixity that would become the hallmark of twentieth-century “scientific racism.” Rather, Kant appeared to treat “race” as a diagnostic category for environments that hold people back, to varying degrees, from realizing humanity’s full potential.

Here Kant was probably influenced by the Biblical dispersal of humanity, first with Adam’s Fall and then the Noachian flood, which implied that the very presence of different races or cultures marks our species’ decline from its common divine source. Put another way, Kant was committed to what Lamarck called the “inheritance of acquired traits,” though Lamarck lacked Kant’s Biblical declinist backdrop. Nevertheless, they agreed that a sustainably radical change to the environment could decisively change the character of its inhabitants. This marks them both as heirs to the Enlightenment.

To be sure, this reading of Kant is unlikely to assuage either today’s racists or, for that matter, anti-racists or multiculturalists, since it doesn’t assume that the preservation of racial or cultural identity possesses intrinsic (positive or negative) value. In this respect, Kant’s musings on race should be regarded as “merely historical,” based on his fallible second-hand knowledge of how peoples in different parts of the world have conducted their lives.

In fact, the only sense of difference that the German Idealists unequivocally valued was self-individuation, which is ultimately tied to the functional conception of culture, whereby my identity is directly tied to my difference from you. It follows that the boundaries of culture—or the self, for that matter—are moveable feasts. In effect, as your identity changes, mine does as well—and vice versa.

Justifying a New World Order

This is the metaphysics underwriting imperialism’s original liberal capitalist self-understanding as a global free-trade zone. In its ideal form, independent nation-states would generate worldwide prosperity by continually reorienting themselves to each other in response to market pressures. Even if the physical boundaries between nation-states do not change, their relationship to each other would, through the spontaneous generation and diffusion of innovations.

The result would be an ever-changing global division of labor. Of course, imperialism in practice fostered a much more rigid—even racialized—division of labor, as Marxists from Lenin onward decried. Those who nevertheless remain hopeful in the post-imperial era that the matter can ultimately be resolved diagnose the problem as one of “uneven development,” a phrase that leaves a sour aftertaste in the mouths of “post-colonialists.”

But more generally, “functionalism” as a movement in twentieth-century anthropology and sociology tended towards a relatively static vision of social order. And perhaps something similar could be said about Van Norden’s stereotyping of “China.” However, he would be hardly alone. In his magisterial The Sociology of Philosophies: A Global Theory of Intellectual Change, a book which Van Norden does not mention, Randall Collins (1998)[6] adopts a similarly functionalist stance. There it leads to a quite striking result, which has interesting social epistemological consequences.

Although Collins incorporates virtually every thinker that Chinese philosophy experts normally talk about, carefully identifying their doctrinal nuances and scholastic lineages, he ends his treatment of China at the historical moment that happens to coincide with what he marks as a sea change in the fortunes of Western philosophy, which occurs in Europe’s early modern period.

I put the point this way because Collins scrupulously avoids making any of the sorts of ethnocentric judgements that Van Norden rightly castigates throughout his book, whereby China is seen as un- or pre-philosophical. However, there is a difference in attitude to philosophy that emerges in Europe, less in terms of philosophy’s overall purpose than its modus operandi. Collins calls it rapid discovery science.

Rapid discovery science is the idea that standardization in the expression and validation of knowledge claims—both quantitatively and qualitatively—expedites the ascent to higher levels of abstraction and reflexivity by making it easier to record and reproduce contributions in the ongoing discourse. Collins means here not only the rise of mathematical notation to calculate and measure, but also “technical languages,” the mastery of which became the mark of “expertise” in a sense more associated with domain competence than with “wisdom.” In the latter case, the evolution of “peer review” out of the editorial regimentation of scientific correspondence in the early journals played a decisive role (Bazerman 1987).[7]

Citation conventions, from footnotes to bibliographies, were further efficiency measures. Collins rightly stresses the long-term role of universities in institutionalizing these innovations, but of more immediate import was the greater interconnectivity within Europe that was afforded by the printing press and an improved postal system. The overall result, so I believe, was that collective intellectual memory was consolidated to such an extent that intellectual texts could be treated as capital, something to both build upon and radically redeploy—once one has received the right training to access them. These correspond to the phases that Thomas Kuhn called “normal” and “revolutionary” science, respectively.

To be sure, Collins realizes that China had its own stretches in which competing philosophical schools pursued higher levels of abstraction and reflexivity, sometimes with impressive results. But these were maintained solely by the emotional energy of the participants who often dealt with each other directly. Once external events dispersed that energy, then the successors had to go back to a discursive “ground zero” of referring to original texts and reinventing arguments.

Can There Be More Than One Zero Point?

Of course, the West has not been immune to this dynamic. Indeed, it has even been romanticized. A popular conception of philosophy that continues to flourish at the undergraduate level is that there can be no genuine escape from origins, no genuine sense of progress. It is here that Alfred North Whitehead’s remark that all philosophy is footnotes to Plato gets taken a bit too seriously.

In any case, Collins’ rapid discovery science was specifically designed to escape just this situation, which Christian Europe had interpreted as the result of humanity’s fallen state, a product of Adam’s “Original Sin.” This insight figured centrally in the Augustinian theology that gradually—especially after the existential challenge that Islam posed to Christendom in the thirteenth century—began to color how Christians viewed their relationship to God, the source of all knowing and being. The Protestant Reformation was the high-water mark of this turn of thought, which became the crucible in which rapid discovery science was forged in the seventeenth century. Since the 1930s, this period has been called the “Scientific Revolution” (Harrison 2007).[8]

In the wake of the Protestant Reformation, all appeals to authority potentially became not sources of wisdom but objects of suspicion. They had to undergo severe scrutiny, which at the time were often characterized as “trials of faith.” Francis Bacon, the personal lawyer to England’s King James I, is a pivotal figure because he clearly saw continuity from the Inquisition in Catholic Europe (which he admired, even though it ensnared his intellectual ally Galileo), through the “witch trials” pursued by his fellow Protestants on both sides of the Atlantic, to his own innovation—the “crucial experiment”—which would be subsequently enshrined as the hallmark of the scientific method, most energetically by Karl Popper.

Bacon famously developed his own “hermeneutic of suspicion” as proscriptions against what he called “idols of the mind,” that is, lazy habits of thought that are born of too much reliance on authority, tradition, and surface appearances generally. For Bacon and his fellow early modern Christians, including such Catholics as Rene Descartes, these habits bore the mark of Original Sin because they traded on animal passions—and the whole point of the human project is to rise above our fallen animal natures to recover our divine birthright.

The cultural specificity of this point is often lost, even on Westerners for whom the original theological backdrop seems no longer compelling. What is cross-culturally striking about the radical critique of authority posed by the likes of Bacon and Descartes is that it did not descend into skepticism, even though—especially in the case of Descartes—the skeptical challenge was explicitly confronted. What provided the stopgap was faith, specifically in the idea that once we recognize our fallen nature, redemption becomes possible by finding a clearing on which to build truly secure foundations for knowledge and thereby to redeem the human condition, God willing.

For Descartes, this was “cogito ergo sum.” To be sure, the “God willing” clause, which was based on the doctrine of Divine Grace, became attenuated in the eighteenth century as “Providence” and then historicized as “Progress,” finally disappearing altogether with the rising tide of secularism in the nineteenth century (Löwith 1949; Fuller 2010: chap. 8).[9]

But its legacy was a peculiar turn of mind that continually seeks a clearing to chart a path to the source of all meaning, be it called “God” or “Truth.” This is what makes three otherwise quite temperamentally different philosophers—Husserl, Wittgenstein, and Heidegger—equally followers in Descartes’ footsteps. They all prioritized clearing a space from which to proceed over getting clear about the end state of the process.

Thus, the branches of modern Western philosophy concerned with knowledge—epistemology and the philosophy of science—have been focused more on methodology than axiology, that is, the means rather than the ends of knowledge. While this sense of detachment resonates with, say, the Buddhist disciplined abandonment of our default settings to become open to a higher state of being, the intellectual infrastructure provided by rapid discovery science allows for an archive to be generated that can be extended and reflected upon indefinitely by successive inquirers.

Common Themes Across Continents

A good way to see this point is that in principle the Buddhist and, for that matter, the Socratic quest for ultimate being could be achieved in one’s own lifetime with sufficient dedication, which includes taking seriously the inevitability of one’s own physical death. In contrast, the modern Western quest for knowledge—as exemplified by science—is understood as a potentially endless intergenerational journey in which today’s scientists effectively lead vicarious lives for the sake of how their successors will regard them.

Indeed, this is perhaps the core ethic promoted in Max Weber’s famous “Science as a Vocation” lecture (Fuller 2015: chap. 3).[10] Death as such enters, not to remind scientists that they must eventually end their inquiries but that whatever they will have achieved by the end of their lives will help pave the way for others to follow.

Heidegger appears as such a “deep” philosopher in the West because he questioned the metaphysical sustainability of the intellectual infrastructure of rapid discovery science, which the Weberian way of death presupposes. Here we need to recall that Heidegger’s popular reception was originally mediated by the postwar Existentialist movement, which was fixated on the paradoxes of the human condition thrown up by Hiroshima, whereby the most advanced science managed to end the biggest war in history by producing a weapon with the greatest chance of destroying humanity altogether in the future. Not surprisingly, Heidegger has proved a convenient vehicle for Westerners to discover Buddhism.

Early Outreach? Or Appropriation?

Finally, it is telling that the Western philosopher whom Van Norden credits with holding China in high esteem, Leibniz, himself had a functional understanding of China. To be sure, Leibniz was duly impressed by China’s long track record of imperial rule at the political, economic, and cultural levels, all of which were the envy of Europe. But Leibniz homed in on one feature of Chinese culture—what he took to be its “ideographic” script—which he believed could provide the intellectual infrastructure for a global project of organizing and codifying all knowledge so as to expedite its progress.

This was where he thought China had a decisive “comparative advantage” over the West. Clearly Leibniz was a devotee of rapid discovery science, and his project—shared by many contemporaries across Europe—would be pursued again to much greater effect two hundred years later by Paul Otlet, the founder of modern library and information science, and Otto Neurath, a founding member of the logical positivist movement.

While the Chinese regarded their written characters as simply a medium for people in a far-flung empire to communicate easily with each other, Leibniz saw in them the potential for collaboration on a universal scale, given that each character amounted to a picture of an abstraction, the metaphorical rendered literal, a message that was not simply conveyed but embedded in the medium. It seemed to satisfy the classical idea of nous, or “intellectual intuition,” as a kind of perception, which survives in the phrase, “seeing with the mind’s eye.”

However, the Chinese refused to take Leibniz’s bait, which led him to begin a train of thought that culminated in the so-called Needham Thesis, which turns on why Earth’s most advanced civilization, China, failed to have a “Scientific Revolution” (Needham 1969; Fuller 1997: chap. 5).[11] Whereas Leibniz was quick to relate Chinese unreceptiveness to his proposal to their polite but firm rejection of the solicitations of Christian missionaries, Joseph Needham, a committed Marxist, pointed to the formal elements of the distinctive cosmology promoted by the Abrahamic religions, especially Christianity, that China lacked—but stopping short of labelling the Chinese “heathens.”

An interesting feature of Leibniz’s modus operandi is that he saw cross-cultural encounters as continuous with commerce (Perkins 2004).[12] No doubt his conception was influenced by living at a time when the only way a European could get a message to China was through traders and missionaries, who typically travelled together. But he also clearly imagined the resulting exchange as a negotiation in which each side could persuade the other to shift their default positions to potential mutual benefit.

This mentality would prove crucial to the dynamism of capitalist political economy, on which Ricardo’s theory of comparative advantage was based. However, the Chinese responded to their European counterparts with hospitality but only selective engagement with their various intellectual and material wares, implying their unwillingness to be fluid with what I earlier called “self-individuation.”

Consequently, Europeans only came to properly understand Chinese characters in the mid-nineteenth century, by which time they were treated as a cultural idiosyncrasy, not a platform for pursuing universal knowledge. That world-historic moment for productive engagement had passed—for reasons that Marxist political economy adequately explains—and all subsequent attempts at a “universal language of thought” have been based on Indo-European languages and Western mathematical notation.

China is not part of this story at all, and continues to suffer from that fact, notwithstanding its steady ascendancy on the world stage over the past century. How this particular matter is remedied should focus minds interested in a productive future for cross-cultural philosophy and multiculturalism more generally. But depending on what we take the exact problem to be, the burden of credit and blame across cultures will be apportioned accordingly.

Based on the narrative that I have told here, I am inclined to conclude that the Chinese underestimated just how seriously Europeans like Leibniz took their own ideas. This in turn raises some rather deep questions about the role that a shift in the balance of plausibility away from “seeing with one’s own eyes” and towards “seeing with the mind’s eye” has played in the West’s ascendancy.

Conclusion

I began this piece by distinguishing a “substantive” and a “functional” approach to culture because even theorists as culturally sensitive as Van Norden and Collins adopt a “functional” rather than a “substantive” approach. They defend and elaborate China as a philosophical culture in purely relational terms, based on its “non-Western” character.

This leads them to include, say, Chinese Buddhism but not Chinese Republicanism or Chinese Communism—even though the first is no less exogenous than the second two to “China,” understood as the land mass on which Chinese culture has been built over several millennia. Of course, this is not to take away from Van Norden’s or Collins’ achievements in reminding us of the continued relevance of Chinese philosophical culture.

Yet theirs remains a strategically limited conception designed mainly to advance an argument about Western philosophy. Here Collins follows the path laid down by Leibniz and Needham, whereas Van Norden takes that argument and flips it against the West—or, rather, contemporary Western philosophy. The result in both cases is that “China” is instrumentalized for essentially Western purposes.

I have no problem whatsoever with this approach (which is my own), as long as one is fully aware of its conceptual implications, which I’m not sure that Van Norden is. For example, he may think that his understanding of Chinese philosophical culture is “purer” than, say, Leigh Jenco’s, which focuses on a period with significant Western influence. However, this is “purity” only in the sense of an “ideal type” of the sort the German Idealists would have recognized as a functionally differentiated category within an overarching system.

In Van Norden’s case, that system is governed by the West/non-West binary. Thus, there are various ways to be “Western” and various ways to be “non-Western” for Van Norden. Van Norden is not sufficiently explicit about this logic. The alternative conceptual strategy would be to adopt a “substantive” approach to China that takes seriously everything that happens within its physical borders, regardless of origin. The result would be the more diffuse, laundry list approach to culture that was championed by the classical anthropologists, for which “hybrid” is now the politically correct term.

To be sure, this approach is not without its own difficulties, ranging from a desire to return to origins (“racialism”) to forced comparisons between innovator and adopter cultures. But whichever way one goes on this matter, “China” remains a contested concept in the context of world philosophy.

Contact details: s.w.fuller@warwick.ac.uk

References

Bazerman, Charles. Shaping Written Knowledge. Madison WI: University of Wisconsin Press, 1987.

Collins, Randall. The Sociology of Philosophies: A Global Theory of Intellectual Change. Cambridge MA: Harvard University Press, 1998.

Frodeman, Robert; Adam Briggle. Socrates Tenured. Lanham MD: Rowman and Littlefield, 2016.

Fuller, Steve. Science: Concepts in the Social Sciences. Milton Keynes UK: Open University Press, 1997.

Fuller, Steve. Science: The Art of Living. Durham UK: Acumen, 2010.

Fuller, Steve. Knowledge: The Philosophical Quest in History. London: Routledge, 2015.

Harrison, Peter. The Fall of Man and the Foundations of Science. Cambridge UK: Cambridge University Press, 2007.

Jenco, Leigh. Making the Political: Founding and Action in the Political Theory of Zhang Shizhao. Cambridge UK: Cambridge University Press, 2010.

Jenco, Leigh; Steve Fuller, David Haekwon Kim, Thaddeus Metz, and Miljana Milojevic, “Symposium: Are Certain Knowledge Frameworks More Congenial to the Aims of Cross-Cultural Philosophy?” Journal of World Philosophies 2, no. 2 (2017): 82-145.

Löwith, Karl. Meaning in History: The Theological Implications of Philosophy of History. Chicago: University of Chicago Press, 1949.

Needham, Joseph. The Grand Titration: Science and Society in East and West. London: George Allen and Unwin, 1969.

Perkins, Franklin. Leibniz and China: A Commerce of Light. Cambridge UK: Cambridge University Press, 2004.

Van Norden, Bryan. Taking Back Philosophy: A Multicultural Manifesto. New York: Columbia University Press, 2017.

Wilson, Catherine. “Kant on Civilization, Culture and Moralization,” in Kant’s Lectures on Anthropology: A Critical Guide. Edited by A. Cohen. Cambridge UK: Cambridge University Press, 2014: 191-210.

[1] Bryan Van Norden, “Western Philosophy is Racist,” (https://aeon.co/essays/why-the-western-philosophical-canon-is-xenophobic-and-racist; last accessed on May 10, 2018).

[2] See: Leigh Jenco, Steve Fuller, David Haekwon Kim, Thaddeus Metz, and Miljana Milojevic, “Symposium: Are Certain Knowledge Frameworks More Congenial to the Aims of Cross-Cultural Philosophy?” Journal of World Philosophies 2, no. 2 (2017): 82-145 (https://scholarworks.iu.edu/iupjournals/index.php/jwp/article/view/1261/128; last accessed on May 10, 2018).

[3] Robert Frodeman, and Adam Briggle, Socrates Tenured (Lanham MD: Rowman and Littlefield, 2016).

[4] Leigh Jenco, Making the Political: Founding and Action in the Political Theory of Zhang Shizhao (Cambridge UK: Cambridge University Press, 2010).

[5] Catherine Wilson, “Kant on Civilization, Culture and Moralization,” in Kant’s Lectures on Anthropology: A Critical Guide, ed. A. Cohen (Cambridge UK: Cambridge University Press, 2014), 191-210.

[6] Randall Collins, The Sociology of Philosophies: A Global Theory of Intellectual Change (Cambridge MA: Harvard University Press, 1998).

[7] Charles Bazerman, Shaping Written Knowledge (Madison WI: University of Wisconsin Press, 1987).

[8] Peter Harrison, The Fall of Man and the Foundations of Science (Cambridge UK: Cambridge University Press, 2007).

[9] Karl Löwith, Meaning in History: The Theological Implications of Philosophy of History (Chicago: University of Chicago Press, 1949); Steve Fuller, Science: The Art of Living (Durham UK: Acumen, 2010).

[10] Steve Fuller, Knowledge: The Philosophical Quest in History (London: Routledge, 2015).

[11] Joseph Needham, The Grand Titration: Science and Society in East and West (London: George Allen and Unwin, 1969); Steve Fuller, Science: Concepts in the Social Sciences (Milton Keynes UK: Open University Press, 1997).

[12] Franklin Perkins, Leibniz and China: A Commerce of Light (Cambridge UK: Cambridge University Press, 2004).

Author Information: Alcibiades Malapi-Nelson, Humber College, alci.malapi@outlook.com

Malapi-Nelson, Alcibiades. “On a Study of Steve Fuller.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 25-29.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Za

Happy birthday, Steve!

Steve Fuller, seen here just under seven years ago in New York City, gave a name to what is now the sub-discipline and community of social epistemology. Like all thriving communities, it’s gotten much more diverse and creative with time. As has Steve Fuller.
Image by Babette Babich, courtesy of Steve Fuller

 

Francis Remedios and Val Dusek have written a thorough and exhaustive account of Steve Fuller’s work, ranging (mostly) from 2003 to 2017. Fuller’s earlier work was addressed in Remedios’ previous book, Legitimizing Scientific Knowledge (2003), of which this one is the logical continuation. There Remedios introduced the reader to the field of research Fuller inaugurated, “social epistemology”, encompassing the philosopher’s work from the late 1980s until the turn of the century.

Given that Steve Fuller is one of the most prolific authors alive, having published (so far) 30 books and hundreds of articles, Remedios & Dusek’s book, like Remedios’ previous one, fills a practical need: it is hard to keep up with Fuller’s elevated rate of production. Indeed, both the seasoned reader and the neophyte confronting Fuller’s fairly overwhelming body of writing will need a panoramic and organic view of his breathtaking scope of research. Remedios & Dusek successfully accomplish the task of providing it.

The Bildung of a Person and His Concepts

Remedios & Dusek’s book starts with a Foreword by Fuller himself, followed by an Introduction (Ch. 1) by the authors. The bulk of the monograph comprises several chapters addressing Fuller’s ideas on Science and Technology Studies (Ch. 2), Social Epistemology (Ch. 3), the University & Interdisciplinarity (Ch. 4), Intelligent Design (Ch. 5), Cosmism & Gnosticism (Ch. 6), and the Proactionary Principle (Ch. 7).

There is some connective overlap between chapters. In each of them, Remedios & Dusek provide an articulated landscape of Fuller’s ideas, the occasional criticism, and a final summary. The book ends with an appropriately short Conclusion (Ch. 8) and a PostScript (Ch. 9), the transcription of an interview.

It is worth pointing out that the work is chronologically (and conveniently) in sync with Fuller’s own progressive intellectual development: the first part roughly focuses on his earlier work, whereas the second part focuses on his later writings.[1]

The first chapter after the Introduction (Chapter 2, “Fuller on Science and Technology Studies” [STS]) already provides a cue for a theme that runs through the arc of Fuller’s thought over the last decade. As I see it, Steve Fuller is going to lengths that some may deem controversial (e.g., his endorsement of some type of Intelligent Design, his backing of transhumanism, his gradual “coming out” as a Catholic) for one main reason: a deep preoccupation with the future of humanity vis-à-vis pervasively disruptive emerging technologies.

Accordingly, Fuller wants to fuel a discussion that may eventually salvage whatever we find out that being human consists of – even if this “human” will little resemble the “humans” we know now. At this point, the “cue” is not self-evident: Fuller does not like Bruno Latour’s Actor-Network Theory. In Fuller’s view, Latour’s framework triggers both an epistemological and an ethical problem: it diffuses human agency and, by extension, responsibility. Equating human agency with the causal power attributed to the “parliament of things” ultimately results in an erosion of human dignity. Here the cue becomes clearer: it is precisely this human dignity that Fuller will later defend in his attack on Darwinism.

Humanity Beyond the Human

Chapter 3, “Fuller’s Social Epistemology and Epistemic Agency”, provides a further clue to Fuller’s agenda. Remedios & Dusek coin a sentence that may constitute one of the most succinct, yet fundamental, pillars of Steve Fuller’s grand framework: “For Fuller, humanity would continue if homo sapiens end”.[2] This statement ingeniously captures Fuller’s position that “humanity” (a “project” started in the Middle Ages and developed during Modernity) is something that homo sapiens earn – or not. Biology might provide a compatible receptacle for this humanity to obtain, but it is by no means an automatic occurrence. One strives to attain it – and many in fact fail to reach it.

In the context of this theme, Fuller steers away from an “object-oriented” (social) epistemology towards an “agent-oriented” one: instead of endlessly ruminating over possible theories of knowledge (which would render an accurate picture of the object – social or not), one starts to take into account the possibilities that open up once one considers transforming the knowing agent itself. This transition foretells Fuller’s later view: a proactionary approach[3] to experimentation, in which the agent commits to the alteration of reality – as opposed to a precautionary stance, in which the knower passively waits for reality’s feedback before proceeding further.

In Chapter 4, “The University and Interdisciplinarity”, Remedios & Dusek treat Fuller’s views on institutions of higher education as they currently confront the relentless compartmentalization of knowledge. Fuller praises Wilhelm von Humboldt’s reinvention of the notion of the university in the 19th century, in which the individual would acquire a holistic formation (Bildung) and which would in return produce tangible benefits to society out of the growth of knowledge in general and science in particular.

This model, which catapulted Germany to the forefront of research and was emulated by several Western nations, has been gradually eroded by neoliberalism. Neoliberal stances, spurred by attention to clients’ requests, progressively severed the heretofore integral coexistence of research and teaching, creating instead pockets of specialization – along with their own idiosyncratic jargon. This fragmentation, in turn, has generated an overall ignorance among scientists and intellectuals regarding the “big picture”, which ultimately results in a stagnation of knowledge production. Fuller advocates a return to the Humboldtian ideal, but this time incorporating technology as an integral part of academic formation in the humanities.

Roles for Religion and God

Chapter 5, “Fuller’s Intelligent Design” (ID), deals with the philosopher’s controversial views on this position, particularly after the infamous Dover trial. Remedios & Dusek have done a very good job of tracing the roots and influences behind Fuller’s ideas on the issue. They go all the way back to Epicurus and Hume, including the strong connection between these two and Charles Darwin, particularly concerning the role of “chance” in evolution. Those interested in this illuminating philosophical archeology will be well served by reading this chapter, instead of (or as a complement to) Steve Fuller’s two books on the topic.[4]

Chapter 6, “Fuller, Cosmism and Gnosticism” lays out the relationship of the philosopher with these two themes. Steve Fuller recognizes in Russian cosmism an important predecessor to transhumanism – along with the writings of the mystical Jesuit Teilhard de Chardin.

He has lately been catering to a re-emergence of interest among Slavs in these connections, giving talks and seminars in Russia. Cosmism, a heterodox offspring of Russian Orthodoxy, aims at a reconstruction of the (lost) paradise by means of the reactivation of a type of “monad” spread throughout the universe – particles that disperse after a person dies. Scientific progress would be essential in order to travel throughout the cosmos retrieving these primordial “atoms” of people of the past, so that they could one day be resurrected. Russia would indeed have a cosmic ordering mission. This worldview is a particular rendition of the consequences of Christ’s Resurrection, one denounced by the Orthodox Church as heretical.

Nevertheless, it deeply influenced several Slavic thinkers who, unlike many Western philosophers, had a hard time reconciling their (Orthodox) Christianity with reason and science. This syncretism was a welcome way for them to “secularize” the mysticism-prone Christian Orthodoxy and infuse it with scientific inquiry. As a consequence, rocket science received a major thrust of development. After all, machines had to be built in order to retrieve these human particles so that scientifically induced global resurrection could occur.

One of the most important global pioneers of rocket engines, Konstantin Tsiolkovsky (who later received approval from Joseph Stalin to further develop space travel research), was profoundly influenced by it. In fact, increasingly many scholars assert that, despite the official atheism of the Soviet Union, cosmism was a major driving force behind the Soviet advances that culminated in the successful launch of Sputnik.

Chapter 7, “Proactionary and Precautionary Principles and Welfare State 2.0”, is the last chapter before the Conclusion. Here Remedios & Dusek deal with Fuller’s endorsement of Max More’s Proactionary Principle and the consequent modified version of a welfare state. The proactionary approach, in contradistinction to the precautionary principle (which underpins much of science policy in Europe), advocates a risk-taking stance, justified partly by the very nature of modern science (experimentation without excessive red tape) and partly by what is at stake: the survival of our species. Steve Fuller further articulates the proactionary principle, having written a whole book on the subject[5] – while More wrote an article.

The Roles of This Book

Remedios & Dusek have done an excellent job in summarizing, articulating and criticizing the second half of Steve Fuller’s vast corpus – from the early 2000s until last year. I foresee a successful reception by thinkers concerned with the future of humanity and scholars interested in Fuller’s previous work. As a final note, I will share a sentiment that will surely resonate with some – particularly with the younger readers out there.

As noted in the opening remarks, Remedios & Dusek’s book fills a gap concerning the possibility of acquiring an articulated overview of Fuller’s thought, given his relentless rate of publication. However, the sheer quantity to keep up with is not the only issue. These days, more than “the written word” may be needed in order to properly capture the ideas of authors of Fuller’s calibre. As I observed elsewhere,[6] Fuller is a brilliant read – but not an easy one.

It may be fair to say that, as opposed to the relatively easy reading of an author like Steven Pinker, Steve Fuller’s books are not destined to be best-sellers among laymen. Fuller’s well-put-together paragraphs are both sophisticated and precise, sometimes long, bearing witness to an effort to convey his multi-layered thought processes accurately – reminding one of some early modern German philosophers. Fortunately, there is now a solid source of clarity that sheds effective light on Fuller’s writing: his available media. There are dozens of video clips (and hundreds of audio files[7]) of his talks, freely available to anyone. It may take a while to watch and listen to them all, but it is doable. I did it. And the clarity that they bring to his writings is tangible.

If Fuller is a sophisticated writer, he certainly is a very clear (and dare I say, entertaining) speaker. His “talking” functions as a cognitive catalyst for the content of his “writing” – in this, he returns to the Humboldtian ideal of merged research and teaching. Ideally, if one adds to these his daily tweets,[8] we now have within reach the most complete picture of what is necessary to properly “get” a philosopher like him these days. I have the feeling that, regardless of our settled ways, this “social media” component, increasingly integrated with any serious epistemic pursuit, is here to stay.

Contact details: alci.malapi@outlook.com

References

Fuller, S. (2007). Science Vs. Religion?: Intelligent Design and the Problem of Evolution. Cambridge, UK: Polity.

Fuller, S. (2008). Dissent Over Descent: Intelligent Design’s Challenge to Darwinism. Cambridge, UK: Icon.

Fuller, S. (2014). The Proactionary Imperative: A Foundation for Transhumanism. Hampshire, UK: Palgrave Macmillan.

Malapi-Nelson, A. (2013). “Book review: Steve Fuller, Humanity 2.0: What it Means to be Human Past, Present and Future.” International Sociology Review of Books 28(2): 240-247.

Remedios, F. and Dusek, V. (2018). Knowing Humanity in the Social World: The Path of Steve Fuller’s Social Epistemology. London, UK: Palgrave Macmillan.

[1] With the exception of the PostScript, which is a transcription of an interview with Steve Fuller mostly regarding the first period of his work.

[2] Remedios & Dusek 2018, p. 34

[3] Remedios & Dusek 2018, p. 40

[4] Fuller 2007 and Fuller 2008

[5] Fuller 2014

[6] Malapi-Nelson 2013

[7] warwick.ac.uk/fac/soc/sociology/staff/sfuller/media/audio

[8] Some of which are in fact reproduced by Remedios & Dusek 2018 (e.g. p. 102).

Author Information: Steve Fuller, University of Warwick, UK, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “Against Virtue and For Modernity: Rebooting the Modern Left.” Social Epistemology Review and Reply Collective 6, no. 12 (2017): 51-53.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3S9

Toby Ziegler’s “The Liberals: 3rd Version.” Photo by Matt via Flickr / Creative Commons

 

My holiday message for the coming year is a call to re-boot the modern left. When I was completing my doctoral studies, just as the Cold War was beginning to wind down, the main threat to the modern left was seen as coming largely from within. ‘Postmodernism’ was the name normally given to that threat, and it fuelled various culture, canon and science wars in the 1980s and 1990s.

Indeed, even I was – and, in some circles, continue to be – seen as just such an ‘enemy of reason’, to recall the name of Richard Dawkins’ television show in which I figured as one of the accused. However, in retrospect, postmodernism was at most a harbinger for a more serious threat, which today comes from both the ‘populist’ supporters of Trump, Brexit et al. and their equally self-righteous academic critics.

Academic commentators on Trump, Brexit and the other populist turns around the world seem unable to avoid passing moral judgement on the voters who brought about these uniformly unexpected outcomes, the vast majority of which the commentators have found unwelcome. In this context, an unholy alliance of virtue theorists and evolutionary psychologists has thrived as diagnosticians of our predicament. I say ‘unholy’ because Aristotle and Darwin suddenly find themselves on the same side of an argument, now pitched against the minds of ‘ordinary’ people. This anti-democratic place is not one in which any self-respecting modern leftist wishes to be.

To be sure, virtue theorists and evolutionary psychologists come to the matter from rather different premises – the one metaphysical if not religious, the other naturalistic if not atheistic. Nevertheless, they both regard humanity’s prospects as fundamentally constrained by our mental makeup. This makeup reflects our collective past and may even be rooted in our animal nature. Under the circumstances, so they believe, the best we can hope for is to become self-conscious of our biases and limitations in processing information, so that we don’t fall prey to the base political appeals that have resulted in the current wave of populism.

These diagnosticians conspicuously offer little of the positive vision or ambition that characterised ‘progressive’ politics of both liberal and socialist persuasions in the nineteenth and twentieth centuries. But truth be told, these learned pessimists already have form. They are best seen as the culmination of a current of thought that has been percolating since the end of the Cold War effectively brought to a halt Marxism as a world-historic project of human emancipation.

In this context, the relatively upbeat message advanced by Francis Fukuyama in The End of History and the Last Man that captivated much of the 1990s was premature. Fukuyama was cautiously celebrating the triumph of liberalism over socialism in the progressivist sweepstakes. But others were plotting a different course, one in which the very terms on which the Cold War had been fought would be superseded altogether. Gone would be the days when liberals and socialists vied over who could design a political economy that would benefit the most people worldwide. In its place would be a much more precarious sense of the world order, in which overweening ambition itself turned out to be humanity’s Achilles Heel, if not Original Sin.

Here the trail of books published by Alasdair MacIntyre and his philosophical and theological admirers in the wake of After Virtue ploughed a parallel field to such avowedly secular and scientifically minded works as Peter Singer’s A Darwinian Left and Steven Pinker’s The Blank Slate. These two intellectual streams, both pointing to our species’ inveterate shortcomings, gained increasing plausibility in light of 9/11’s blindsiding of the post-Cold War neo-liberal consensus.

9/11 tore up the Cold War playbook once and for all, side-lining both the liberals and the socialists who had depended on it. Gone was the state-based politics, the strategy of mutual containment, the agreed fields of play epitomized in such phrases as ‘arms race’ and ‘space race’. In short, gone was the game-theoretic rationality of managed global conflict. Thus began the ongoing war on ‘Islamic terror’. Against this backdrop, the Iraq War proved to be colossally ill-judged, though no surprise given that its mastermind was one of the Cold War’s keenest understudies, Donald Rumsfeld.

For the virtue theorists and evolutionary psychologists, the Cold War represented how far human rationality could go in pushing back and channelling our default irrationality, albeit in the hope of lifting humanity to a ‘higher’ level of being. Indeed, once the USSR lost the Cold War to the US on largely financial grounds, the victorious Americans had to contend with the ‘blowback’ from third parties who suffered ‘collateral damage’ at many different levels during the Cold War. After all, the Cold War, for all its success in averting nuclear confrontation, nevertheless turned the world into a playing field for elite powers. ‘First world’, ‘second world’ and ‘third world’ were basically the names of the various teams in contention on the Cold War’s global playing field.

So today we see an ideological struggle whose main players are those resentful (i.e. the ‘populists’) and those regretful (i.e. the ‘anti-populists’) of the entire Cold War dynamic. The only thing that these antagonists appear to agree on is the folly of ‘progressivist’ politics, the calling card of both modern liberalism and socialism. Indeed, both the populists and their critics are fairly characterised as somehow wanting to turn back the clock to a time when we were in closer contact with the proverbial ‘ground of being’, which of course the two sides define in rather different terms. But make no mistake of the underlying metaphysical premise: We are ultimately where we came from.

Notwithstanding the errors of thought and deed committed in their names, liberalism and socialism rightly denied this premise, which placed both of them in the vanguard – and eventually made them world-historic rivals – in modernist politics. Modernity raised humanity’s self-regard and expectations to levels that motivated people to build a literal Heaven on Earth, in which technology would replace theology as the master science of our being. David Noble cast a characteristically informed but jaundiced eye at this proposition in his 1997 book, The Religion of Technology: The Divinity of Man and the Spirit of Invention. Interestingly, John Passmore had covered much the same terrain just as eruditely but with greater equanimity in his 1970 book, The Perfectibility of Man. That the one was written after and the other during the Cold War is probably no accident.

I am mainly interested in resurrecting the modernist project in its spirit, not its letter. Many of modernity’s original terms of engagement are clearly no longer tenable. But I do believe that Silicon Valley is comparable to Manchester two centuries ago, namely, a crucible of a radical liberal sensibility – call it ‘Liberalism 2.0’ or simply ‘Alt-Liberalism’ – that tries to use the ascendant technological wave to leverage a new conception of the human being.

However one judges Marx’s critique of liberalism’s scientific expression (aka classical political economy), the bottom line is that his arguments for socialism would never have got off the ground had liberalism not laid the groundwork for him. As we enter 2018 and seek guidance for launching a new progressivism, we would do well to keep this historical precedent in mind.

Contact details: S.W.Fuller@warwick.ac.uk

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “Veritism as Fake Philosophy: Reply to Baker and Oreskes.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 47-51.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3M3

Image credit: elycefeliz, via flickr

John Stuart Mill and Karl Popper would be surprised to learn from Baker and Oreskes (2017) that freedom is a ‘non-cognitive’ value. Insofar as freedom—both the freedom to assert and the freedom to deny—is a necessary feature of any genuine process of inquiry, one might have thought that it was one of the foundational values of knowledge. But of course, Baker and Oreskes are using ‘cognitive’ in a more technical sense, one introduced by the logical positivists that remains largely intact in contemporary analytic epistemology and philosophy of science. It was also prevalent in post-war history and sociology of science prior to the rise of STS. This conception of the ‘cognitive’ trades on a clear distinction between what lies ‘inside’ and ‘outside’ a conceptual framework—in this case, the conceptual framework of science. But there’s a sting in the tail.

An Epistemic Game

Baker and Oreskes don’t seem to realize that this very conception of the ‘cognitive’ is in the post-truth mould that I defend. After all, for the positivists, ‘truth’ is a second order concept that lacks any determinate meaning except relative to the language in terms of which knowledge claims can be expressed. It was in this spirit that Rudolf Carnap thought that Thomas Kuhn’s ‘paradigm’ had put pragmatic flesh on the positivists’ logical bones (Reisch 1991). (It is worth emphasizing that Carnap passed this judgement before Kuhn’s fans turned him into the torchbearer for ‘post-positivist’ philosophy of science.) At the same time, this orientation led the positivists to promote—and try to construct—a universal language of science into which all knowledge claims could be translated and evaluated.

All of this shows that the positivists weren’t ‘veritists’ because, unlike Baker and Oreskes, they didn’t presuppose the existence of some univocal understanding of truth that all sincere inquirers will ultimately reach. Rather, truth is just a general property of the language that one decides to use—or the game one decides to play. In that case ‘truth’ corresponds to satisfying ‘truth conditions’ as specified by the rules of a given language, just as ‘goal’ corresponds to satisfying the rules of play in a given game.

To be sure, the positivists complicated matters because they also took seriously that science aspires to command universal assent for its knowledge claims, in which case science’s language needs to be set up in a way that enables everyone to transact their knowledge claims inside it; hence, the need to ‘reduce’ such claims to their calculable and measurable components. This effectively put the positivists in partial opposition to all the existing sciences of their day, each with its own parochial framework governed by the rules of its distinctive language game. The need to overcome this tendency explains the project of an ‘International Encyclopedia of Unified Science’.

In short, logical positivism was about designing an epistemic game—which they called ‘science’—that anyone could play and potentially win.

Given some of the things that Baker and Oreskes impute to me, they may be surprised to learn that I actually think that the logical positivists—as well as Mill and Popper—were on the right track. Indeed, I have always believed this. But these views have nothing to do with ‘veritism’, which I continue to put in scare quotes because, in the spirit of our times, it’s a bit of ‘fake philosophy’. It may work to shore up philosophical authority in public but fails to capture the conflicting definitions and criteria that philosophers themselves have offered not only for ‘truth’ but also for such related terms as ‘evidence’ and ‘validation’. All of these key epistemological terms are essentially contested concepts within philosophy. It is not simply that philosophers disagree on what is, say, ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’. (I summarize the issue here.)

Philosophical Fakeness

Richard Rorty became such a hate figure among analytic philosophers because he called out the ‘veritists’ on their fakeness. Yes, philosophers can tell you what truth is—just as long as you accept a lot of contentious assumptions, and hope that those capable of contesting those assumptions aren’t in the room when you’re speaking! Put another way, Rorty refused to adopt a ‘double truth’ doctrine for philosophy, whereby amongst themselves philosophers adopt a semi-detached attitude towards various conflicting conceptions of truth while at the same time presenting a united front to non-philosophers, lest these masses start to believe some disreputable things.

The philosophical ‘fakeness’ of veritism is exemplified in the following sentence, which appears in Baker and Oreskes’ (2017, 69) latest response:

On the contrary, truth (along with evidence, facts, and other words science studies scholars tend to relegate to scare quotes) is a far more plausible choice for one of a potential plurality of regulative ideals for an enterprise that, after all, does have an obviously cognitive function.

The sentence prima facie commits the category mistake of presuming that ‘truth’ is one more—albeit preferred—possible regulative ideal of science alongside, say, instrumental effectiveness, cultural appropriateness, etc. However, ‘truth’ in the logical positivist sense is a feature of all regulative ideals of science, each of which should be understood as specifying a language game that is governed by its own validation procedures—the rules of the game, if you will—in terms of which one theory is determined (or ‘verified’) to be, say, more effective than another or more appropriate than another.

Notice I said ‘prima facie.’ My guess is that when Baker and Oreskes say ‘truth’ is a regulative ideal of science, they are simply referring to a social arrangement whereby the self-organizing scientific community is the final arbiter on all knowledge claims accepted by society at large. As they point out, the scientific community can get things wrong—but things become wrong only when the scientific community says so, and they become fixed only when the scientific community says so. In short, under the guise of ‘truth’, Baker and Oreskes are advocating what I have called ‘cognitive authoritarianism’ (Fuller 1988, chapter 12).

Before ending with a brief discussion of what I think may be true about ‘veritism’, it is difficult not to notice the moralism associated with Baker and Oreskes’ invocation of ‘truth’. This carries over to such other pseudo-epistemic concepts as ‘trust’ and ‘reliability’, which are seen as marks of the scientific character, whereby ‘scientific’ attaches both to a body of knowledge and the people who produce that knowledge. I say ‘pseudo’ because there is no agreed measure of these qualities.

Regarding Trust

‘Trust’ is a quality whose presence is felt mainly as a double absence, namely, a studied refusal to examine knowledge claims for oneself, which is subsequently judged to have had non-negative consequences. (I have called trust a ‘phlogistemic’ concept for this reason, as it resembles the pseudo-element phlogiston; Fuller 1996). Indeed, in opposition to this general sensibility, I have gone so far as to argue that universities should be in the business of ‘epistemic trust-busting’. Here is my original assertion:

In short, universities function as knowledge trust-busters whose own corporate capacities of “creative destruction” prevent new knowledge from turning into intellectual property (Fuller 2002, 47; italics in original).

By ‘corporate capacities’, I meant the various means at the university’s disposal to ensure that the people in a position to take forward new knowledge are not simply part of the class of those who created it in the first place. More concretely, of course I have in mind ordinary teaching that aims to express even the most sophisticated concepts in terms ordinary students can understand and use. But also I mean to include ‘affirmative action’ policies that are specifically designed to incorporate a broader range of people than might otherwise attend the university. Taken together, these counteract the ‘neo-feudalism’ to which academic knowledge production is prone—‘rent-seeking’, if you will—which Baker and Oreskes appear unable to recognize.

As for ‘reliability’, it is a term whose meaning depends on specifying the conditions—say, in the design of an experiment—under which a pattern of behaviour is expected to occur. Outside of such tightly defined conditions, which is where most ‘scientific controversies’ happen, it is not clear how cases should be classified and counted, and hence what ‘reliable’ means. Indeed, STS has not only drawn attention to this fact but it has gone further—say, in the work of Harry Collins—to question whether even lab-based reliability is possible without some sort of collusion between researchers. In other words, the social accomplishment of ‘reliable knowledge’ is at least partly an expression of solidarity among members of the scientific community—a closing of the ranks, to put it less charitably.

An especially good example of the foregoing is what has been dubbed ‘Climategate’, which involved the release of e-mails from the UK’s main climate science research group in response to a journalist’s Freedom of Information request. While no wrongdoing was formally established, the e-mails did reveal the extent to which scientists from across the world effectively conspired to present the data for climate change in ways that obscured interpretive ambiguities, thereby pre-empting possible appropriations by so-called ‘climate change sceptics’. To be sure, from the symmetrical normative stance of classic STS, Climategate simply reveals the micro-processes by which a scientific consensus is normally and literally ‘manufactured’. Nevertheless, I doubt that Baker and Oreskes would turn to Climategate as their paradigm case of a ‘scientific consensus’. But why not?

The reason is that they refuse to acknowledge the labour that is involved in securing collective assent over any significant knowledge claim. As I observed in my original response (2017) to Baker and Oreskes, one might be forgiven for concluding from reading the likes of Merton, Habermas and others who see consensus formation as essential to science that an analogue of the ‘invisible hand’ is at play. On their telling, informed people draw the same conclusions from the same evidence. The actual social interaction of the scientists carries little cognitive weight in its own right. Instead it simply reinforces what any rational individual is capable of inferring for him- or herself in the same situation. At most, other people provide additional data points but they don’t alter the rules of right reasoning. Ironically, considering Baker and Oreskes’ allergic reaction to any talk of science as a market, this image of Homo scientificus to which they attach themselves seems rather like what they don’t like about Homo oeconomicus.

Climbing the Mountain

The contrasting view of consensus formation, which I uphold, is more explicitly ‘rhetorical’. It appeals to a mix of strategic and epistemic considerations in a setting where the actual interaction between the parties sets the parameters that define the scope of any possible consensus. Although Kuhn also valorized consensus as the glue that holds together normal science puzzle-solving, to his credit he clearly saw its rhetorical and even coercive character, from pedagogy to peer review. For this reason, Kuhn is the one who STSers still usually cite as a precursor on this matter. Unlike Baker and Oreskes, he didn’t resort to the fake philosophy of ‘veritism’ to cover up the fact that truth is ultimately a social achievement.

Finally, I suggested that there may be a way of redeeming ‘veritism’ from its current status of fake philosophy. Just because ‘truth’ is what W.B. Gallie originally called an ‘essentially contested concept’, it doesn’t follow that it is a mere chimera. But how to resolve truth’s palpable diversity of conceptions into a unified vision of reality? The clue to redemption is provided by Charles Sanders Peirce, whose idea of truth as the final scientific consensus informs Baker and Oreskes’ normative orientation. Peirce equated truth with the ultimate theory of everything, which amounts to putting everything in its place, thereby resolving all the internal disagreements of perception and understanding that are a normal feature of any active inquiry. It’s the moment when the blind men in the Hindu adage discover the elephant they’ve been groping and (Popper’s metaphor) the climbers coming from different directions reach the same mountain top.[1]

Peirce’s vision was informed by his understanding of John Duns Scotus, the early fourteenth-century scholastic who provided a deep metaphysical understanding of Augustine’s Platonic reading of the Biblical Fall of humanity. Our ‘fallen’ state consists in the dismemberment of our divine nature, something that is regularly on display in the variability of humans with regard to the virtues, all of which God displays to their greatest extent. For example, the most knowledgeable humans are not necessarily the most benevolent. The journey back to God is basically one of putting these pieces—the virtues—back together again into a coherent whole.

At the level of organized inquiry, we find a similar fragmentation of effort, as the language game of each science exaggerates certain modes of access to reality at the expense of others. To be sure, Kuhn and STS accept, if not outright valorise, disciplinary specialisation as a mark of the increasing ‘complexification’ of the knowledge system. Not surprisingly, perhaps, they also downplay the significance of ‘truth’ in the capital ‘T’ sense that Baker and Oreskes valorise. One obvious solution would be for defenders of ‘veritism’ to embrace an updated version of the ‘unified science’ project championed by the logical positivists, which aimed to integrate all forms of knowledge in terms of some common currency of intellectual exchange. (My earlier comments against ‘neo-feudal’ tendencies in academia should be seen in this light.) This would be the analogue of the original theological project of humanity reconstituting its divine nature, which Peirce secularised as the consensus theory of truth.

References

Baker, Erik and Naomi Oreskes. “Science as a Game, Marketplace or Both: A Reply to Steve Fuller.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 65-69.

Fuller, Steve. Social Epistemology. Bloomington, IN: Indiana University Press, 1988.

Fuller, Steve. “Recent Work in Social Epistemology.” American Philosophical Quarterly 33 (1996): 149-166.

Fuller, Steve. Knowledge Management Foundations. Woburn, MA: Butterworth-Heinemann, 2002.

Fuller, Steve. “What are You Playing At? On the Use and Abuse of Games in STS.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 39-49.

Reisch, George A. “Did Kuhn Kill Logical Positivism?” Philosophy of Science 58, no. 2 (1991): 264-277.

[1] One might also add the French word for ‘groping’, tâtonnement, common to Turgot’s and Walras’ understanding of how ‘general equilibrium’ is reached in the economy, as well as Teilhard de Chardin’s conception of how God comes to be fully realized in the cosmos.

Author Information: Erik Baker and Naomi Oreskes, Harvard University, ebaker@g.harvard.edu, oreskes@fas.harvard.edu

Baker, Erik and Naomi Oreskes. “Science as a Game, Marketplace or Both: A Reply to Steve Fuller.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 65-69.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Ks

Image credit: United Nations Photo, via flickr

Steve Fuller’s response to our criticism of the “game” analogy in science studies comes at an opportune time.[1] One of us has recently published an exhaustive review of decades of ExxonMobil’s climate change communications, finding that while the vast majority of the oil company’s internal documents acknowledged the reality of anthropogenic climate change, only a vanishing minority of its public-facing statements expressed the same position, instead sowing doubt about the same scientific consensus its own in-house scientists overwhelmingly accepted.[2] This case study provides a helpful illustration of why we continue to defend our initial position, despite criticism from Fuller in two principal areas: truth and consensus, and political economy.

Truth and Consensus

Fuller describes our veritism (our insistence on talking about truth outside of scare quotes) as “gratuitous.” This complaint is hardly novel, and was expressed perhaps most influentially by Richard Rorty.[3] The basic idea, in all of its guises, is that talk of truth furnishes philosophers and social scholars of science with no additional explanatory powers. “Truth” is instead a pointless metaphysical tack-on to an otherwise robust descriptive enterprise.

ExxonMobil’s sordid climate history provides a compelling counterexample to this assertion. Any answer to the question of why ExxonMobil continued to accept internally the same scientific claims it was disputing publicly (and that it had an obvious incentive to dispute) that does not invoke truth—or at least related notions such as evidence and empirical adequacy—will be convoluted and tendentious. The best explanation of this fact is simply that the scientific consensus on climate change is largely correct, which is to say true.[4] It was in ExxonMobil’s interest both to understand the truth and to deny it publicly. If, as Fuller maintains, truth-seeking is wholly extraneous to the scientific enterprise, it is almost impossible to understand why ExxonMobil’s own scientists would perform research and publish papers antithetical to the company’s political and financial interests.

Veritism also helps to explain two broader features of scientific consensus that Fuller emphasizes. First, its formation in a social process. Fuller thinks that he has caught us in a contradiction when he observes us talking about “building” consensus. Hardly. On the contrary, it is difficult to understand the (social) process of consensus-building in science without a sense of truth-seeking as a constitutive feature. If scientists did not orient themselves in relation to a commonly accessible physical and social world about which the truth can, at least to some degree, be known, why would they put so much effort into persuading their colleagues and trying to achieve consensus? Why would they even consider such a thing possible? Indeed, what would the project of science be?

Non-cognitive goals do not bear the same explanatory weight. As the history of climate change denial illustrates, taking consensus and consensus-formation seriously is not a prerequisite for scientists to attain fame and fortune (and even credibility, in some circles). For an example of the kinds of practices that result when communities do not regard truth-seeking as feasible in a given realm, one only has to consider the common American proscription of politics and religion as conversation topics at “mixed company” dinner parties.

Veritism also helps to explain why scientific consensus occasionally comes undone. Fuller clearly believes that “the life expectancy of the theories around which scientists congregate at any given time” is quite low. (Here we wonder about the nature of this assertion: Does Fuller, perhaps, think it is true? If so, why is truth-seeking constitutive of certain social-scientific disciplines like STS, but not the natural sciences? One marvels at the conviction of some scholars in science studies that claims to speaking “truth to power” are illegitimate unless they are the ones making them.) We think that the evidence is more equivocal.[5]

Yet even granting Fuller’s claims—and acknowledging that non-cognitive social forces can obstruct consensus formation or cause a consensus to come undone—it is hard to fathom why new evidence should ever cause a consensus to shift, and even harder to see how one could criticize an existing consensus, while banishing all talk of evidence, accuracy, correctness, and the notion that a conclusion can be shown to be true. Why would Earth scientists in the 1960s have bothered to re-open debate about continental drift? Fuller points out that evolutionary biologists have recently started to rethink some elements of the consensus around the twentieth-century modern synthesis, with some even calling for a new “extended evolutionary synthesis.” He clearly regards this development as salutary. But reference to evidence, facts, and truth—though often explicitly not intelligent design, it’s worth emphasizing—is at the core of the claims these scientists have made in promulgating and winning over some support for their theories.[6] If Fuller is right about science in general, he must, on pain of contradiction, find these same scientists whose work he welcomes to be in the grip of a profound and disturbing delusion.

The force of these two considerations together is why we do not and have never (contrary to what Fuller implies) held up consensus as a definitional criterion of truth, but rather as one of many possible heuristics to guide rational assessment (especially among non-experts) of the state of the science on a particular issue.[7] Other such heuristics include the existence of multiple methodological or disciplinary lines of evidence for the same conclusion. Or interested parties internally accepting the same scientific claims they publicly claim to doubt.

We think that developing grounds for such external assessment is crucial precisely because, as historians, we are acutely aware of the perishability of truth claims. How should we understand scientific knowledge as a basis for action and decision-making in light of this perishability? If parents only put their own children at risk by eschewing vaccination; if there were credible scientific evidence that vaccinations did cause autism; or if climate change were reversible, we might argue that deciding about these matters should be left to individuals. But none of these “if” conditions obtains. Intellectual positions that refuse to discriminate among these claims—or to discriminate only on social but not on cognitive grounds—put people at risk of real harm.

Do scientists have all the answers? Of course not. Should we have blind faith in science? Obviously not. Is the presence of expert consensus proof of truth? No again. But when scientists have come to agreement on a complicated matter like AIDS or evolution or climate change, it does indicate that they think that they have obtained some measure of truth about the issue, even if incomplete and subject to future revision. No climate scientist would claim that we know everything we could or should or might want to know about the climate system, but she would claim that we know enough to understand that if we don’t prevent further increases in atmospheric greenhouse gases, a lot of land will be lost and people will suffer. Consensus is a useful category of analysis because it tells us that scientific experts think that they have settled a matter, and that has to count for something. We are not arguing for a return to a naïve correspondence theory of truth—that would hardly be defensible given the past fifty years of work in philosophy of science—much less a naïve assumption that scientific experts are always right. But we are arguing for the need for a more vigorous re-inclusion of the cognitive dimensions of science in STS—including some notions of evidence, empirical adequacy, epistemic acceptability,[8] and truth without scare quotes.

Political Economy

The exigency of these considerations becomes even clearer in light of the concerns about economic and political power that we raised in our previous article. It is gratifying to see Fuller affirm the connection between the “game” view of science and neoliberal political economy for which we argued there. We hope that our colleagues who are sympathetic to Fuller’s epistemology but not his politics will attempt to identify where they think he has gone wrong in perceiving a relationship between the two.

Nonetheless, the case of ExxonMobil and climate change exemplifies the issue we take with Fuller’s assessment of the liberatory potential of the “free market thinkers” he extols. Fuller rejects the idea of justice-motivated market interventions (such as a carbon tax, as we emphasized in our previous article) as obscuring the “real price” and its mysterious “educative function,” and he thinks that our defense of the scientific consensus on climate change places us in thrall to the “status quo.” But it is Fuller’s supposedly alternative “normative agenda” that supports the status quo, offering in practice a defense of a multi-billion dollar corporation, whose long-time CEO is now a cabinet member loyally serving one of the most reactionary presidents in United States history. This is precisely the bizarre situation that we described in our previous article: “STS, which often sees itself as championing the subaltern, has now in many cases become the intellectual defender of those who would crush the aspirations of ordinary people.”

Fuller characterizes our position as “neo-feudal” (whatever that might mean), but it strains credulity to think that his position, capable of mustering little more than an apathetic shrug in the face of—for instance—the manipulation of science by oil money is really the one that stands up best to anti-democratic accretions of power. As we emphasized earlier, such inequalities—in income and wealth, and the political inequalities that subsequently ensue—are characteristic of capitalist economies,[9] and so it is perhaps unsurprising that the most loyal defenders of capitalism have not denied that fact but rather embraced and justified it. From Ludwig von Mises’ 1927 judgment that fascism was at one point a necessary evil to combat communism,[10] to the material and intellectual support of Wilhelm Röpke (the most influential of the “ordoliberals” that Fuller especially praises) for the South African apartheid regime,[11] to Robert Nozick’s influential right-libertarian condemnation of wealth redistribution and democracy alike in his Anarchy, State, and Utopia (1974),[12] to twenty-first-century attacks on democracy from Austrian economists at institutions like the Mercatus Center at George Mason University and the Ludwig von Mises Institute in Alabama,[13] the “freedom” that the neoliberals—and now Fuller—prize so dearly has typically meant the freedom of the few to oppress the many, or at least to place their needs and concerns above all others.

At least Fuller, with his modified ordoliberalism, seems to agree with us that some “normative agenda” must indeed be brought to bear in both economics and science. But two things are worth noting. First, what is such a normative agenda if not one of the “transcendent conceptions of truth and value” that Austrian wisdom is supposed to debunk? After all, the Bloorian analogy to which we initially drew attention was not just about “social constructivism” in general but specifically about Wittgenstein. And we read earlier in Fuller’s response his assessment of the Wittgensteinian “ordinary language” thinkers: they are “advertised as democratising but in practice they are parochialising.” Indeed. But with his later full-throated embrace of Bloor-cum-Mises, it looks awfully like he is trying to have his Wittgenstein and mock it too.

Second, it is odd to think that if a normative agenda is to be brought to bear on science, it ought to be of an utterly non-cognitive order, like neoliberal “freedom.” On the contrary, truth (along with evidence, facts, and other words science studies scholars tend to relegate to scare quotes) is a far more plausible choice for one of a potential plurality of regulative ideals for an enterprise that, after all, does have an obviously cognitive function. Ironically, Fuller’s insistence that freedom matters for science but truth does not reeks of the rigorous discrimination between the normative and the empirical that much of the best work in science studies has undermined. Both are necessary: besides the issue of de facto alignment with status quo power, once more we see in Fuller’s response how the adoption of the “game” view vitiates the critiques of its proponents even on their own terms. Fuller, despite his obvious sympathies, still refuses to say unequivocally that mainstream scientists should surrender to the superior arguments of their intelligent design opponents. He instead rests assured that the invisible hand of a well-constructed scientific marketplace will eventually accomplish the shift in opinion he wishes to see.

We invite Fuller to join us in abandoning the game or marketplace view of science and talking openly about truth. He will find it possible to criticize the “Darwinists” much more vociferously that way. But, of course, he would then run the risk of actually being wrong, instead of merely incoherent.

[1] Erik Baker and Naomi Oreskes. “It’s No Game: Post-Truth and the Obligations of Science Studies.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 1-10; Steve Fuller, “What are You Playing At? On the Use and Abuse of Games in STS.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 39-49.

[2] Geoffrey Supran and Naomi Oreskes, “Assessing ExxonMobil’s climate change communications (1977–2014),” Environmental Research Letters 12, no. 8 (2017).

[3] Richard Rorty, Contingency, Irony, and Solidarity (Cambridge University Press, 1989).

[4] Or that it conforms to the real (objective) world, to once again employ Helen Longino’s account of truth in her The Fate of Knowledge (Princeton University Press, 2001).

[5] Naomi Oreskes, “Trust in Science?” Tanner Lecture on Human Values, Princeton University, November 30, 2016; Faye Flam, “Why Scientific Consensus Is Worth Taking Seriously,” Bloomberg, May 22, 2017.

[6] See for instance Massimo Pigliucci, Evolution: The Extended Synthesis (MIT Press, 2010).

[7] Naomi Oreskes, “Trust in Science?” Tanner Lecture on Human Values, Princeton University, November 30, 2016; Naomi Oreskes, “The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong?” in Joseph F. C. DiMento and Pamela Doughman, eds., Climate Change: What It Means for Us, Our Children, and Our Grandchildren (MIT Press, 2007), pp. 65-99. This is where we depart from some scholars associated with pragmatism and Habermas. Readers will note that, once more contrary to Fuller’s implication, these scholars comprise only one of the many diverse and sometimes internally disputatious traditions we cited as inspiration in our earlier article.

[8] As suggested by Longino, Fate of Knowledge, 2001.

[9] The now-canonical study on this question is Thomas Piketty, Capital in the Twenty-First Century (Harvard/Belknap, 2013).

[10] Since the passage is controversial, we provide it in full and let the reader judge for themselves: “It cannot be denied that fascism and all similar efforts at dictatorship are full of the best intentions and that their intervention has, for the moment, rescued European civilization. The merit that fascism has thereby acquired for itself will go on living in history eternally. But the political program that has brought salvation in this moment is not of the sort whose sustained maintenance could promise success. Fascism was a makeshift of the moment; to consider it anything more would be a disastrous mistake.” Ludwig von Mises, Liberalismus, 1927 (translation E.B.).

[11] Quinn Slobodian, “The World Economy and the Color Line: Wilhelm Röpke, Apartheid, and the White Atlantic,” GHI Bulletin Supplement 10 (2014).

[12] Robert Nozick, Anarchy, State, and Utopia (Basic Books, 1974), especially chapters 8 and 9. Nozick himself retreated somewhat on both positions later in his life (in his The Examined Life, Simon and Schuster, 1990, ch. 25), but current Mont Pelerin Society president Peter Boettke still preaches ASU as exemplary of the Austrian tradition (https://goo.gl/8nqqPo).

[13] See for instance Bryan Caplan, Myth of the Rational Voter (Princeton University Press, 2007); Hans-Hermann Hoppe, Democracy: The God That Failed (Ludwig von Mises Institute, 2001); for a secondary-source account see Nancy MacLean, Democracy in Chains (Viking, 2017).

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “How to Study: Roam, Record and Rehearse.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 62-64.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Kf

Image credit: Jeffrey Smith, via flickr

My most successful study skill is one that I picked up very early in life—and perhaps is difficult to adopt after a certain age. Evidence of its success is that virtually everything I read appears to be hyperlinked to something in my memory. In practice, this means that I can randomly pick up a book and within fifteen minutes I can say something interesting about it—that is, more than summarize its contents. In this way, I make the book ‘my own’ in the sense of assigning it a place in my cognitive repertoire, to which I can then refer in the future.

There are three features to this skill. One is sheer exposure to many books. Another is taking notes on them. A third is integrating the notes into one’s mode of being, so that they function as a script in search of a performance. In sum, I give you the new 3 Rs: Roam, Record and Rehearse.

Roam

Let’s start with Roam. I’ve always understood reading as the most efficient means to manufacture equipment for the conduct of life. It is clearly more efficient than acquiring personal experience. But that’s a relatively superficial take on the situation. A better way of putting it is that reading should be seen as itself a form of personal experience. In the first instance, this means taking seriously the practice of browsing. By ‘browsing’ I mean forcing yourself to encounter a broader range of possibilities than you imagined was necessary for your reading purposes.

Those under the age of twenty may not appreciate that people used to have to occupy a dedicated physical space—somewhere in a bookshop or a library—to engage in ‘browsing’. It was an activity which forced encounters of works both ‘relevant’ and ‘irrelevant’ to one’s interests. Ideally, at least in terms of one’s own personal intellectual development, browsing would challenge the neatness of this distinction, as one came across books that turned out to be more illuminating than expected. To be sure, ‘browsing’ via computerized search engines still allows for that element of serendipity, as anyone experienced with Google or Amazon will know. Nevertheless, browser designers normally treat such a feature as a flaw in the programme that should be remedied in the next iteration, so that you end up finding more items like the ones you previously searched for.

As a teenager in New York City in the 1970s I spent my Sunday afternoons browsing through the two biggest used bookshops in Greenwich Village, Strand and Barnes & Noble. Generally speaking, these bookshops were organized according to broad topics, somewhat like a library. However, certain sections were also organized according to book publishers, which was very illuminating. In this way, I learned, so to speak, ‘to judge a book by its cover’. Publishing houses tend to have distinctive styles that attract specific sorts of authors. Thus I was alerted to differences between ‘left’ and ‘right’ in politics, as well as ‘high’ and ‘low’ in culture. Taken together, these differences offer dimensions for mapping knowledge in ways that cut across academic disciplinary boundaries.

There is a more general lesson here: If you spend a lot of time browsing, you tend to distrust the standard ways in which books—or information, more generally—are categorized.

Record

Back in New York I would buy about five used books at a time and read them immediately, annotating the margins of the pages. However, I quickly realized that this was not an effective way of ‘making the books my own’. So I shifted to keeping notebooks, in which I quite deliberately filtered what I read into something I found meaningful and to which I could return later. Invariably this practice led me to acquire idiosyncratic memories of whatever I read, since I was basically rewriting the books I read for my own purposes.

In my university days, I learned to call what I was doing ‘strong reading’. And I continue it to this day. Thus, in my academic writing, when I make formal reference to other works, I am usually acknowledging an inspiration—not citing an authority—for whatever claim I happen to be making. My aim is to take personal responsibility for what I say. I dislike the academic tendency to obscure the author’s voice in a flurry of scholarly references which simply repeat connections that could be made by a fairly standard Google search of the topic under discussion.

Rehearse

Now let’s move from Record to Rehearse. In a sense, rehearsal already begins when you shift from writing marginalia to full-blown notebook entries insofar as the latter forces you to reinvent what it is that you originally found compelling in the noteworthy text. Admittedly the cut-and-paste function in today’s computerized word processing programmes can undermine this practice, resulting in ‘notes’ that look more like marginal comments.

However, I engage in rehearsal even with texts of which I am the original author. You can keep yourself in a rehearsal mode by working on several pieces of writing (or creative projects) at once without bringing any of them to completion. In particular, you should stop working just when you are about to reach a climax in your train of thought. The next time you resume work you will then be forced to recreate the process that led you to that climactic point. Often you will discover that the one conclusion toward which you thought you had been heading turns out to have been a mirage. In fact, your ‘climax’ opens up a new chapter with multiple possibilities ahead.

Assuaging Alienation

I realize that some people will instinctively resist what I just prescribed. It seems to imply that no work should ever end, which is a nightmare for anyone who needs to produce something to a specific schedule in order to earn a living! And of course, I myself have authored more than twenty books. However, to my mind these works always end arbitrarily and even abruptly. (And my critics notice this!) Nevertheless, precisely because I do not see them as ‘finished’, they continue to live in my own mind as something to which I can always return. They become part of the repertoire that I always rehearse, which in turn defines the sort of person I am.

Perhaps a good way to see what I am recommending is as a solution to the problem of ‘alienation’ which Karl Marx famously identified. Alienation arises because industrial workers in capitalist regimes have no control over the products of their labour. Once the work is done, it is sold to people with whom they have no contact and over whom they have no control. However, alienation extends to intellectual life as well, as both journalists and academics need to write quite specific self-contained pieces that are targeted at clearly defined audiences. Under the circumstances, there is a tendency to write in a way that enables the author to detach him- or herself from, if not outright forget, what they have written once it is published. Often this tendency is positively spun by saying that a piece of writing makes its point better than its author could ever do in person.

My own view is quite the opposite. You should treat the texts you write more like dramatic scripts or musical scores than like artworks. They should be designed to be performed in many different ways, not least by the original composer. There should always be an element of incompleteness that requires someone to bring the text alive. In short, it should always be in need of rehearsal. Taken together, Roam, Record and Rehearse constitute a life strategy which has enabled me to integrate a wide range of influences into a dynamic source of inspiration and creativity that I understand to be very much my own.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “What are You Playing At? On the Use and Abuse of Games in STS.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 39-49.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3JC

Image credit: PGuiri, via flickr

What follows is an omnibus reply to various pieces that have been recently written in response to Fuller (2017), where I endorsed the post-truth idea of science as a game—an idea that I take to have been a core tenet of science and technology studies (STS) from its inception. The article is organized along conceptual lines, taking on Phillips (2017), Sismondo (2017) and Baker and Oreskes (2017) in roughly that order, which in turn corresponds to the degree of sympathy (from more to less) that the authors have with my thesis.

What It Means to Take Games Seriously

Amanda Phillips (2017) has written a piece that attempts to engage with the issues I raised when I encouraged STS to own the post-truth condition, which I take to imply that science in some deep sense is a ‘game’. What she writes is interesting but a bit odd, since in the end she basically proposes STS’s current modus operandi as if it were a new idea. But we’ve already seen Phillips’ future, and it doesn’t work. And she’s far from alone, as we shall see.

On the game metaphor itself, some things need to be said. First of all, I take it that Phillips largely agrees with me that the game metaphor is appropriate to science as it is actually conducted. Her disagreement is mainly with my apparent recommendation that STS follow suit. She raises the example of the introduction of the mortar kick into US football, which stays within the rules but threatens player safety. This leads her to conclude that the mortar kick debases/jeopardizes the spirit of the game. I may well agree with her on this point, which she wishes to present as akin to a normative stance appropriate to STS. However, I cannot tell for sure, just given the evidence she provides. I’d also like to see whether she would have disallowed past innovations that changed the play of the game—and, if so, which ones. In other words, I need a clearer sense of what she takes to be the ‘spirit of the game’, which involves inter alia judgements about tolerable risks over a period of time.

To be sure, judicial decisions normally have this character. Sometimes judges issue ‘landmark decisions’ which may invalidate previous judges’ rulings but, in any case, set a precedent on the basis of which future decisions should be made. Bringing it back to the case at hand, Phillips might say that football has been violating its spirit for a long time and that not only the mortar kick but also some earlier innovations should be prohibited. (In US Constitutional law, this would be like the history of judicial interpretation of citizen rights following the passage of the Fourteenth Amendment, at least starting with Brown v. Board of Education.) Of course, Phillips might instead give a more limited ruling that simply claims that the mortar kick is a step too far in the evolution of a game that has so far stayed within its spirit. Or she might simply judge the mortar kick to be within the spirit of the game, full stop. The arguments used to justify any of these decisions would be an exercise in elucidating what the ‘spirit of the game’ means.

I do not wish to be persnickety but to raise a point about what it means to think about science as a game. It means, at the very least, that science is prima facie an autonomous activity in the sense of having clear boundaries. Just as one knows when one is playing or not playing football, one knows when one is or is not doing science. Of course, the impact that this activity has on the rest of society is an open question. For example, once dedicated schools and degree programmes were developed to train people in ‘science’ (and here I mean the term in its academically broadest sense, Wissenschaft), especially once they acquired the backing and funding of nation-states, science became the source of ultimate epistemic authority in virtually all policy arenas. This was something that really only began to happen in earnest in the second half of the nineteenth century.

Similarly, one could imagine a future history of football, perhaps inspired by the modern Olympics, in which larger political units acquire an interest in developing the game as a way of resolving their own standing problems that might otherwise be handled with violence, sometimes on a mass scale. In effect, the Olympics would be a regularly scheduled, sublimated version of a world war. In that possible world, football—as one of the represented sports—would come to perform the functions for which armed conflict is now used. Here sports might take inspiration from the various science ‘races’ through which the Cold War was conducted. The race to the Moon, notably, was a highly successful version of this strategy in real life, as it did manage to avert a global nuclear war. Its intellectual residue is something that we still call ‘game theory’.

But Phillips’ own argument doesn’t plumb the depths of the game metaphor in this way. Instead she has recourse to something she calls, inspired by Latour (2004), a ‘collective multiplicity of critical thought’. She also claims that STS hasn’t followed Latour on this point. As a matter of fact, STS has followed Latour almost religiously on this point, which has resulted in a diffusion of critical impact. The field basically amplifies consensus where it exists, showing how it has been maintained, and amplifies dissent where it exists, similarly showing how it has been maintained. In short, STS is simply the empirical shadow of the fields it studies. That’s really all that Latour ever meant by ‘following the actors’.

People forget that this is a man who follows Michel Serres in seeing the parasite as a role model for life (Serres and Latour 1995; cf. Fuller 2000: chap. 7). If STS seems ‘critical’, that’s only an unintended consequence of the many policy issues involving science and technology which remain genuinely unresolved. STS adds nothing to settle the normative standing of these matters. It simply elaborates them and in the process perhaps reminds people of what they might otherwise wish to forget or sideline. It is not a worthless activity, but to accord it ‘critical’ status in any meaningful sense would be to do it too much justice, as Latour (2004) himself realizes.

Have STSers Always Been Cheese-Eating Surrender Monkeys?

Notwithstanding the French accent and the Inspector Clouseau demeanour, Latour’s modus operandi is reminiscent of ordinary language philosophy, that intellectual residue of British imperialism, which in the mid-twentieth century led many intelligent people to claim that the sophisticated English practiced in Oxbridge common rooms cut the world at the joints. Although Ernest Gellner (1959) provided the consummate take-down of the movement—to much fanfare in the media at the time—ordinary language philosophy persisted well into the 1980s, along the way influencing the style of ethnomethodology that filtered into STS. (Cue the corpus of Michael Lynch.)

Ontology was effectively reduced to a reification of the things that the people in the room were talking about and the relations predicated of them. And where the likes of JL Austin and PF Strawson spoke of ‘grammatical usage’, Latour and his followers refer to ‘semiotic network’, largely to avoid the anthropomorphism from which the ordinary language philosophers had suffered—alongside their ethnocentrism. Nevertheless, both the ordinary language folks and Latour think they’re doing an empirically informed metaphysics, even though they’re really just eavesdropping on themselves and the people in whose company they’ve been recently kept. Latour (1992) is the classic expression of STS self-eavesdropping, as our man Bruno meditates on the doorstop, the seatbelt, the key and other mundane technologies with which he can never quite come to terms, which results in his life becoming one big ethnomethodological ‘breaching experiment’.

All of this is a striking retreat from STS’s original commitment to the Edinburgh School’s ‘symmetry principle’, which was presented as an intervention in epistemology rather than ontology. In this guise STS was seen as threatening rather than merely complementing the established normative order because the symmetry principle, notwithstanding its vaunted neutrality, amounted to a kind of judgemental relativism, whereby ‘winning’ in science was downgraded to a contingent achievement, which could have been—and might still be—reversed under different circumstances. This was the spirit in which Shapin and Schaffer (1985) appeared to be such a radical book: it left the impression that the truth is no more than the binding outcome of a trial of people and things, that is, a ‘game’ in its full and demystified sense.

While I have always found this position problematic as an end in itself, it is nonetheless a great opening move to acquire an alternative normative horizon from that offered by the scientific establishment, since it basically amounts to an ‘equal time’ doctrine in an arena where opponents are too easily mischaracterised and marginalised, if not outright silenced by being ‘consigned to the dustbin of history’. Indeed, as Kuhn had recognized, the harder the science, the clearer the distinction between the discipline and its history.

However, this normative animus began to disappear from STS once Latour’s actor-network theory became the dominant school around the time of the Science Wars in the mid-1990s. It didn’t take long before STS had become supine to the establishment, exemplified by Latour’s (2004) uncritical acceptance of the phrase ‘artificially maintained controversies’, which no doubt meets with the approval of Erik Baker and Naomi Oreskes (Baker and Oreskes 2017). For my own part, when I first read Latour (2004), I was reminded of a phrase popular among American hawks in the same period, deriding France’s refusal to support the Iraq War: ‘cheese-eating surrender monkeys’.

Nevertheless, Latour’s surrender has stood STS in good stead, rendering it a reliable reflector of all that it observes. But make no mistake: Despite the radical sounding rhetoric of ‘missing masses’ and ‘parliament of things’, STS in the Latourian moment follows closely in the footsteps of ordinary language philosophy, which enthusiastically subscribed to the Wittgensteinian slogan of ‘leaving the world alone’. The difference is that whereas the likes of Austin and Strawson argued that our normal ways of speaking contain many more insights into metaphysics than philosophers had previously recognized, Latour et al. show that taking seriously what appears before our eyes makes the social world much more complicated than sociologists had previously acknowledged. But the lesson is the same in both cases: Carry on treating the world as you find it as ultimate reality—simply be more sensitive to its nuances.

It is worth observing that ordinary language philosophy and actor-network theory, notwithstanding their own idiosyncrasies and pretensions, share a disdain for a kind of philosophy or sociology, respectively, that adopts a ‘second order’ perspective on its subject matter. In other words, they were opposed to what Strawson called ‘revisionary metaphysics’, an omnibus phrase that was designed to cover both German idealism and logical positivism, the two movements that did the most to re-establish the epistemic authority of academics in the modern era. Similarly, Latour’s hostility to a science of sociology in the spirit of Emile Durkheim is captured in the name he chose for his chair at Sciences Po, Gabriel Tarde, the magistrate who moved into academia and challenged Durkheim’s ontologically closed sense of sociology every step of the way. In both cases, the moves are advertised as democratising but in practice they’re parochialising, since those hidden nuances and missing masses are supposedly provided by acts of direct acquaintance.

Cue Sismondo (2017), who as editor of the journal Social Studies of Science operates in a ‘Latour Lite’ mode: that is, all of the method but none of the metaphysics. First, he understands ‘post-truth’ in the narrowest possible context, namely, as proposed by those who gave the phenomenon its name—and negative spin—in making it Oxford Dictionaries’ 2016 word of the year. Of course, that’s in keeping with the Latourian dictum of ‘follow the actors’. But it is also to accept the actors’ categories uncritically, even if it means turning a blind eye to STS’s own role in promoting the epistemic culture responsible for ‘post-truth’, regardless of the normative value that one ultimately places on the word.

Interestingly, Sismondo is attacked on largely the same grounds by someone with whom I normally disagree, namely, Harry Collins (Collins, Evans, Weinel 2017). Collins and I agree that STS naturally lends itself to a post-truth epistemology, a fact that the field avoids at its peril. However, I believe that STS should own post-truth as a feature of the world that our field has helped to bring about—to be sure, not ex nihilo but by creatively deploying social and epistemological constructivism in an increasingly democratised context. In contrast, while Collins concedes that STS methods can be used even by our political enemies, he calls on STS to follow his own example by using its methods to demonstrate that ‘expert knowledge’ makes an empirical difference to the improvement of judgement in a variety of arenas. As for the politically objectionable uses of STS methods, here Collins and I agree that they are worth opposing but an adequate politics requires a different kind of work from STS research.

In response to all this, Sismondo retreats to STS’s official self-understanding as a field immersed in the detailed practices of all that it studies—as opposed to those post-truth charlatans who simply spin words to create confusion. But the distinction is facile and perhaps disingenuous. The clearest manifestation that STS attends to the details of technoscientific practice is the complexity—or, less charitably put, complication—of its own language. The social world comes to be populated by so many entities, properties and relations simply because STS research is largely in the business of naming and classifying things, with an empiricist’s bias towards treating things that appear different as really different. It is this discursive strategy that results in the richer ontology that one typically finds in STS articles, which in turn is supposed to leave the reader with the sense that the STS researcher has a deeper and more careful understanding of what s/he has studied. But in the end, it is just a discursive strategy, not a mathematical proof. There is a serious debate to be had about whether the field’s dedication to detail—‘ontological inventory work’—is truly illuminating or obfuscating. However, it does serve to establish a kind of ‘expertise’ for STS.

Why Science Has Never Had Need for Consensus—But Got It Anyway

My double question to anyone who wishes to claim a ‘scientific consensus’ on anything is on whose authority and on what basis such a statement is made. Even that great defender of science, Karl Popper, regarded scientific facts as no more than conventions, agreed mainly to mark temporary settlements in an ongoing journey. Seen with a rhetorician’s eye, a ‘scientific consensus’ is demanded only when scientific authorities feel that they are under threat in a way that cannot be dismissed by the usual peer review processes. ‘Science’ after all advertises itself as the freest inquiry possible, which suggests a tolerance for many cross-cutting and even contradictory research directions, all compatible with the current evidence and always under review in light of further evidence. And to a large extent, science does demonstrate this spontaneous embrace of pluralism, albeit with the exact options on the table subject to change. To be sure, some options are pursued more vigorously than others at any given moment. Scientometrics can be used to chart the trends, which may make the ‘science watcher’ seem like a stock market analyst. But this is more ‘wisdom of crowds’ stuff than a ‘scientific consensus’, which is meant to sound more authoritative and certainly less transient.

Indeed, invocations of a ‘scientific consensus’ become most insistent on matters which have two characteristics, which are perhaps necessarily intertwined but, in any case, take science outside of its juridical comfort zone of peer review: (1) they are inherently interdisciplinary; (2) they are policy-relevant. Think climate change, evolution, anything to do with health. A ‘scientific consensus’ is invoked on just these matters because they escape the ‘normal science’ terms in which peer review operates. To a defender of the orthodoxy, the dissenters appear to be ‘changing the rules of science’ simply in order to make their case seem more plausible. However, from the standpoint of the dissenter, the orthodoxy is artificially restricting inquiry in cases where reality doesn’t fit its disciplinary template, and so perhaps a change in the rules of science is not so out of order.

Here it is worth observing that defenders of the ‘scientific consensus’ tend to operate on the assumption that to give the dissenters any credence would be tantamount to unleashing mass irrationality in society. Fortified by the fledgling (if not pseudo-) science of ‘memetics’, they believe that an anti-scientific latency lurks in the social unconscious. It is a susceptibility typically fuelled by religious sentiments, which the dissenters threaten to awaken, thereby reversing all that modernity has achieved.

I can’t deny that there are hints of such intent in the ranks of dissenters. One notorious example is the Discovery Institute’s ‘Wedge document’, which projected the erosion of ‘methodological naturalism’ as the ‘thin edge of the wedge’ to return the US to its Christian origins. Nevertheless, the paranoia of the orthodoxy underestimates the ability of modernity—including modern science—to absorb and incorporate the dissenters, and come out stronger for it. The very fact that intelligent design theory has translated creationism into the currency of science by leaving the Bible entirely out of its argumentation strategy should be seen as evidence for this point. And now Darwinists need to try harder to defeat it, as we see in their increasingly sophisticated refutations, which often end with Darwinists effectively conceding points and simply claiming to make them in their own way, without having to invoke an ‘intelligent designer’.

In short, my main objection to the concept of a ‘scientific consensus’ is that it is epistemologically oversold. It is clearly meant to carry more normative force than whatever happens to be the cutting edge of scientific fashion this week. Yet, what is the life expectancy of the theories around which scientists congregate at any given time?  For example, if the latest theory says that the planet is due for climate meltdown within fifty years, what happens if the climate theories themselves tend to go into meltdown after about fifteen years? To be sure, ‘meltdown’ is perhaps too strong a word. The data are likely to remain intact and even be enriched, but their overall significance may be subject to radical change. Moreover, this fact may go largely unnoticed by the general public, as long as the scientists who agreed to the last consensus are also the ones who agree to the next consensus. In that case, they can keep straight their collective story of how and why the change occurred—an orderly transition in the manner of dynastic succession.

What holds this story together—and is the main symptom of epistemic overselling of scientific consensus—is a completely gratuitous appeal to the ‘truth’ or ‘truth-seeking’ (aka ‘veritism’) as somehow underwriting this consensus. Baker and Oreskes’ (2017) argument is propelled by this trope. Yet, interestingly early on even they refer to ‘attempts to build public consensus about facts or values’ (my emphasis). This turn of phrase comports well with the normal constructivist sense of what consensus is. Indeed, there is nothing wrong with trying to align public opinion with certain facts and values, even on the grand scale suggested by the idea of a ‘scientific consensus’. This is the stuff of politics as usual. However, whatever consensus is thereby forged—by whatever means and across whatever range of opinion—has no ‘natural’ legitimacy. Moreover, it neither corresponds to some pre-existent ideal of truth nor is composed of some invariant ‘truth stuff’ (cf. Fuller 1988: chap. 6). It is a social construction, full stop. If the consensus is maintained over time and space, it will not be due to its having been blessed and/or guided by ‘Truth’; rather it will be the result of the usual social processes and associated forms of resource mobilization—that is, a variety of external factors which at crucial moments impinge on the play of any game.

The idea that consensus enjoys some epistemologically more luminous status in science than in other parts of society (where it might be simply dismissed as ‘groupthink’) is an artefact of the routine rewriting of history that scientists do to rally their troops. As Kuhn long ago observed, scientists exaggerate the degree of doctrinal agreement to give forward momentum to an activity that is ultimately held together simply by common patterns of disciplinary acculturation and day-to-day work practices. Nevertheless, Kuhn’s work helped to generate the myth of consensus. Indeed, in my Cambridge days studying with Mary Hesse (circa 1980), the idea that an ultimate consensus on the right representation of reality might serve as a transcendental condition for the possibility of scientific inquiry was highly touted, courtesy of the then fashionable philosopher Jürgen Habermas, who flattered his Anglophone fans by citing Charles Sanders Peirce as his source for the idea. Yet even back then I was of a different mindset.

Under the influence of Foucault, Derrida and social constructivism (which were circulating in more underground fashion), as well as what I had already learned about the history of science (mainly as a student of Loren Graham at Columbia), I deemed the idea of a scientific consensus to reflect a secular ‘god of the gaps’ style of wishful thinking. Indeed I devoted a chapter of my Ph.D. to the ‘elusiveness’ of consensus in science, which was the only part of the thesis that I incorporated in Social Epistemology (Fuller 1988: chap. 9). It is thus very disappointing to see Baker and Oreskes continuing to peddle Habermas’ brand of consensus mythology, even though for many of us it had fallen stillborn from the press more than three decades ago.

A Gaming Science Is a Free Science

Baker and Oreskes (2017) are correct to pick up on the analogy drawn by David Bloor between social constructivism’s scepticism with regard to transcendent conceptions of truth and value and the scepticism that the Austrian school of economics (and most economists generally) show to the idea of a ‘just price’, understood as some normative ideal that real prices should be aiming toward. Indeed, there is more than an analogy here. Alfred Schutz, teacher of Peter Berger and Thomas Luckmann of The Social Construction of Reality fame, was himself a member of the Mises Circle in Vienna, having been trained by Mises in the law faculty. Market transactions provided the original template for the idea of ‘social construction’, a point that is already clear in Adam Smith.

However, in criticizing Bloor’s analogy, Baker and Oreskes miss a trick: When the Austrians and other economists talk about the normative standing of real prices, their understanding of the market is somewhat idealized; hence, one needs a phrase like ‘free market’ to capture it. This point is worth bearing in mind because it amounts to a competing normative agenda to the one that Baker and Oreskes are promoting. With the slow ascendancy of neo-liberalism over the second half of the twentieth century, that normative agenda became clear—namely, to make markets free so that real prices can prevail.

Here one needs to imagine that in such a ‘free market’ there is a direct correspondence between increasing the number of suppliers in the market and the greater degree of freedom afforded to buyers, as that not only drives the price down but also forces buyers to refine their choice. This is the educative function performed by markets, an integral social innovation in terms of the Enlightenment mission advanced by Smith, Condorcet and others in the eighteenth century (Rothschild 2002). Markets were thus promoted as efficient mechanisms that encourage learning, with the ‘hand’ of the ‘invisible hand’ best understood as that of an instructor. In this context, ‘real prices’ are simply the actual empirical outcomes of markets under ‘free’ conditions. Contra Baker and Oreskes, they don’t correspond to some a priori transcendental realm of ‘just prices’.

However, markets are not ‘free’ in the requisite sense as long as the state strategically blocks certain spontaneous transactions, say, by placing tariffs on suppliers other than the officially licensed ones or by allowing a subset of market agents to organize in ways that enable them to charge tariffs to outsiders who want access. In other words, the free market is not simply about lower taxes and fewer regulations. It is also about removing subsidies and preventing cartels. It is worth recalling that Adam Smith wrote The Wealth of Nations as an attack on ‘mercantilism’, an economic system not unlike the ‘socialist’ ones that neo-liberalism has tried to overturn with its appeal to the ‘free market’. In fact, one of the early neo-liberals (aka ‘ordo-liberals’), Alexander Rüstow, coined the phrase ‘liberal interventionism’ in the 1930s for the strong role that he saw for the state in freeing the marketplace, say, by breaking up state-protected monopolies (Jackson 2010).

Capitalists defend private ownership only as part of the commodification of capital, which, in turn, allows trade to occur. Capitalists are not committed to an especially land-oriented approach to private property, as in feudalism, which restricts the flow of capital through, say, inheritance laws in order to stabilise the social order. To be sure, capitalism requires that traders know who owns what at any given time, which in turn supports clear ownership signals. However, capitalism flourishes only if the traders are inclined to part with what they already own to acquire something else. After all, wealth cannot grow if capital doesn’t circulate. The state thus serves capitalism by removing the barriers that lead people to accept too easily their current status as an adaptive response to situations that they regard as unchangeable. Thus, liberalism, the movement most closely aligned with the emerging capitalist sensibility, was originally called ‘radical’—from the Latin for ‘root’—as it promised to organize society according to humanity’s fundamental nature, the full expression of which was impeded by existing regimes, which failed to allow everyone what by the twentieth century would be called ‘equal opportunity’ in life (Halevy 1928).

I offer this more rounded picture of the normative agenda of free market thinkers because Baker and Oreskes engage in a rhetorical sleight of hand associated with the capitalists’ original foes, the mercantilists. It involves presuming that the public interest is best served by state authorised producers (of whatever). Indeed, when one speaks of the early modern period in Europe as the ‘Age of Absolutism’, this elision of the state and the public is an important part of what is meant. True to its Latin roots, the ‘state’ is the anchor of stability, the stationary frame of reference through which everything else is defined. Here one immediately thinks of Newton, but metaphysically more relevant was Hobbes whose absolutist conception of the state aimed to incarnate the Abrahamic deity in human form, the literal body of which is the body politic.

Setting aside the theology, mercantilism in practice aimed to reinvent and rationalize the feudal order for the emerging modern age, one in which ‘industry’ was increasingly understood not as a means to an end but as an end in itself—specifically, not simply as a means to extract the fruits of nature but as an expression of human flourishing. Thus, political boundaries on maps started to be read as the skins of superorganisms, which by the nineteenth century came to be known as ‘nation-states’. In that case, the ruler’s job was not simply to keep the peace over what had been largely self-managed tracts of land, but rather to ‘organize’ them so that they functioned as a single productive unit, what we now call the ‘economy’, whose first theorization was as ‘physiocracy’. The original mercantilist policy involved royal licenses that assigned exclusive rights to a ‘domain’ understood in a sense that was not restricted to tracts of land, but extended to wealth production streams in general. To be sure, over time these rights were attenuated into privileges and subsidies, which allowed for some competition but typically on an unequal basis.

In contrast, capitalism’s ‘liberal’ sensibility was about repurposing the state’s power to prevent the rise of new ‘path dependencies’ in the form of, say, a monopoly in trade based on an original royal license renewed in perpetuity, which would only serve to reduce the opportunities of successive generations. It was an explicitly anti-feudal policy. The final frontier to this policy sensibility is academia, which has long been acknowledged to be structured in terms of what Robert Merton called the principle of ‘cumulative advantage’, the sources of which are manifold and, to a large extent, mutually reinforcing. To list just a few: (1) state licenses issued to knowledge producers, starting with the Charter of the Royal Society of London, which provided a perpetually protected space for a self-organizing community to do as they will within originally agreed constraints; (2) Kuhn-style paradigm-driven normal science, which yields to a successor paradigm only out of internal collapse, not external competition; (3) the anchoring effect of early academic training on subsequent career advancement, ranging from jobs to grants; (4) the evaluation of academic work in terms of a peer review system whose remit extends beyond catching errors to judging relevance to preferred research agendas; (5) the division of knowledge into ‘fields’ and ‘domains’, which supports a florid cartographic discourse of ‘boundary work’ and ‘boundary maintenance’.

The list could go on, but the point is clear to anyone with eyes to see: Even in these neo-liberal times, academia continues to present its opposition to neo-liberalism in the sort of neo-feudal terms that would have pleased a mercantilist. Lineage is everything, whatever the source of ancestral entitlement. Merton’s own attitude towards academia’s multiple manifestations of ‘cumulative advantage’ seemed to be one of ambivalence, though as a sociologist he probably wasn’t sufficiently critical of the pseudo-liberal spin put on cumulative advantage as the expression of the knowledge system’s ‘invisible hand’ at work—which seems to be Baker and Oreskes’ default position as defenders of the scientific status quo. However, their own Harvard colleague Alex Csiszar (2017) has recently shown that Merton recognized that the introduction of scientometrics in the 1960s—in the form of the Science Citation Index—made academia susceptible to a tendency that he had already identified in bureaucracies, ‘goal displacement’, whereby once a qualitative goal is operationalized in terms of a quantitative indicator, there is an incentive to work toward the indicator, regardless of its actual significance for achieving the original goal. Thus, high citation counts become surrogates for ‘truth’ or some other indicator-transcendent goal. In this real sense, what is at best the wisdom of the scientific crowd is routinely mistaken for an epistemically luminous scientific consensus.

As I pointed out in Fuller (2017), which initiated this recent discussion of ‘science as game’, a great virtue of the game idea is its focus on the reversibility of fortunes, as each match matters, not only to the objective standing of the rival teams but also to their subjective sense of momentum. Yet, from their remarks about intelligent design theory, Baker and Oreskes appear to believe that the science game ends sooner than it really does: After one or even a series of losses, a team should simply pack it in and declare defeat. Here it is worth recalling that the existence of atoms and the relational character of space-time—two theses associated with Einstein’s revolution in physics—were controversial if not deemed defunct for most of the nineteenth century, notwithstanding the problems that were acknowledged to exist in fully redeeming the promises of the Newtonian paradigm. Indeed, for much of his career, Ernst Mach was seen as a crank who focussed too much on the lost futures of past science, yet after the revolutions in relativity and quantum mechanics his reputation flipped and he became known for his prescience. Thus, the Vienna Circle that spawned the logical positivists named its public association, the Ernst Mach Society, in his honour.

Similarly, intelligent design may well be one of those ‘controversial if not defunct’ views that will be integral to the next revolution in biology, since even biologists whom Baker and Oreskes probably respect admit that there are serious explanatory gaps in the Neo-Darwinian synthesis.[1] That intelligent design advocates have improved the scientific character of their arguments from their creationist origins—which I am happy to admit—is not something for the movement’s opponents to begrudge. Rather it shows that they learn from their mistakes, as any good team does when faced with a string of losses. Thus, one should expect an improvement in their performance. Admittedly these matters become complicated in the US context, since the Constitution’s separation of church and state has been interpreted in recent times to imply the prohibition of any teaching material that is motivated by specifically religious interests, as if the Founding Fathers were keen on institutionalising the genetic fallacy! Nevertheless, this blinkered interpretation has enabled the likes of Baker and Oreskes to continue arguing with earlier versions of ‘intelligent design creationism’, very much like generals whose expertise lies in having fought the previous war. But luckily, an increasingly informed public is not so easily fooled by such epistemically rearguard actions.

References

Baker, Erik and Naomi Oreskes. “It’s No Game: Post-Truth and the Obligations of Science Studies.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 1-10.

Collins, Harry, Robert Evans, and Martin Weinel. “STS as Science or Politics?” Social Studies of Science 47, no. 4 (2017): 580–586.

Csiszar, Alex. “From the Bureaucratic Virtuoso to Scientific Misconduct: Robert K. Merton, Eugene Garfield, and Goal Displacement in Science.” Paper delivered to the annual meeting of the History of Science Society. Toronto: 9-12 November 2017.

Fuller, Steve. Social Epistemology. Bloomington IN: Indiana University Press, 1988.

Fuller, Steve. Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press, 2000.

Fuller, Steve. “Is STS All Talk and No Walk?” EASST Review 36, no. 1 (2017): https://easst.net/article/is-sts-all-talk-and-no-walk/.

Gellner, Ernest. Words and Things. London: Routledge, 1959.

Halevy, Elie. The Growth of Philosophic Radicalism. London: Faber and Faber, 1928.

Jackson, Ben. “At the Origins of Neo-Liberalism: The Free Economy and the Strong State, 1930-47.” Historical Journal 53, no. 1 (2010): 129-51.

Latour, Bruno. “Where Are the Missing Masses? The Sociology of a Few Mundane Artefacts.” In Shaping Technology/Building Society, edited by Wiebe E. Bijker and John Law, 225-258. Cambridge, MA: MIT Press, 1992.

Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30, no. 2 (2004): 225–248.

Phillips, Amanda. “Playing the Game in a Post-Truth Era.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 54-56.

Rothschild, Emma. Economic Sentiments. Cambridge MA: Harvard University Press, 2002.

Serres, Michel, and Bruno Latour. Conversations on Science, Culture, and Time. Ann Arbor: University of Michigan Press, 1995.

Shapin, Steven, and Simon Schaffer. Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. Princeton: Princeton University Press, 1985.

Sismondo, Sergio. “Not a Very Slippery Slope: A Reply to Fuller.” EASST Review 36, no. 2 (2017): https://easst.net/article/not-a-very-slippery-slope-a-reply-to-fuller/.

[1] Surprisingly for people who claim to be historians of science, Baker and Oreskes appear to have fallen for the canard that only Creationists mention Darwin’s name when referring to contemporary evolutionary theory. In fact, it is common practice among historians and philosophers of science to invoke Darwin to refer to his specifically purposeless conception of evolution, which remains the default metaphysical position of contemporary biologists—albeit one maintained with increasing conceptual and empirical difficulty. Here it is worth observing that such leading lights of the Discovery Institute as Stephen Meyer and Paul Nelson were trained in the history and philosophy of science, as was I.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “Counterfactuals in the White House:  A Glimpse into Our Post-Truth Times.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 1-3.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3z1

Image credit: OZinOH, via flickr

May Day 2017 was filled with reporting and debate over a set of comments that US President Trump made while visiting Andrew Jackson’s mansion, the ‘Hermitage’, now a tourist attraction in Nashville, Tennessee. Trump said that had Jackson been deployed, he could have averted the US Civil War. Since Jackson had died about fifteen years before the war started, Trump was clearly making a counterfactual claim. However, it is an interesting claim—not least for its responses, which were fast and furious. They speak to the nature of our times. Let me start with the academic response and then move to how I think about the matter. A helpful compendium of the responses is here.

Jim Grossman of the American Historical Association spoke for all by claiming that Trump ‘is starting from the wrong premise’. Presumably, Grossman means that slavery was so bad that a war over it was inevitable. However well he meant this comment, it feeds into the anti-expert attitude of our post-truth era. Grossman seems to disallow Trump from imagining that preserving the American union was more important than the end of slavery—even though that was exactly how the issue was framed to most Americans 150 years ago. Scholarship is of course mainly about explaining why things happened the way they did. However, there is a temptation to conclude that it necessarily had to happen that way. Today’s post-truth culture attempts to curb this tendency. In any case, once the counterfactual door is open to other possible futures, historical expertise becomes more contestable, perhaps even democratised. The result may be that even when non-experts reach the same conclusion as the experts, it may be for importantly different reasons.

Who was Andrew Jackson?

Andrew Jackson is normally regarded as one of the greatest US presidents, whose face is regularly seen on the twenty-dollar banknote. He was the seventh president and the first one who was truly ‘self-made’ in the sense that he was not well educated, let alone oriented towards Europe in his tastes, as his six predecessors had been. It would not be unfair to say that he was the first President who saw a clear difference between being American and being European. In this respect, his self-understanding was rather like that of the heroes of Latin American independence. He was also given to an impulsive manner of public speech, not so different from the current occupant of the Oval Office.

Jackson volunteered at age thirteen to fight in the War of Independence from Britain, which was the first of many times when he was ready to fight for his emerging nation. Over the past fifty years much attention has been paid to his decimation of Native American populations at various points in his career, both military and presidential, as well as his support for slavery. (Howard Zinn was largely responsible, at least at a popular level, for this recent shift in focus.) To make a long and complicated story short, Jackson was rather consistent in acting in ways that served to consolidate American national identity, even if that meant sacrificing the interests of various groups at various times—groups that arguably never recovered from the losses inflicted on them.

Perhaps Jackson’s most lasting positive legacy has been the current two-party—Democratic/Republican—political structure. Each party cuts across class lines and geographical regions. This achievement is now easy to underestimate—as the Democratic Party is now ruing. The US founding fathers were polarized about the direction that the fledgling nation should take, precisely along these divides. The struggles began in Washington’s first administration between his treasury secretary Alexander Hamilton and his secretary of state Thomas Jefferson—and they persisted. Both Hamilton and Jefferson oriented themselves to Europe, Hamilton more in terms of what to imitate and Jefferson in terms of what to avoid. Jackson effectively performed a Gestalt switch, in which Europe was no longer the frame of reference for defining American domestic and foreign policy.

Enter Trump

Now enter Donald Trump, who says Jackson could have averted the Civil War, which by all counts was one of the bloodiest in US history, with an estimated 750,000 lives lost. Jackson was clearly a unionist but also clearly a slaveholder. So one imagines that Jackson would have preserved the union by allowing slaveholding, perhaps in terms of some version of the ‘states’ rights’ or ‘popular sovereignty’ doctrine, which gives states discretion over how they deal with economic matters. It’s not unreasonable that Jackson could have pulled that off, especially because the economic arguments for allowing slavery were stronger back then than they are now normally remembered.

The Nobel Prize-winning economic historian Robert Fogel explored this point quite thoroughly more than forty years ago in his controversial Time on the Cross. It is not a perfect work, and its academic criticism is quite instructive about how one might better explore a counterfactual world in which slavery would have persisted in the US until it was no longer economically viable. Unfortunately, the politically sensitive nature of the book’s content has discouraged any follow-up. When I first read Fogel, I concluded that over time the price of slaves would come to approximate that of free labour considered over a worker’s lifetime. In other words, a slave economy would evolve into a capitalist economy without violence in the interim. Slaveholders would simply respond to changing market conditions. So, the moral question is whether it would have made sense to extend slavery for a few years before it merged with what the capitalist world took to be an acceptable way of being, namely, wage labour. Fogel added ballast to his argument by observing that slaves tended to live longer and healthier lives than freed Blacks.

Moreover, Fogel’s counterfactual was not fanciful. Some version of the states’ rights doctrine was the dominant sentiment in the US prior to the Civil War. However, there were many different versions of the doctrine, which could not rally around a common spokesperson. This allowed the clear unitary voice for abolition emanating from the Christian dissenter community in the Northern states to exert enormous force, not least on the sympathetic and ambitious country lawyer, Abraham Lincoln, who became their somewhat unlikely champion. Thus, 1860 saw a Republican Party united around Lincoln fend off three opponents (two rival Democrats and a Constitutional Unionist) in the general election.

None of this is to deny that Lincoln was right in what he did. I would have acted similarly. Moreover, he probably did not anticipate just how bloody the Civil War would turn out to be—and the lasting scars it would leave on the American psyche. But the question on the table is not whether the Civil War was a fair price to pay to end slavery. Rather, the question is whether the Civil War could have been avoided—and, more to the point of Trump’s claim, whether Jackson would have been the man to do it. The answer is perhaps yes. The price would have been the extension of slavery for a certain period, until it became economically unviable for the slaveholders.

It is worth observing that Fogel’s main target seemed to be Marxists who argued that slavery made no economic sense and that it persisted in the US only because of racist ideology. Fogel’s response was that slaveholders probably were racist, but such a de facto racist economic regime would not have persisted as long as it did, had both sides not benefitted from the arrangement. In other words, the success of the anti-slavery campaign was largely about the triumph of aspirational ideas over actual economic conditions. If anything, its success testifies to the level of risk that abolitionists were willing to assume on behalf of American society for the emancipation of slaves. Alexis de Tocqueville was only the most famous of the foreign commentators on the US to notice this at the time. Abolitionists were the proactionaries of their day with regard to risk. And this is how we should honour them now.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Steve Fuller is Auguste Comte Professor of social epistemology at the University of Warwick. His latest book is The Academic Caesar: University Leadership is Hard (Sage).

Shortlink: http://wp.me/p1Bfg0-3yV

Note: The following piece appeared under the title of ‘Free speech is not just for academics’ in the 27 April 2017 issue of Times Higher Education and is reprinted here with permission from the publisher.

Image credit: barnyz, via flickr

Is free speech an academic value? We might think that the self-evident answer is yes. Isn’t that why “no platforming” controversial figures usually leaves the campus involved with egg on its face, amid scathing headlines about political correctness gone mad?

However, a completely different argument, bearing no taint of political correctness, can be made against universities’ need to defend free speech. It is what I call the “Little Academia” argument. It plays on the academic impulse to retreat to a parochial sense of self-interest in the face of external pressures.

The master of this argument for the last 30 years has been Stanley Fish, the American postmodern literary critic. Fish became notorious in the 1980s for arguing that a text means whatever its community of readers thinks it means. This seemed wildly radical, but it quickly became clear – at least to more discerning readers – that Fish’s communities were gated.

This seems to be Fish’s view of the university more generally. In a recent article in the US Chronicle of Higher Education, “Free Speech Is Not an Academic Value”, written in response to the student protests at Middlebury College against the presence of Charles Murray, a political economist who takes race seriously as a variable in assessing public policies, Fish criticised the college’s administrators for thinking of themselves as “free-speech champions”. This, he said, represented a failure to observe the distinction between students’ curricular and extracurricular activities. Regarding the latter, he said, administrators’ correct role was merely as “managers of crowd control”.

In other words, a university is a gated community designed to protect the freedom only of those who wish to pursue discipline-based inquiries: namely, professional academics. Students only benefit when they behave as apprentice professional academics. They are generously permitted to organise extracurricular activities, but the university’s official attitude towards these is neutral, as long as they do not disrupt the core business of the institution.

The basic problem with this picture is that it supposes that academic freedom is a more restricted case of generalised free expression. The undertow of Fish’s argument is that students are potentially freer to express themselves outside of campus.

To be sure, this may be how things look to Fish, who hails from a country that already had a Bill of Rights protecting free speech roughly a century before the concept of academic freedom was imported to unionise academics in the face of aggressive university governing boards. However, when Wilhelm von Humboldt invented the concept of academic freedom in early 19th century Germany, it was in a country that lacked generalised free expression. For him, the university was the crucible in which free expression might be forged as a general right in society. Successive generations engaged in the “freedom to teach” and the “freedom to learn”, the two becoming of equal and reciprocal importance.

On this view, freedom is the ultimate transferable skill embodied by the education process. The ideal received its definitive modern formulation in the sociologist Max Weber’s famous 1917 lecture to new graduate students, “Science as a Vocation”.

What is most striking about it to modern ears is his stress on the need for teachers to make space for learners in their classroom practice. This means resisting the temptation to impose their authority, which may only serve to disarm the student of any choice in what to believe. Teachers can declare and justify their own choice, but must also identify the scope for reasonable divergence.

After all, if academic research is doing its job, even the most seemingly settled fact may well be overturned in the fullness of time. Students need to be provided with some sense of how that might happen as part of their education to be free.

Being open about the pressure points in the orthodoxy is complicated because, in today’s academia, certain heterodoxies can turn into their own micro-orthodoxies through dedicated degree programmes and journals. These have become the lightning rods for debates about political correctness.

Nevertheless, the bottom line is clear. Fish is wrong. Academic freedom is not just for professional academics but for students as well. The honourable tradition of independent student reading groups and speaker programmes already testifies to this. And in some contexts they can count towards satisfying formal degree requirements. Contra Little Academia, the “extra” in extracurricular should be read as intending to enhance a curriculum that academics themselves admit is neither complete nor perfect.

Of course, students may not handle extracurricular events well. But that is not about some non-academic thing called ‘crowd control’. It is simply an expression of the growth pains of students learning to be free.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Steve Fuller holds the Auguste Comte Chair in Social Epistemology at the University of Warwick. He is the author of more than twenty books, the next of which is Post-Truth: Knowledge as a Power Game (Anthem).

Shortlink: http://wp.me/p1Bfg0-3yI

Note: This article originally appeared in the EASST Review 36(1) April 2017 and is republished below with the permission of the editors.

Image credit: Hans Luthart, via flickr

STS talks the talk without ever quite walking the walk. Case in point: post-truth, the offspring that the field has always been trying to disown, not least in the latest editorial of Social Studies of Science (Sismondo 2017). Yet STS can be fairly credited with having both routinized in its own research practice and set loose on the general public—if not outright invented—at least four common post-truth tropes:

1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.

2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.

3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.

4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties.

What is perhaps most puzzling from a strictly epistemological standpoint is that STS recoils from these tropes whenever such politically undesirable elements as climate change deniers or creationists appropriate them effectively for their own purposes. Normally, that would be considered ‘independent corroboration’ of the tropes’ validity, as these undesirables demonstrate that one need not be a politically correct STS practitioner to wield the tropes effectively. It is almost as if STS practitioners have forgotten the difference between the contexts of discovery and justification in the philosophy of science. The undesirables are actually helping STS by showing the robustness of its core insights as people who otherwise overlap little with the normative orientation of most STS practitioners turn them to what they regard as good effect (Fuller 2016).

Of course, STSers are free to contest any individual or group that they find politically undesirable—but on political, not methodological grounds. We should not be quick to fault undesirables for ‘misusing’ our insights, let alone apologize for, self-censor, or otherwise restrict our own application of those insights, a move that lay at the heart of Latour’s (2004) notorious mea culpa. On the contrary, we should defer to Oscar Wilde and admit that imitation is the sincerest form of flattery. STS has enabled the undesirables to raise their game, and if STSers are too timid to function as partisans in their own right, they could try to help the desirables raise their game in response.

Take the ongoing debates surrounding the teaching of evolution in the US. The fact that intelligent design theorists are not as easily defeated on scientific grounds as young earth creationists means that when their Darwinist opponents leverage their epistemic authority on the former as if they were the latter, the politics of the situation becomes naked. Unlike previous creationist cases, the judgement in Kitzmiller v. Dover Area School Board (in which I served as an expert witness for the defence) dispensed with the niceties of the philosophy of science and resorted to the brute sociological fact that most evolutionists do not consider intelligent design theory science. That was enough for the Darwinists to win the battle, but will it win them the war? Those who have followed the ‘evolution’ of creationism into intelligent design might conclude that Darwinists act in bad faith by not taking seriously that intelligent design theorists are trying to play by the Darwinists’ rules. Indeed, more than ten years after Kitzmiller, there is little evidence that Americans are any friendlier to Darwin than they were before the trial. And with Trump in the White House…?

Thus, I find it strange that in his editorial on post-truth, Sismondo extols the virtues of someone who seems completely at odds with the STS sensibility, namely, Naomi Oreskes, the Harvard science historian turned scientific establishment publicist. A signature trope of her work is the pronounced asymmetry between the natural emergence of a scientific consensus and the artificial attempts to create scientific controversy (e.g. Oreskes and Conway 2011). It is precisely this ‘no science before its time’ sensibility that STS has been spending the last half-century trying to oppose. Even if Oreskes’ political preferences tick all the right boxes from the standpoint of most STSers, she has methodologically cheated by presuming that the ‘truth’ of some matter of public concern most likely lies with what most scientific experts think at a given time. Indeed, Sismondo’s passive-aggressive agonizing comes from his having to reconcile his intuitive agreement with Oreskes and the contrary thrust of most STS research.

This example speaks to the larger issue addressed by post-truth, namely, distrust in expertise, to which STS has undoubtedly contributed by circumscribing the prerogatives of expertise. Sismondo fails to see that even politically mild-mannered STSers like Harry Collins and Sheila Jasanoff do this in their work. Collins is mainly interested in expertise as a form of knowledge that other experts recognize as that form of knowledge, while Jasanoff is clear that the price that experts pay for providing trusted input to policy is that they do not engage in imperial overreach. Neither position approximates the much more authoritative role that Oreskes would like to see scientific expertise play in policy making. From an STS standpoint, those who share Oreskes’ normative orientation to expertise should consider how to improve science’s public relations, including proposals for how scientists might be socially and materially bound to the outcomes of policy decisions taken on the basis of their advice.

When I say that STS has forced both established and less than established scientists to ‘raise their game’, I am alluding to what may turn out to be STS’s most lasting contribution to the general intellectual landscape, namely, to think about science as literally a game—perhaps the biggest game in town. Consider football, where matches typically take place between teams with divergent resources and track records. Of course, the team with the better resources and track record is favoured to win, but sometimes it loses and that lone event can destabilise the team’s confidence, resulting in further losses and even defections. Each match is considered a free space where for ninety minutes the two teams are presumed to be equal, notwithstanding their vastly different histories. Francis Bacon’s ideal of the ‘crucial experiment’, so eagerly adopted by Karl Popper, relates to this sensibility as definitive of the scientific attitude. And STS’s ‘social constructivism’ simply generalizes this attitude from the lab to the world. Were STS to embrace its own sensibility much more wholeheartedly, it would finally walk the walk.

References

Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective, December 2016: http://wp.me/p1Bfg0-3nx.

Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30, no. 2 (2004): 225–248.

Oreskes, Naomi, and Erik M. Conway. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury, 2011.

Sismondo, Sergio. “Post-Truth?” Social Studies of Science 47, no. 1 (2017): 3-6.