Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “What are You Playing At? On the Use and Abuse of Games in STS.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 39-49.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3JC


What follows is an omnibus reply to various pieces that have been recently written in response to Fuller (2017), where I endorsed the post-truth idea of science as a game—an idea that I take to have been a core tenet of science and technology studies (STS) from its inception. The article is organized along conceptual lines, taking on Phillips (2017), Sismondo (2017) and Baker and Oreskes (2017) in roughly that order, which in turn corresponds to the degree of sympathy (from more to less) that the authors have with my thesis.

What It Means to Take Games Seriously

Amanda Phillips (2017) has written a piece that attempts to engage with the issues I raised when I encouraged STS to own the post-truth condition, which I take to imply that science in some deep sense is a ‘game’. What she writes is interesting but a bit odd, since in the end she basically proposes STS’s current modus operandi as if it were a new idea. But we’ve already seen Phillips’ future, and it doesn’t work. She’s far from alone in this, as we shall see.

On the game metaphor itself, some things need to be said. First of all, I take it that Phillips largely agrees with me that the game metaphor is appropriate to science as it is actually conducted. Her disagreement is mainly with my apparent recommendation that STS follow suit. She raises the introduction of the mortar kick into US football, which stays within the rules but threatens player safety. This leads her to conclude that the mortar kick debases/jeopardizes the spirit of the game. I may well agree with her on this point, which she wishes to present as akin to a normative stance appropriate to STS.  However, I cannot tell for sure, just given the evidence she provides. I’d also like to see whether she would have disallowed past innovations that changed the play of the game—and, if so, which ones. In other words, I need a clearer sense of what she takes to be the ‘spirit of the game’, which involves inter alia judgements about tolerable risks over a period of time.

To be sure, judicial decisions normally have this character. Sometimes judges issue ‘landmark decisions’ which may invalidate previous judges’ rulings but, in any case, set a precedent on the basis of which future decisions should be made. Bringing it back to the case at hand, Phillips might say that football has been violating its spirit for a long time and that not only should the mortar kick be prohibited but so too some other earlier innovations. (In US Constitutional law, this would be like the history of judicial interpretation of citizen rights following the passage of the Fourteenth Amendment, at least starting with Brown v. Board of Education.) Of course, Phillips might instead give a more limited ruling that simply claims that the mortar kick is a step too far in the evolution of the game, which so far has stayed within its spirit. Or, she might simply judge the mortar kick to be within the spirit of the game, full stop. The arguments used to justify any of these decisions would be an exercise in elucidating what the ‘spirit of the game’ means.

I do not wish to be persnickety but to raise a point about what it means to think about science as a game. It means, at the very least, that science is prima facie an autonomous activity in the sense of having clear boundaries. Just as one knows when one is playing or not playing football, one knows when one is or is not doing science. Of course, the impact that this has on the rest of society is an open question. For example, once dedicated schools and degree programmes were developed to train people in ‘science’ (and here I mean the term in its academically broadest sense, Wissenschaft), especially once they acquired the backing and funding of nation-states, science became the source of ultimate epistemic authority in virtually all policy arenas. This was something that really only began to happen in earnest in the second half of the nineteenth century.

Similarly, one could imagine a future history of football, perhaps inspired by the modern Olympics, in which larger political units acquire an interest in developing the game as a way of resolving their own standing problems that might otherwise be handled with violence, sometimes on a mass scale. In effect, the Olympics would be a regularly scheduled, sublimated version of a world war. In that possible world, football—as one of the represented sports—would come to perform the functions for which armed conflict is now used. Here sports might take inspiration from the various science ‘races’ through which the Cold War was conducted—notably the race to the Moon—which proved a highly successful version of this strategy in real life, as it did manage to avert a global nuclear war. Its intellectual residue is something that we still call ‘game theory’.

But Phillips’ own argument doesn’t plumb the depths of the game metaphor in this way. Instead she has recourse to something she calls, inspired by Latour (2004), a ‘collective multiplicity of critical thought’. She also claims that STS hasn’t followed Latour on this point. As a matter of fact, STS has followed Latour almost religiously on this point, which has resulted in a diffusion of critical impact. The field basically amplifies consensus where it exists, showing how it has been maintained, and amplifies dissent where it exists, similarly showing how it has been maintained. In short, STS is simply the empirical shadow of the fields it studies. That’s really all that Latour ever meant by ‘following the actors’.

People forget that this is a man who follows Michel Serres in seeing the parasite as a role model for life (Serres and Latour 1995; cf. Fuller 2000: chap. 7). If STS seems ‘critical’, that’s only an unintended consequence of the many policy issues involving science and technology which remain genuinely unresolved. STS adds nothing to settle the normative standing of these matters. It simply elaborates them and in the process perhaps reminds people of what they might otherwise wish to forget or sideline. It is not a worthless activity, but to deem it ‘critical’ in any meaningful sense would be to do it too much justice, as Latour (2004) himself realizes.

Have STSers Always Been Cheese-Eating Surrender Monkeys?

Notwithstanding the French accent and the Inspector Clouseau demeanour, Latour’s modus operandi is reminiscent of ordinary language philosophy, that intellectual residue of British imperialism, which in the mid-twentieth century led many intelligent people to claim that the sophisticated English practiced in Oxbridge common rooms cut the world at the joints. Although Ernest Gellner (1959) provided the consummate take-down of the movement—to much fanfare in the media at the time—ordinary language philosophy persisted well into the 1980s, along the way influencing the style of ethnomethodology that filtered into STS. (Cue the corpus of Michael Lynch.)

Ontology was effectively reduced to a reification of the things that the people in the room were talking about and the relations predicated of them. And where the likes of JL Austin and PF Strawson spoke of ‘grammatical usage’, Latour and his followers refer to ‘semiotic network’, largely to avoid the anthropomorphism from which the ordinary language philosophers had suffered—alongside their ethnocentrism. Nevertheless, both the ordinary language folks and Latour think they’re doing an empirically informed metaphysics, even though they’re really just eavesdropping on themselves and the people in whose company they’ve been recently kept. Latour (1992) is the classic expression of STS self-eavesdropping, as our man Bruno meditates on the doorstop, the seatbelt, the key and other mundane technologies with which he can never quite come to terms, which results in his life becoming one big ethnomethodological ‘breaching experiment’.

All of this is a striking retreat from STS’s original commitment to the Edinburgh School’s ‘symmetry principle’, which was presented as an intervention in epistemology rather than ontology. In this guise STS was seen as threatening rather than merely complementing the established normative order because the symmetry principle, notwithstanding its vaunted neutrality, amounted to a kind of judgemental relativism, whereby ‘winning’ in science was downgraded to a contingent achievement, which could have been—and might still be—reversed under different circumstances. This was the spirit in which Shapin and Schaffer (1985) appeared to be such a radical book: It had left the impression that the truth is no more than the binding outcome of a trial of people and things: that is, a ‘game’ in its full and demystified sense.

While I have always found this position problematic as an end in itself, it is nonetheless a great opening move to acquire an alternative normative horizon from that offered by the scientific establishment, since it basically amounts to an ‘equal time’ doctrine in an arena where opponents are too easily mischaracterised and marginalised, if not outright silenced by being ‘consigned to the dustbin of history’. Indeed, as Kuhn had recognized, the harder the science, the clearer the distinction between the discipline and its history.

However, this normative animus began to disappear from STS once Latour’s actor-network theory became the dominant school around the time of the Science Wars in the mid-1990s. It didn’t take long before STS had become supine to the establishment, exemplified by the uncritical acceptance in Latour (2004) of the phrase ‘artificially maintained controversies’, which no doubt meets with the approval of Erik Baker and Naomi Oreskes (Baker and Oreskes 2017). For my own part, when I first read Latour (2004), I was reminded of a phrase circulating in American commentary of the same period, in the context of France’s refusal to support the Iraq War: ‘cheese-eating surrender monkeys’.

Nevertheless, Latour’s surrender has stood STS in good stead, rendering it a reliable reflector of all that it observes. But make no mistake: Despite the radical-sounding rhetoric of ‘missing masses’ and ‘parliament of things’, STS in the Latourian moment follows closely in the footsteps of ordinary language philosophy, which enthusiastically subscribed to the Wittgensteinian slogan of ‘leaving the world alone’. The difference is that whereas the likes of Austin and Strawson argued that our normal ways of speaking contain many more insights into metaphysics than philosophers had previously recognized, Latour et al. show that taking seriously what appears before our eyes makes the social world much more complicated than sociologists had previously acknowledged. But the lesson is the same in both cases: Carry on treating the world as you find it as ultimate reality—simply be more sensitive to its nuances.

It is worth observing that ordinary language philosophy and actor-network theory, notwithstanding their own idiosyncrasies and pretensions, share a disdain for a kind of philosophy or sociology, respectively, that adopts a ‘second order’ perspective on its subject matter. In other words, they were opposed to what Strawson called ‘revisionary metaphysics’, an omnibus phrase that was designed to cover both German idealism and logical positivism, the two movements that did the most to re-establish the epistemic authority of academics in the modern era. Similarly, Latour’s hostility to a science of sociology in the spirit of Emile Durkheim is captured in the name he chose for his chair at Sciences Po, Gabriel Tarde, the magistrate who moved into academia and challenged Durkheim’s ontologically closed sense of sociology every step of the way. In both cases, the moves are advertised as democratising but in practice they’re parochialising, since those hidden nuances and missing masses are supposedly provided by acts of direct acquaintance.

Cue Sismondo (2017), who as editor of the journal Social Studies of Science operates in a ‘Latour Lite’ mode: that is, all of the method but none of the metaphysics. First, he understands ‘post-truth’ in the narrowest possible context, namely, as proposed by those who gave the phenomenon its name—and negative spin—in making it Oxford Dictionaries’ 2016 word of the year. Of course, that’s in keeping with the Latourian dictum of ‘follow the actors’. But it is also to accept the actors’ categories uncritically, even if it means turning a blind eye to STS’s own role in promoting the epistemic culture responsible for ‘post-truth’, regardless of the normative value that one ultimately places on the word.

Interestingly, Sismondo is attacked on largely the same grounds by someone with whom I normally disagree, namely, Harry Collins (Collins, Evans and Weinel 2017). Collins and I agree that STS naturally lends itself to a post-truth epistemology, a fact that the field denies at its peril. However, I believe that STS should own post-truth as a feature of the world that our field has helped to bring about—to be sure, not ex nihilo but by creatively deploying social and epistemological constructivism in an increasingly democratised context. In contrast, while Collins concedes that STS methods can be used even by our political enemies, he calls on STS to follow his own example by using its methods to demonstrate that ‘expert knowledge’ makes an empirical difference to the improvement of judgement in a variety of arenas. As for the politically objectionable uses of STS methods, here Collins and I agree that they are worth opposing, but an adequate politics requires a different kind of work from STS research.

In response to all this, Sismondo retreats to STS’s official self-understanding as a field immersed in the detailed practices of all that it studies—as opposed to those post-truth charlatans who simply spin words to create confusion. But the distinction is facile and perhaps disingenuous. The clearest manifestation that STS attends to the details of technoscientific practice is the complexity—or, less charitably put, complication—of its own language. The social world comes to be populated by so many entities, properties and relations simply because STS research is largely in the business of naming and classifying things, with an empiricist’s bias towards treating things that appear different as really different. It is this discursive strategy that results in the richer ontology that one typically finds in STS articles, which in turn is supposed to leave the reader with the sense that the STS researcher has a deeper and more careful understanding of what s/he has studied. But in the end, it is just a discursive strategy, not a mathematical proof. There is a serious debate to be had about whether the field’s dedication to detail—‘ontological inventory work’—is truly illuminating or obfuscating. However, it does serve to establish a kind of ‘expertise’ for STS.

Why Science Has Never Had Need for Consensus—But Got It Anyway

My double question to anyone who wishes to claim a ‘scientific consensus’ on anything is on whose authority and on what basis such a statement is made. Even that great defender of science, Karl Popper, regarded scientific facts as no more than conventions, agreed mainly to mark temporary settlements in an ongoing journey. Seen with a rhetorician’s eye, a ‘scientific consensus’ is demanded only when scientific authorities feel that they are under threat in a way that cannot be dismissed by the usual peer review processes. ‘Science’ after all advertises itself as the freest inquiry possible, which suggests a tolerance for many cross-cutting and even contradictory research directions, all compatible with the current evidence and always under review in light of further evidence. And to a large extent, science does demonstrate this spontaneous embrace of pluralism, albeit with the exact options on the table subject to change. To be sure, some options are pursued more vigorously than others at any given moment. Scientometrics can be used to chart the trends, which may make the ‘science watcher’ seem like a stock market analyst. But this is more ‘wisdom of crowds’ stuff than a ‘scientific consensus’, which is meant to sound more authoritative and certainly less transient.

Indeed, invocations of a ‘scientific consensus’ become most insistent on matters which have two characteristics, which are perhaps necessarily intertwined but, in any case, take science outside of its juridical comfort zone of peer review: (1) they are inherently interdisciplinary; (2) they are policy-relevant. Think climate change, evolution, anything to do with health. A ‘scientific consensus’ is invoked on just these matters because they escape the ‘normal science’ terms in which peer review operates. To a defender of the orthodoxy, the dissenters appear to be ‘changing the rules of science’ simply in order to make their case seem more plausible. However, from the standpoint of the dissenter, the orthodoxy is artificially restricting inquiry in cases where reality doesn’t fit its disciplinary template, and so perhaps a change in the rules of science is not so out of order.

Here it is worth observing that defenders of the ‘scientific consensus’ tend to operate on the assumption that to give the dissenters any credence would be tantamount to unleashing mass irrationality in society. Fortified by the fledgling (if not pseudo-) science of ‘memetics’, they believe that an anti-scientific latency lurks in the social unconscious. It is a susceptibility typically fuelled by religious sentiments, which the dissenters threaten to awaken, thereby reversing all that modernity has achieved.

I can’t deny that there are hints of such intent in the ranks of dissenters. One notorious example is the Discovery Institute’s ‘Wedge document’, which projected the erosion of ‘methodological naturalism’ as the ‘thin edge of the wedge’ to return the US to its Christian origins. Nevertheless, the paranoia of the orthodoxy underestimates the ability of modernity—including modern science—to absorb and incorporate the dissenters, and come out stronger for it. The very fact that intelligent design theory has translated creationism into the currency of science by leaving the Bible entirely out of its argumentation strategy should be seen as evidence for this point. And now Darwinists need to try harder to defeat it, as we see in their increasingly sophisticated refutations, which often end with Darwinists effectively conceding points, admitting that they have their own way of making their opponents’ points without having to invoke an ‘intelligent designer’.

In short, my main objection to the concept of a ‘scientific consensus’ is that it is epistemologically oversold. It is clearly meant to carry more normative force than whatever happens to be the cutting edge of scientific fashion this week. Yet, what is the life expectancy of the theories around which scientists congregate at any given time?  For example, if the latest theory says that the planet is due for climate meltdown within fifty years, what happens if the climate theories themselves tend to go into meltdown after about fifteen years? To be sure, ‘meltdown’ is perhaps too strong a word. The data are likely to remain intact and even be enriched, but their overall significance may be subject to radical change. Moreover, this fact may go largely unnoticed by the general public, as long as the scientists who agreed to the last consensus are also the ones who agree to the next consensus. In that case, they can keep straight their collective story of how and why the change occurred—an orderly transition in the manner of dynastic succession.

What holds this story together—and is the main symptom of the epistemic overselling of scientific consensus—is a completely gratuitous appeal to the ‘truth’ or ‘truth-seeking’ (aka ‘veritism’) as somehow underwriting this consensus. Baker and Oreskes’ (2017) argument is propelled by this trope. Yet, interestingly, early on even they refer to ‘attempts to build public consensus about facts or values’ (my emphasis). This turn of phrase comports well with the normal constructivist sense of what consensus is. Indeed, there is nothing wrong with trying to align public opinion with certain facts and values, even on the grand scale suggested by the idea of a ‘scientific consensus’. This is the stuff of politics as usual. However, whatever consensus is thereby forged—by whatever means and across whatever range of opinion—has no ‘natural’ legitimacy. Moreover, it neither corresponds to some pre-existent ideal of truth nor is composed of some invariant ‘truth stuff’ (cf. Fuller 1988: chap. 6). It is a social construction, full stop. If the consensus is maintained over time and space, it will not be due to its having been blessed and/or guided by ‘Truth’; rather it will be the result of the usual social processes and associated forms of resource mobilization—that is, a variety of external factors which at crucial moments impinge on the play of any game.

The idea that consensus enjoys some epistemologically more luminous status in science than in other parts of society (where it might be simply dismissed as ‘groupthink’) is an artefact of the routine rewriting of history that scientists do to rally their troops. As Kuhn long ago observed, scientists exaggerate the degree of doctrinal agreement to give forward momentum to an activity that is ultimately held together simply by common patterns of disciplinary acculturation and day-to-day work practices. Nevertheless, Kuhn’s work helped to generate the myth of consensus. Indeed, in my Cambridge days studying with Mary Hesse (circa 1980), the idea that an ultimate consensus on the right representation of reality might serve as a transcendental condition for the possibility of scientific inquiry was highly touted, courtesy of the then fashionable philosopher Jürgen Habermas, who flattered his Anglophone fans by citing Charles Sanders Peirce as his source for the idea. Yet even back then I was of a different mindset.

Under the influence of Foucault, Derrida and social constructivism (which were circulating in more underground fashion), as well as what I had already learned about the history of science (mainly as a student of Loren Graham at Columbia), I deemed the idea of a scientific consensus to reflect a secular ‘god of the gaps’ style of wishful thinking. Indeed, I devoted a chapter of my Ph.D. to the ‘elusiveness’ of consensus in science, which was the only part of the thesis that I incorporated in Social Epistemology (Fuller 1988: chap. 9). It is thus very disappointing to see Baker and Oreskes continuing to peddle Habermas’ brand of consensus mythology, even though for many of us it had fallen stillborn from the press more than three decades ago.

A Gaming Science Is a Free Science

Baker and Oreskes (2017) are correct to pick up on the analogy drawn by David Bloor between social constructivism’s scepticism with regard to transcendent conceptions of truth and value and the scepticism that the Austrian school of economics (and most economists generally) show toward the idea of a ‘just price’, understood as some normative ideal that real prices should be aiming toward. Indeed, there is more than an analogy here. Alfred Schutz, teacher of Peter Berger and Thomas Luckmann of The Social Construction of Reality fame, was himself a member of the Mises Circle in Vienna, having been trained by Mises in the law faculty. Market transactions provided the original template for the idea of ‘social construction’, a point that is already clear in Adam Smith.

However, in criticizing Bloor’s analogy, Baker and Oreskes miss a trick: When the Austrians and other economists talk about the normative standing of real prices, their understanding of the market is somewhat idealized; hence, one needs a phrase like ‘free market’ to capture it. This point is worth bearing in mind because it amounts to a competing normative agenda to the one that Baker and Oreskes are promoting. With the slow ascendancy of neo-liberalism over the second half of the twentieth century, that normative agenda became clear—namely, to make markets free so that real prices can prevail.

Here one needs to imagine that in such a ‘free market’ there is a direct correspondence between increasing the number of suppliers in the market and the greater degree of freedom afforded to buyers, as that not only drives the price down but also forces buyers to refine their choice. This is the educative function performed by markets, an integral social innovation in terms of the Enlightenment mission advanced by Smith, Condorcet and others in the eighteenth century (Rothschild 2002). Markets were thus promoted as efficient mechanisms that encourage learning, with the ‘hand’ of the ‘invisible hand’ best understood as that of an instructor. In this context, ‘real prices’ are simply the actual empirical outcomes of markets under ‘free’ conditions. Contra Baker and Oreskes, they don’t correspond to some a priori transcendental realm of ‘just prices’.

However, markets are not ‘free’ in the requisite sense as long as the state strategically blocks certain spontaneous transactions, say, by placing tariffs on suppliers other than the officially licensed ones or by allowing a subset of market agents to organize in ways that enable them to charge tariffs to outsiders who want access. In other words, the free market is not simply about lower taxes and fewer regulations. It is also about removing subsidies and preventing cartels. It is worth recalling that Adam Smith wrote The Wealth of Nations as an attack on ‘mercantilism’, an economic system not unlike the ‘socialist’ ones that neo-liberalism has tried to overturn with its appeal to the ‘free market’. In fact, one of the early neo-liberals (aka ‘ordo-liberals’), Alexander Rüstow, coined the phrase ‘liberal interventionism’ in the 1930s for the strong role that he saw for the state in freeing the marketplace, say, by breaking up state-protected monopolies (Jackson 2010).

Capitalists defend private ownership only as part of the commodification of capital, which in turn, allows trade to occur. Capitalists are not committed to an especially land-oriented approach to private property, as in feudalism, which through, say, inheritance laws restricts the flow of capital in order to stabilise the social order. To be sure, capitalism requires that traders know who owns what at any given time, which in turn supports clear ownership signals. However, capitalism flourishes only if the traders are inclined to part with what they already own to acquire something else. After all, wealth cannot grow if capital doesn’t circulate. The state thus serves capitalism by removing the barriers that lead people to accept too easily their current status as an adaptive response to situations that they regard as unchangeable. Thus, liberalism, the movement most closely aligned with the emerging capitalist sensibility, was originally called ‘radical’—from the Latin for ‘root’—as it promised to organize society according to humanity’s fundamental nature, the full expression of which was impeded by existing regimes, which failed to allow everyone what by the twentieth century would be called ‘equal opportunity’ in life (Halevy 1928).

I offer this more rounded picture of the normative agenda of free market thinkers because Baker and Oreskes engage in a rhetorical sleight of hand associated with the capitalists’ original foes, the mercantilists. It involves presuming that the public interest is best served by state-authorised producers (of whatever). Indeed, when one speaks of the early modern period in Europe as the ‘Age of Absolutism’, this elision of the state and the public is an important part of what is meant. True to its Latin roots, the ‘state’ is the anchor of stability, the stationary frame of reference through which everything else is defined. Here one immediately thinks of Newton, but metaphysically more relevant was Hobbes, whose absolutist conception of the state aimed to incarnate the Abrahamic deity in human form, the literal body of which is the body politic.

Setting aside the theology, mercantilism in practice aimed to reinvent and rationalize the feudal order for the emerging modern age, one in which ‘industry’ was increasingly understood as not a means to an end but an end in itself—specifically, not simply a means to extract the fruits of nature but an expression of human flourishing. Thus, political boundaries on maps started to be read as the skins of superorganisms, which by the nineteenth century came to be known as ‘nation-states’. In that case, the ruler’s job was not simply to keep the peace over what had been largely self-managed tracts of land, but rather to ‘organize’ them so that they functioned as a single productive unit, what we now call the ‘economy’, whose first theorization was as ‘physiocracy’. The original mercantilist policy involved royal licenses that assigned exclusive rights to a ‘domain’ understood in a sense that was not restricted to tracts of land, but extended to wealth production streams in general. To be sure, over time these rights were attenuated into privileges and subsidies, which allowed for some competition but typically on an unequal basis.

In contrast, capitalism’s ‘liberal’ sensibility was about repurposing the state’s power to prevent the rise of new ‘path dependencies’ in the form of, say, a monopoly in trade based on an original royal license renewed in perpetuity, which would only serve to reduce the opportunities of successive generations. It was an explicitly anti-feudal policy. The final frontier to this policy sensibility is academia, which has long been acknowledged to be structured in terms of what Robert Merton called the principle of ‘cumulative advantage’, the sources of which are manifold and, to a large extent, mutually reinforcing. To list just a few: (1) state licenses issued to knowledge producers, starting with the Charter of the Royal Society of London, which provided a perpetually protected space for a self-organizing community to do as they will within originally agreed constraints; (2) Kuhn-style paradigm-driven normal science, which yields to a successor paradigm only out of internal collapse, not external competition; (3) the anchoring effect of early academic training on subsequent career advancement, ranging from jobs to grants; (4) the evaluation of academic work in terms of a peer review system whose remit extends beyond catching errors to judging relevance to preferred research agendas; (5) the division of knowledge into ‘fields’ and ‘domains’, which supports a florid cartographic discourse of ‘boundary work’ and ‘boundary maintenance’.

The list could go on, but the point is clear to anyone with eyes to see: Even in these neo-liberal times, academia continues to present its opposition to neo-liberalism in the sort of neo-feudal terms that would have pleased a mercantilist. Lineage is everything, whatever the source of ancestral entitlement. Merton’s own attitude towards academia’s multiple manifestations of ‘cumulative advantage’ seemed to be one of ambivalence, though as a sociologist he probably wasn’t sufficiently critical of the pseudo-liberal spin put on cumulative advantage as the expression of the knowledge system’s ‘invisible hand’ at work—which seems to be Baker and Oreskes’ default position as defenders of the scientific status quo. However, their own Harvard colleague, Alex Csiszar (2017), has recently shown that Merton recognized that the introduction of scientometrics in the 1960s—in the form of the Science Citation Index—made academia susceptible to a tendency that he had already identified in bureaucracies, ‘goal displacement’, whereby once a qualitative goal is operationalized in terms of a quantitative indicator, there is an incentive to work toward the indicator, regardless of its actual significance for achieving the original goal. Thus high citation counts cumulatively become surrogates for ‘truth’ or some other indicator-transcendent goal. In this real sense, what is at best the wisdom of the scientific crowd is routinely mistaken for an epistemically luminous scientific consensus.

As I pointed out in Fuller (2017), which initiated this recent discussion of ‘science as game’, a great virtue of the game idea is its focus on the reversibility of fortunes, as each match matters, not only to the objective standing of the rival teams but also to their subjective sense of momentum. Yet, from their remarks about intelligent design theory, Baker and Oreskes appear to believe that the science game ends sooner than it really does: After one or even a series of losses, a team should simply pack it in and declare defeat. Here it is worth recalling that the existence of atoms and the relational character of space-time—two theses associated with Einstein’s revolution in physics—were controversial if not deemed defunct for most of the nineteenth century, notwithstanding the problems that were acknowledged to exist in fully redeeming the promises of the Newtonian paradigm. Indeed, for much of his career, Ernst Mach was seen as a crank who focussed too much on the lost futures of past science, yet after the revolutions in relativity and quantum mechanics his reputation flipped and he became known for his prescience. Thus, the Vienna Circle that spawned the logical positivists was named in Mach’s honour.

Similarly, intelligent design may well be one of those ‘controversial if not defunct’ views that will be integral to the next revolution in biology, since even biologists whom Baker and Oreskes probably respect admit that there are serious explanatory gaps in the Neo-Darwinian synthesis.[1] That intelligent design advocates have improved the scientific character of their arguments from their creationist origins—which I am happy to admit—is not something for the movement’s opponents to begrudge. Rather, it shows that they learn from their mistakes, as any good team does when faced with a string of losses. Thus, one should expect an improvement in their performance. Admittedly these matters become complicated in the US context, since the Constitution’s separation of church and state has been interpreted in recent times to imply the prohibition of any teaching material that is motivated by specifically religious interests, as if the Founding Fathers were keen on institutionalising the genetic fallacy! Nevertheless, this blinkered interpretation has enabled the likes of Baker and Oreskes to continue arguing with earlier versions of ‘intelligent design creationism’, very much like generals whose expertise lies in having fought the previous war. But luckily, an increasingly informed public is not so easily fooled by such epistemically rearguard actions.

References

Baker, Erik and Naomi Oreskes. “It’s No Game: Post-Truth and the Obligations of Science Studies.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 1-10.

Collins, Harry, Robert Evans, and Martin Weinel. “STS as Science or Politics?” Social Studies of Science 47, no. 4 (2017): 580-586.

Csiszar, Alex. “From the Bureaucratic Virtuoso to Scientific Misconduct: Robert K. Merton, Eugene Garfield, and Goal Displacement in Science.” Paper delivered to the annual meeting of the History of Science Society. Toronto: 9-12 November 2017.

Fuller, Steve. Social Epistemology. Bloomington IN: Indiana University Press, 1988.

Fuller, Steve. Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press, 2000.

Fuller, Steve. “Is STS All Talk and No Walk?” EASST Review 36 no. 1 (2017): https://easst.net/article/is-sts-all-talk-and-no-walk/.

Gellner, Ernest. Words and Things. London: Routledge, 1959.

Halevy, Elie. The Growth of Philosophic Radicalism. London: Faber and Faber, 1928.

Jackson, Ben. “At the Origins of Neo-Liberalism: The Free Economy and the Strong State, 1930-47.” Historical Journal 53, no. 1 (2010): 129-51.

Latour, Bruno. “Where are the Missing Masses? The Sociology of a Few Mundane Artefacts.” In Shaping Technology/Building Society, edited by Wiebe E. Bijker and John Law, 225-258. Cambridge MA: MIT Press, 1992.

Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30, no. 2 (2004): 225-248.

Phillips, Amanda. “Playing the Game in a Post-Truth Era.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 54-56.

Rothschild, Emma. Economic Sentiments. Cambridge MA: Harvard University Press, 2002.

Serres, Michel and Bruno Latour. Conversations on Science, Culture, and Time. Ann Arbor: University of Michigan Press, 1995.

Shapin, Steven and Simon Schaffer. Leviathan and the Air-Pump. Princeton: Princeton University Press, 1985.

Sismondo, Sergio. “Not a Very Slippery Slope: A Reply to Fuller.” EASST Review 36, no. 2 (2017): https://easst.net/article/not-a-very-slippery-slope-a-reply-to-fuller/.

[1] Surprisingly for people who claim to be historians of science, Baker and Oreskes appear to have fallen for the canard that only Creationists mention Darwin’s name when referring to contemporary evolutionary theory. In fact, it is common practice among historians and philosophers of science to invoke Darwin to refer to his specifically purposeless conception of evolution, which remains the default metaphysical position of contemporary biologists—albeit one maintained with increasing conceptual and empirical difficulty. Here it is worth observing that such leading lights of the Discovery Institute as Stephen Meyer and Paul Nelson were trained in the history and philosophy of science, as was I.

Author Information: Erik Baker and Naomi Oreskes, Harvard University, ebaker@g.harvard.edu, oreskes@fas.harvard.edu

Baker, Erik and Naomi Oreskes. “It’s No Game: Post-Truth and the Obligations of Science Studies.”[1] Social Epistemology Review and Reply Collective 6, no. 8 (2017): 1-10.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3FB


In late April 2017, the voice of a once-eminent institution of American democracy issued a public statement that embodied the evacuation of norms of truth and mutual understanding from American political discourse that since the 2016 presidential election has come to be known as “post-truth.” We aren’t talking about Donald Trump, whose habitual disregard of factual knowledge is troubling, to be sure, and whose advisor, Kellyanne Conway, made “alternative facts” part of the lexicon. Rather, we’re referring to the justification issued by New York Times opinion page editor James Bennet in defense of his decision to hire columnist Bret Stephens, a self-styled “climate agnostic” who has spread the talking points of the fossil fuel industry-funded campaign to cast doubt on the scientific consensus on climate change and the integrity of climate scientists.[2] The notion of truth made no appearance in Bennet’s statement. “If all of our columnists and all of our contributors and all of our editorials agreed all the time,” he explained, “we wouldn’t be promoting the free exchange of ideas, and we wouldn’t be serving our readers very well.”[3] The intellectual merits of Stephens’ position are evidently not the point. What counts is only the ability to grease the gears of the “free exchange of ideas.”

Bennet’s defense exemplifies the ideology of the “marketplace of ideas,” particularly in its recent, neoliberal incarnation. Since the 1970s, it has become commonplace throughout much of Europe and America to evince suspicion of attempts to build public consensus about facts or values, regardless of motivation, and to maintain that the role of public-sphere institutions—including newspapers and universities—is simply to place as many private opinions as possible into competition (“free exchange”) with one another.[4] If it is meaningful to talk about a “post-truth” moment, this ideological development is surely among its salient facets. After all, “truth” has not become any more or less problematic as an evaluative concept in private life, with its countless everyday claims about the world. Only public truth claims, especially those with potential to form a basis for collective action, now seem newly troublesome. To the extent that the rise of “post-truth” holds out lessons for science studies, it is not because the discipline has singlehandedly swung a wrecking ball through conventional epistemic wisdom (as some practitioners would perhaps like to imagine[5]), but because the broader rise of marketplace-of-ideas thinking has infected even some of its most subversive-minded work.

Science as Game

In this commentary, we address and critique a concept commonly employed in theoretical science studies that is relevant to the contemporary situation: science as game. While we appreciate both the theoretical and empirical considerations that gave rise to this framework, we suggest that characterizing science as a game is epistemically and politically problematic. Like the notion of a broader marketplace of ideas, it denies the public character of factual knowledge about a commonly accessible world. More importantly, it trivializes the significance of the attempt to obtain information about that world that is as right as possible at a given place and time, and can be used to address and redress significant social issues. The result is the worst of both worlds, permitting neither criticism of scientific claims with any real teeth, nor the possibility of collective action built on public knowledge.[6] To break this stalemate, science studies must become more comfortable using concepts like truth, facts, and reality outside of the scare quotes to which they are currently relegated, and accepting that the evaluation of knowledge claims must necessarily entail normative judgments.[7]

Philosophical talk of “games” leads directly to thoughts of Wittgenstein, and to the scholar most responsible for introducing Wittgenstein to science studies, David Bloor. While we have great respect for Bloor’s work, we suggest that it reveals uncomfortable similarities between the concept of science as a game in science studies and the neoliberal worldview. In his 1997 Wittgenstein, Rules and Institutions, Bloor argues for an analogy between his interpretation of the later Wittgenstein’s theory of meaning (central to Bloor’s influential writing on science) and the theory of prices of the neoliberal pioneer Ludwig von Mises. “The notion of the ‘real meaning’ of a concept or a sign deserves the same scorn as economists reserve for the outdated and unscientific notion of the ‘real’ or ‘just’ price of a commodity,” Bloor writes. “The only real price is the price paid in the course of real transactions as they proceed von Fall zu Fall. There is no standard outside these transactions.”[8] This analogy is the core of the marketplace of ideas concept, as it would later be developed by followers of von Mises, particularly Friedrich von Hayek. Just as there is no external standard of value in the world of commodities, there is no external standard of truth, such as conformity to an empirically accessible reality, in the world of science.[9] It is “scientism” (a term that von Hayek popularized) to invoke support for scientific knowledge claims outside of the transactions of the marketplace of ideas. Just as, for von Hayek and von Mises, the notion of economic justice falls in the face of the wisdom of the marketplace, so too does the notion of truth, at least as a regulative ideal to which any individual or finite group of people can sensibly aspire.

Contra Bloor (and von Hayek), we believe that it is imperative to think outside the sphere of market-like interactions in assessing both commodity prices and conclusions about scientific concepts. The prices of everything from healthcare and housing to food, education and even labor are hot-button political and social issues precisely because they affect people’s lives, sometimes dramatically, and because markets do not, in fact, always value these goods and services appropriately. Markets can be distorted and manipulated. People may lack the information necessary to judge value (something Adam Smith himself worried about). Prices may be inflated (or deflated) for reasons that bear little relation to what people value. And, most obviously in the case of environmental issues, the true cost of economic activity may not be reflected in market prices, because pollution, health costs, and other adverse effects are externalized. There is a reason why Nicholas Stern, former chief economist of the World Bank, has called climate change the “greatest market failure ever seen.”[10] Markets can and do fail. Prices do not always reflect value. Perhaps most important, markets refuse justice and fairness as categories of analysis. As Thomas Piketty has recently emphasized, capitalism typically leads to great inequalities of wealth, and this can only be critiqued by invoking normative standards beyond the values of the marketplace.[11]

External normative standards are indispensable in a world where the outcome of the interactions within scientific communities matter immensely to people outside those communities. This requirement functions both in the defense of science, where appropriate, and the critique of it.[12] The history of scientific racism and sexism, for example, speaks to the inappropriateness of public deference to all scientific claims, and the necessity of principled critique.[13] Yet, the indispensability of scientific knowledge to political action in contemporary societies also demands the development of standards that justify public acceptance of certain scientific claims as definitive enough to ground collective projects, such as the existence of a community-wide consensus or multiple independent lines of evidence for the same conclusion.[14] (Indeed, we regard the suggestion of standards for the organization of scientific communities by Helen Longino as one of the most important contributions of the field of social epistemology.[15])

Although we reject any general equivalency between markets and scientific communities, we agree they are indeed alike in one key way: they both need regulation. As Jürgen Habermas once wrote in critique of Wittgenstein, “language games only work because they presuppose idealizations that transcend any particular language game; as a necessary condition of possibly reaching understanding, these idealizations give rise to the perspective of an agreement that is open to criticism on the basis of validity claims.”[16] Collective problem-solving requires that these sorts of external standards be brought to bear. The example of climate change illustrates our disagreement with Bloor (and von Mises) on both counts in one fell swoop. Though neither of us is a working economist, we nonetheless maintain that it is rational—on higher-order grounds external to the social “game” of the particular disciplines—for governments to impose a price on carbon (i.e., a carbon tax or emissions trading system), in part because we accept that the natural science consensus on climate change accurately describes the physical world we inhabit, and the social scientific consensus that a carbon pricing system could help remedy the market failure that is climate change.[17]

Quietism and Critique

We don’t want to unfairly single out Bloor. The science-as-game view—and its uncomfortable resonances with marketplace-of-ideas ideology—crops up in the work of many prominent science studies scholars, even some who have quarreled publicly with Bloor and the strong programme. Bruno Latour, for example, one of Bloor’s sharpest critics, draws Hayekian conclusions from different methodological premises. While Bloor invokes social forces to explain the outcome of scientific games,[18] Latour rejects the very idea of social forces. Rather, he claims, as Margaret Thatcher famously insisted, that “there is no such thing as ‘the social’ or ‘a society.’”[19] But whereas Thatcher at least acknowledged the existence of family, for Latour there are only monadic actants, competing “agonistically” with each other until order spontaneously emerges from the chaos, just as in a game of Go (an illustration of which graces the cover of his seminal first book Laboratory Life, with Steve Woolgar).[20] Social structures, evaluative norms, even “publics,” in his more recent work, are all chimeras, devoid of real meaning until this networked process has come to fulfillment. If that view might seem to make collective action for wide-reaching social change difficult to conceive, Latour agrees: “Seen as networks, … the modern world … permits scarcely anything more than small extensions of practices, slight accelerations in the circulation of knowledge, a tiny extension of societies, miniscule increases in the number of actors, small modifications of old beliefs.”[21] Rather than planning political projects with any real vision or bite—or concluding that a particular status-quo might be problematic, much less illegitimate—one should simply be patient, play the never-ending networked game, and see what happens.[22] But a choice for quietism is a choice nonetheless—“we are condemned to act,” as Immanuel Wallerstein once put it—one that supports and sustains the status quo.[23] Moreover, a sense of humility or fallibility by no means requires us to exaggerate the inevitability of the status quo or yield to the power of inertia.[24]

Latour has at least come clean about his rejection of any aspiration to “critique.”[25] But others who haven’t thrown in the towel have still been led into a similar morass by their commitment to a marketlike or playful view of science. The problem is that, if normative judgments external to the game are illegitimate, analysts are barred from making any arguments for or against particular views or practices. Only criticism of their premature exclusion from the marketplace is permitted. This standpoint interprets Bloor’s famous call for symmetry not so much as a methodological principle in intellectual analysis, but as a demand for the abandonment of all forms of epistemic and normative judgment, leading to the bizarre sight of scholars championing a widely-criticized “scientific” or intellectual cause while coyly refusing to endorse its conclusions themselves. Thus we find Bruno Latour praising the anti-environmentalist Breakthrough Institute while maintaining that he “disagrees with them all the time;” Sheila Jasanoff defending the use of made-to-order “litigation science” in courtrooms on the grounds of a scrupulous “impartiality” that rejects scholarly assessments of intellectual integrity or empirical adequacy in favor of letting “the parties themselves do more of the work of demarcation;” and Steve Fuller defending creationists’ insistence that their views should be taught in American science classrooms while remaining ostensibly “neutral” on the scientific question at issue.[26]

Fuller’s defense of creationism, in particular, shows the way that calls for “impartiality” are often in reality de facto side-taking: Fuller takes rhetorical tropes directly out of the creationist playbook, including his tendentious and anachronistic labelling of modern evolutionary biologists as “Darwinists.” Moreover, despite his explicit endorsement of the game view of science, Fuller refuses to accept defeat for the intelligent design project, either within the putative game of science, or in the American court system, which has repeatedly found the teaching of creationism to be unconstitutional. Moreover, Fuller’s insistence that creationism somehow has still not received a “fair run for its money” reveals that even he cannot avoid importing external standards (in this case fairness) to evaluate scientific results! After all, who ever said that science was fair?

In short, science studies scholars’ ascetic refusal of standards of good and bad science in favor of emergent judgments immanent to the “games” they analyze has vitiated critical analysis in favor of a weakened proceduralism that has struggled to resist the recent advance of neoliberal and conservative causes in the sciences. It has led to a situation where creationism is defended as an equally legitimate form of science, where the claims of think tanks that promulgate disinformation are equated with the claims of academic scientific research institutions, and corporations that have knowingly suppressed information pertinent to public health and safety are viewed as morally and epistemically equivalent to the plaintiffs who are fighting them. As for Fuller, leaving the question of standards unexamined and/or implicit, and relying instead on the rhetoric of the “game,” enables him to avoid the challenge of defending a demonstrably indefensible position on its actual merits.

Where the Chips Fall

In diverse cases, key evaluative terms—legitimacy, disinformation, precedent, evidence, adequacy, reproducibility, natural (vis-à-vis supernatural), and yes, truth—have been so relativized and drained of meaning that it starts to seem like a category error even to attempt to refute equivalency claims. One might argue that this is alright: as scholars, we let the chips fall where they may. The problem, however, is that they do not fall evenly. The winner of this particular “game” is almost always status quo power: the conservative billionaires, fossil fuel companies, lead and benzene and tobacco manufacturers and others who have bankrolled think tanks and “litigation science” at the cost of biodiversity, human health and even human lives.[27] Scientists paid by the lead industry to defend their toxic product are not just innocently trying to have their day in court; they are trying to evade legal responsibility for the damage done by their products. The fossil fuel industry is not trying to advance our understanding of the climate system; they are trying to block political action that would decrease societal dependence on their products. But there is no way to make—much less defend—such claims without a robust concept of evidence.

Conversely, the communities, already victimized by decades of poverty and racial discrimination, who depend on reliable science in their fight for their children’s safety are not unjustly trying to short-circuit a process of “demarcation” better left to the adversarial court system.[28] It is a sad irony that STS, which often sees itself as championing the subaltern, has now in many cases become the intellectual defender of those who would crush the aspirations of ordinary people.

Abandoning the game view of science won’t require science studies scholars to reinvent the wheel, much less re-embrace Comtean triumphalism. On the contrary, there are a wide variety of perspectives from the history of epistemology, philosophy of science, and feminist, anti-racist, and anti-colonialist theory that permit critique that can be both epistemic and moral. One obvious source, championed by intellectual historians such as James Kloppenberg and philosophers such as Hilary Putnam and Jürgen Habermas, is the early American pragmatism of John Dewey and William James, a politically constructive alternative to both naïve foundationalism and the textualist rejection of the concept of truth found in the work of more recent “neo-pragmatists” like Richard Rorty.[29] Nancy Cartwright, Thomas Uebel, and John O’Neill have similarly reminded us of the intellectual and political potential in the (widely misinterpreted, when not ignored) “left Vienna Circle” philosophy of Otto Neurath.[30]

In a slightly different vein, Charles Mills, inspired in part by the social science of W.E.B. Du Bois, has insisted on the importance of a “veritistic” epistemological stance in characterizing the ignorance produced by white supremacy.[31] Alison Wylie has emphasized the extent to which many feminist critics of science “are by no means prepared to concede that their accounts are just equal but different alternatives to those they challenge,” but in fact often claim that “research informed by a feminist angle of vision … is simply better in quite conventional terms.”[32] Steven Epstein’s work on AIDS activism demonstrates that social movements issuing dramatic challenges to biomedical and scientific establishments can make good use of unabashed claims to genuine knowledge and “lay” expertise. Epstein’s work also serves as a reminder that moral neutrality is not the only, much less the best, route to rigorous scholarship.[33] Science studies scholars could also benefit from looking outside their immediate disciplinary surroundings to debates about poststructuralism in the analysis of (post)colonialism initiated by scholars like Benita Parry and Masao Miyoshi, as well as the emerging literature in philosophy and sociology about the relationship of the work of Michel Foucault to neoliberalism.[34]

For our own part, we have been critically exploring the implications of the institutional and financial organization of science during the Cold War and the recent neoliberal intensification of privatization in American society.[35] We think that this work suggests a further descriptive inadequacy in the science-as-game view, in addition to the normative inadequacies we have already described. In particular, it drives home the extent to which the structure of science is not constant. From the longitudinal perspective available to history, as opposed to the sociological or ethnographic snapshot, it is possible to resolve the powerful societal forces—government, industry, and so on—driving changes in the way science operates, and to understand the way those scientific changes relate to broader political-economic imperatives and transformations. Rather than throwing up one’s hands and insisting that incommensurable particularity is all there is, science studies scholars might instead take a theoretical position that will allow us to characterize and respond to the dramatic transformations of academic work that are happening right now, and from which the humanities are by no means exempt.[36]

Academics must not treat themselves as isolated from broader patterns of social change, or worse, deny that change is a meaningful concept outside of the domain of microcosmic fluctuations in social arrangements. Powerful reactionary forces can reshape society and science (and reshape society through science) in accordance with their values; progressive movements in and outside of science have the potential to do the same. We are concerned that the “game” view of science traps us instead inside a Parmenidean field of homogeneous particularity, an endless succession of games that may be full of enough sound and fury to interest scholars but still signify nothing overall.

Far from rendering science studies Whiggish or simply otiose, we believe that a willingness to discriminate, outside of scare quotes, between knowledge and ignorance or truth and falsity is vital for a scholarly agenda that respects one of the insights that scholars like Jasanoff have repeatedly and compellingly championed: in contemporary democratic polities, science matters. In a world where physicists state that genetic inferiority is the cause of poverty among black Americans, where lead paint manufacturers insist that their product does no harm to infants and children, and where actresses encourage parents not to vaccinate their children against infectious diseases, an inability to discriminate between information and disinformation—between sense and nonsense (as the logical positivists so memorably put it)—is not simply an intellectual failure. It is a political and moral failure as well.

The Brundtland Commission famously defined “sustainable development” as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” Like the approach we are advocating here, this definition treats the empirical and the normative as enfolded in one another. It sees them not as constructions that emerge stochastically in the fullness of time, but as questions that urgently demand robust answers in the present. One reason science matters so much in the present moment is its role in determining which activities are sustainable, and which are not. But if scientists are to make such judgments, then we, as science studies scholars, must be able to judge the scientists—positively as well as critically. Lives are at stake. We are not here merely to stand on the sidelines insisting that all we can do is ensure that all voices are heard, no matter how silly, stupid, or nefarious.

[1] We would like to thank Robert Proctor, Mott Greene, and Karim Bschir for reading drafts and providing helpful feedback on this piece.

[2] For an analysis of Stephens’ column, see Robert Proctor and Steve Lyons, “Soft Climate Denial at The New York Times,” Scientific American, May 8, 2017; for the history of the campaign to cast doubt on climate change science, see Naomi Oreskes and Erik M. Conway, Merchants of Doubt (Bloomsbury Press, 2010); for information on the funding of this campaign, see in particular Robert J. Brulle, “Institutionalizing delay: foundation funding and the creation of U.S. climate change counter-movement organizations,” Climatic Change 122 (4), 681–694, 2013.

[3] Accessible at https://twitter.com/ErikWemple/status/858737313601507329.

[4] For the recency of the concept, see Stanley Ingber, “The Marketplace of Ideas: A Legitimizing Myth,” Duke Law Journal, February 1984. The significance of the epistemological valorization of the marketplace of ideas to the broader neoliberal project has been increasingly well understood by historians of neoliberalism; it is an emphasis, for instance, of the approach taken by the contributors to Philip Mirowski and Dieter Plehwe, eds., The Road from Mont Pèlerin (Harvard, 2009), especially Mirowski’s “Postface.”

[5] Bruno Latour, “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern,” Critical Inquiry vol. 30 (Winter 2004).

[6] See for instance John Ziman, Public Knowledge: An Essay Concerning the Social Dimension of Science (Cambridge University Press, 1968); as well as the many more recent perspectives we hold up below as exemplary of alternative approaches.

[7] Naomi Oreskes and Erik M. Conway, “Perspectives on global warming: A Book Symposium with Steven Yearley, David Mercer, and Andy Pitman,” Metascience vol. 21, pp. 531-559, 2012.

[8] David Bloor, Wittgenstein, Rules and Institutions (Routledge, 1997), pp. 76-77.

[9] As suggested by Helen Longino in The Fate of Knowledge (Princeton University Press, 2002) as an alternative to the more vexed notion of “correspondence,” fraught with metaphysical difficulties that Longino hopes to skirt. In Austrian economics, this rejection of the search for empirical, factual knowledge initially took the form, in von Mises’ thought, of the ostensibly purely deductive reasoning he called “praxeology,” which was supposed to analytically uncover the immanent principles governing the economic game. Von Hayek went further, arguing that economics at its most rigorous merely theoretically explicates the limits of positive knowledge about empirical social realities. See, for instance, Friedrich von Hayek, “On Coping with Ignorance,” Ludwig von Mises Lecture, 1978.

[10] Nicholas H. Stern, The Economics of Climate Change: The Stern Review (Cambridge University Press, 2007).

[11] Thomas Piketty, Capital in the Twenty-First Century (Harvard/Belknap, 2013). In addition to critiquing market outcomes, philosophers have also invoked concepts of justice and fairness to challenge the extension of markets to new domains; see for example Michael Sandel, What Money Can’t Buy: The Moral Limits of Markets (Farrar, Straus, and Giroux, 2013) and Harvey Cox, The Market as God (Harvard University Press, 2016). This is also a theme in the Papal Encyclical on Climate Change and Inequality, Laudato Si. https://laudatosi.com/watch

[12] For more on this point, see Naomi Oreskes, “Systematicity is Necessary but Not Sufficient: On the Problem of Facsimile Science,” Synthese, in press.

[13] See among others Helen Longino, Science as Social Knowledge (Princeton University Press, 1990); Londa Schiebinger, Has Feminism Changed Science? (Harvard University Press, 1999); Sandra Harding, Science and Social Inequality: Feminist and Postcolonial Issues (University of Illinois Press, 2006); Donna Haraway, Primate Visions: Gender, Race, and Nature in the World of Modern Science (Routledge, 1989); Evelynn Hammonds and Rebecca Herzig, The Nature of Difference: Sciences of Race in the United States from Jefferson to Genomics (MIT Press, 2008).

[14] Naomi Oreskes, “Trust in Science?” Tanner Lecture on Human Values, Princeton University, November 30, 2016; Naomi Oreskes, “The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong?” in Joseph F. C. DiMento and Pamela Doughman, eds., Climate Change: What It Means for Us, Our Children, and Our Grandchildren (MIT Press, 2007), pp. 65-99.

[15] Helen Longino, Science as Social Knowledge (Princeton University Press, 1990), and The Fate of Knowledge (Princeton University Press, 2002).

[16] Jürgen Habermas, The Philosophical Discourse of Modernity (MIT Press, 1984), p. 199.

[17] See, for instance, Naomi Oreskes, “Without government, the market will not solve climate change: Why a meaningful carbon tax may be our only hope,” Scientific American (December 22, 2015); and Naomi Oreskes and Jeremy Jones, “Want to protect the climate? Time for carbon pricing,” Boston Globe (May 3, 2017).

[18] Along with a purportedly empirical component that, as Latour has compellingly argued, is “canceled out” of the final analysis because it is common to both parties in a dispute. See Bruno Latour, “For David Bloor… and Beyond: A Reply to David Bloor’s ‘Anti-Latour,’” Studies in History and Philosophy of Science, vol. 30 (1), pp. 113-129, March 1999.

[19] Bruno Latour, Reassembling the Social: An Introduction to Actor-Network Theory (Oxford University Press, 2007), p. 5; this theme is an emphasis of his entire oeuvre. On Thatcher, see http://briandeer.com/social/thatcher-society.htm and James Meek, Private Island (Verso, 2014).

[20] Bruno Latour and Steve Woolgar, Laboratory Life: The Construction of Scientific Facts (Sage, 1979; 2nd ed. Princeton University Press, 1986); Bruno Latour, Science in Action (Harvard University Press, 1987). In Laboratory Life this emergence of order from chaos is explicitly analyzed as the outcome of a kind of free market in scientific “credit.” Spontaneous order is one of the foundational themes of Hayekian thought, and the game of Go is an often-employed analogy there as well. See, for instance, Peter Boettke, “The Theory of Spontaneous Order and Cultural Evolution in the Social Theory of F.A. Hayek,” Cultural Dynamics, vol. 3 (1), pp. 61-83, 1990; Gustav von Hertzen, The Spirit of the Game (CE Fritzes AB, 1993), especially chapter 4.

[21] Bruno Latour, We Have Never Been Modern (Harvard University Press, 1993), pp. 47-48; for his revision of the notion of the public, see for example Latour’s Politics of Nature (Harvard University Press, 2004). For a more in-depth discussion of Latour vis-à-vis neoliberalism, see Philip Mirowski, “What Is Science Critique? Part 1: Lessig, Latour,” keynote address to Workshop on the Changing Political Economy of Research and Innovation, UCSD, March 2015.

[22] Our criticism here is not merely hypothetical. Latour’s long-time collaborator Michel Callon and the legal scholar David S. Caudill, for example, have both used Latourian actor-network theory to argue that critics of the privatization of science such as Philip Mirowski are mistaken and that analysts should embrace, or at least concede the inevitability of, “hybrid” science that responds strongly to commercial interests. See Michel Callon, “From Science as an Economic Activity to Socioeconomics of Scientific Research,” in Philip Mirowski and Esther-Mirjam Sent, eds., Science Bought and Sold (University of Chicago Press, 2002); and David S. Caudill, “Law, Science, and the Economy: One Domain?” UC Irvine Law Review vol. 5, pp. 393-412, 2015.

[23] Immanuel Wallerstein, The Essential Wallerstein (The New Press, 2000), p. 432.

[24] Naomi Oreskes, “On the ‘reality’ and reality of anthropogenic climate change,” Climatic Change vol. 119, pp. 559-560, 2013, especially p. 560 n. 4. Many philosophers have made this point. Hilary Putnam, for example, has argued that fallibilism actually demands a critical attitude, one that seeks to modify beliefs for which there is sufficient evidence to believe that they are mistaken, while also remaining willing to make genuine knowledge claims on the basis of admittedly less-than-perfect evidence. See his Realism with a Human Face (Harvard University Press, 1990), and Pragmatism: An Open Question (Oxford, 1995) in particular.

[25] Bruno Latour, “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern,” Critical Inquiry vol. 30 (Winter 2004).

[26] “Bruno Latour: Modernity is a Politically Dangerous Goal,” November 2014 interview with Latour by Patricia Junge, Colombina Schaeffer and Leonardo Valenzuela of Verdeseo; Zoë Corbyn, “Steve Fuller: Designer trouble,” The Guardian (January 31, 2006); Sheila Jasanoff, “Representation and Re-Presentation in Litigation Science,” Environmental Health Perspectives 116(1), pp. 123–129, January 2008. Fuller also has a professional relationship with the Breakthrough Institute, but the Institute seems somewhat fonder, in their publicity materials, of their connection with Latour.

[27] Even creationism, it’s worth remembering, is a big-money movement. The Discovery Institute, perhaps the most prominent “intelligent design” advocacy organization, is bankrolled largely by wealthy Republican donors, and was co-founded by notorious Reaganite supply-side economics guru and telecom deregulation champion George Gilder. See Jodi Wilgoren, “Politicized Scholars Put Evolution on the Defensive,” New York Times, August 21, 2005. Similarly, so-called grassroots anti-tax organizations often had links to the tobacco industry. See http://www.sourcewatch.org/index.php/Americans_for_Tax_Reform_and_Big_Tobacco. The corporate exploitation of ambiguity about the contours of disinformation can, of course, also take more anodyne forms, as in the manipulative use of phrases like “natural flavoring” on food packaging. We thank Mott Greene for this example.

[28] David Rosner and Gerald Markowitz, Lead Wars: The Politics of Science and the Fate of America’s Children (University of California Press, 2013). See also Gerald Markowitz and David Rosner, Deceit and Denial: The Deadly Politics of Industrial Pollution (University of California Press, 2nd edition 2013); and Stanton Glantz, ed., The Cigarette Papers (University of California Press, 1998).

[29] See James Kloppenberg, “Pragmatism: An Old Name for Some New Ways of Thinking?,” The Journal of American History, Vol. 83 (1), pp. 100-138, June 1996, which argues that Rorty misrepresents in many ways the core insights of the early pragmatists. See also Jürgen Habermas, Theory of Communicative Action (Beacon Press, vol. 1 1984, vol. 2 1987); Hilary Putnam, Reason, Truth, and History (Cambridge University Press, 1981); see also William Rehg’s development of Habermas’s ideas on science in Cogent Science in Context: The Science Wars, Argumentation Theory, and Habermas (MIT Press, 2009).

[30] Nancy Cartwright, Jordi Cat, Lola Fleck, and Thomas Uebel, Otto Neurath: Philosophy between Science and Politics (Cambridge University Press, 1996); Thomas Uebel, “Political philosophy of science in logical empiricism: the left Vienna Circle,” Studies in History and Philosophy of Science, vol. 36, pp. 754-773, 2005; John O’Neill, “Unified science as political philosophy: positivism, pluralism and liberalism,” Studies in History and Philosophy of Science, vol. 34, pp. 575-596, 2003.

[31] Charles Mills, “White Ignorance,” in Robert Proctor and Londa Schiebinger, eds., Agnotology: The Making and Unmaking of Ignorance (Stanford University Press, 2008); see also his recent Black Rights/White Wrongs (Oxford University Press, 2017).

[32] Alison Wylie, Thinking from Things: Essays in the Philosophy of Archaeology (University of California Press, 2002), p. 190. Helen Longino (Science as Social Knowledge, 1990) and Sarah Richardson (Sex Itself, University of Chicago Press, 2013) have made similar arguments about research in endocrinology and genetics.

[33] Steven Epstein, Impure Science (University of California Press, 1996); see especially pp. 13-14.

[34] See for instance Benita Parry, Postcolonial Studies: A Materialist Critique (Routledge, 2004); Masao Miyoshi, “Ivory Tower in Escrow,” boundary 2, vol. 27 (1), pp. 7-50, Spring 2000. On Foucault, see recently Daniel Zamora and Michael C. Behrent, eds., Foucault and Neoliberalism (Polity Press, 2016); but note also the seeds of this critique in earlier works such as Jürgen Habermas, The Philosophical Discourse of Modernity (MIT Press, 1984) and Nancy Fraser, “Michel Foucault: A ‘Young Conservative’?,” Ethics vol. 96 (1), pp. 165-184, 1985, and “Foucault on Modern Power: Empirical Insights and Normative Confusions,” Praxis International, vol. 3, pp. 272-287, 1981.

[35] Naomi Oreskes and John Krige, eds., Science and Technology in the Global Cold War (MIT Press, 2015); Naomi Oreskes, Science on a Mission: American Oceanography in the Cold War (University of Chicago Press, forthcoming); Erik Baker, “The Ultimate Think Tank: Money and Science at the Santa Fe Institute,” manuscript in preparation.

[36] See, for instance, Philip Mirowski, Science-Mart (Harvard University Press, 2010); Wendy Brown, Undoing the Demos: Neoliberalism’s Stealth Revolution (MIT Press, 2015); Henry Giroux, Neoliberalism’s War on Higher Education (Haymarket Books, 2014); Sophia McClennen, “Neoliberalism and the Crisis of Intellectual Engagement,” Works and Days, vols. 26-27, 2008-2009.

Author Information: Amanda Phillips, Virginia Tech, akp@vt.edu

Phillips, Amanda. “Playing the Game in a Post-Truth Era.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 54-56.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3F9


Image credit: Keith Allison, via flickr

In 2008 Major League Baseball (MLB) became the last of the four major North American professional sports leagues to introduce the use of video instant replay in reviewing close or controversial calls. Soon after, in 2014, MLB permitted team managers to challenge calls made by umpires at least once during game play. To anyone even marginally familiar with the ideology of baseball in American life, the relatively late implementation of replay technology should come as no surprise. The traditions of the sport have proven resilient against the pressures of time. Baseball’s glacial pace, ill-fitting uniforms, and tired ballpark traditions harken back to a time when America’s greatness was, perhaps, clearer. I am neither the first, nor will I be the last, to state that baseball represents an idealized national conservatism—fetishized through pining nostalgia and a cult-like devotion to individual abilities and judgment. It is a team sport for those averse to the compromises of glory inherent within the act of teamwork.

The same proves true for the judgment of umpires. Instant replay usurped their individual legitimacy as knowers and interpreters of play on the diamond. The truth of play changed with the introduction of instant replay review. This goes beyond Marshall McLuhan’s reflection on the impact of instant replay on (American) football. McLuhan stated in an interview that audiences “… want to see the nature of the play. And so they’ve had to open up the play … to enable the audience to participate more fully in the process of football play.” [1]

By 2008, audiences knew how to participate in sporting events, how to adjust their voices to yell about the umpirical incompetence unfolding on screen. Instead, the introduction of review changed how truth operates within baseball. The expertise of umpires now faces the ever-present threat of challenge from both mechanical and managerial sources. Does this change, the displacement of trust in umpires, mean that baseball, like the rest of American society, has entered a regime of post-truth?

Political Post-Truth

The realities and responses to the current era of political post-truth hang heavy in the hearts of many. Steve Fuller (2017) in ‘Is STS all Talk and No Walk?’ concludes that in order to challenge the ‘deplorables’ who tout our epistemology but not our politics, we need to conceptualize our work as more of a game, a sport to be played. This argument comes out of a larger field-based conversation between Fuller and Sergio Sismondo (2017) on how STS can best respond to the post-truth world it (apparently) created.

On one hand, Sismondo looks to a future where STS researchers shore up scientific and technical institutions, or at the very least find ways to collectively defend areas once guarded by the now-pariah ‘expert’.[2] On the other hand, Fuller argues that the field needs to continue its commitment to epistemic democratization—regardless of how this pursuit might upset what we understand as the social order of things. Fuller’s desire to think about scholarship as a sport serves as a call to action to recognize that our playbook of challenging truth-claims might be stolen, but that does not mean that not-yet-imagined strategies cannot win the game.

Our options thus appear to be that we can retreat and reify, or innovate and outwit. While I personally find Fuller’s suggestion the more intriguing of the two, I have concerns about bringing the win-lose binary of sport to the forefront of disciplinary and research priorities. Although Fuller idealizes the so-called free space of game play, rarely do teams start on the even ground to which he alludes. Take, for example, the ‘mortar kick’.[3]

In 2016 the National Football League (NFL) instituted a rule change that influenced where a ball would be placed in the event of a touchback after a kickoff.[4] The change moved the ball up five yards to the 25-yard line to encourage teams to take the touchback rather than receiving the ball and trying to run to favorable field position.

This rule was created with the explicit purpose of making kickoffs safer by incentivizing a team not to jockey for field position and risk player injury. This intent was soon defeated by the New England Patriots, who started utilizing mortar kicks during kickoffs. These kicks arc extremely high in the air and aim to land around the 5-yard line. The kick does two things. It forces the receiving team to catch the ball and run for field position, and it gives the defending team additional time to get downfield to thwart the attempted run. This play, while legal, defeats the specific intentions of the rule change. The Patriots innovated game play around a barrier, but in doing so privileged strategy over safety. Such strategies are born of a crafty and vulpine spirit. Does STS want to emulate Bill Belichick and the controversy-embroiled Patriots?[5]

The Cost of Winning

The mortar kick brings to light a fault with the metaphor Fuller wishes to embrace. Despite the highly structured and rule-driven orientation of sports (and science, for that matter), the introduction of the mortar kick suggests that the drive to win comes at a cost—a cost that sacrifices values such as safety and integrity. Those of us working in STS are no strangers to how values get incorporated or discarded within scientific and technical processes. But it seems odd from a research perspective that we might begin to orient ourselves towards knowingly emulating the institutional processes we analyze, criticize, and seek to understand just to come out a temporary victor on the contemporary social battlefield. There is no doubt that the current post-truth landscape poses problems for both progressive political values and epistemic claims. But I am hesitant to follow Fuller’s metaphor to its terminus if we do not have a clear sense of which team is ours.

At the risk of invoking the equivalent of a broken record in STS, what stood out to me from Latour’s 2004 article was not the waving of a white flag, but rather the suggestion of developing a critique “with multiplication, not subtraction”. While this call does not seem to have been widely embraced by our field, I think there is room to experiment. I can envision a future STS that embraces a collective multiplicity of critical thought. Let us not concern ourselves with winning, but rather with a gradual overwhelming. If “normative categories of science … are moveable feasts the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties” (Fuller 2017), let us make explicitly clear what movability does and how it comes to be. Let us conceptualize labor and research more collectively so that we can more thoroughly examine the many and conflicting claims to truth which we face.

If we must play a game, let us not emulate the model that academia has placed before us. This turns out to be a game that looks a whole lot like baseball—set in its ways, individualistic, and oftentimes boring (but better with a beer in hand). Change is more disruptive in a sport reliant on tradition. But, as shown with the introduction of video review, the post-truth world makes it easier to question and challenge authority. This change can not only give rise to the deplorable but also, perhaps, the multiple. If the only way for STS to walk the walk is to play the game, we will have to conceptualize our team—and more importantly how we work together—in more than just idioms.

References

Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective (2016): http://wp.me/p1Bfg0-3nx.

Fuller, Steve. “Is STS all Talk and no Walk?” EASST Review 36, no. 1 (2017): https://easst.net/article/is-sts-all-talk-and-no-walk/.

Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30, no. 2 (2004): 225–248.

Sismondo, Sergio. “Post-Truth?” Social Studies of Science 47, no. 1 (2017): 3-6.

[1] “Marshall McLuhan on Football 2.0” https://www.youtube.com/watch?time_continue=95&v=3A_O7M3PQ-o

[2] His mention of “physicians and patients” who would need to step up in the event of FDA deregulation seems to overlook the many examples of institutions, scientific and otherwise, failing those they intend to serve. Studies of citizen science and activism show that it did not take the Trump administration to cause individuals to step into the role of self-advocate in the face of regulatory incompetence.

[3] http://www.sharpfootballanalysis.com/blog/2016/why-mortar-kicks-can-win-games-in-2016.

[4] A touchback occurs when a kicker from the defending team kicks the ball on or over the receiving team’s goal line. In the event of a touchback, the ball is placed at a specified point on the field.

[5] Sorry Boston.

Author Information: Lyudmila Markova, Russian Academy of Science, markova.lyudmila2013@yandex.ru

Markova, Lyudmila. “Transhumanism in the Context of Social Epistemology.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 50-53.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3EQ


Image credit: Ingmar Zahorsky, via flickr

Robert Frodeman (2015) and Steve Fuller (2017) discuss the problem of transhumanism in the context of history. This approach to transhumanism—a subject now studied actively by many specialists—clearly shows us the focus of current investigations. A virtual world, created by humans and possessing intelligence, demands a special kind of communication. But we are not the only ones changing. Humans programmed robots to perform calculations at a speed we cannot match.

Still, we remain certain that robots will never be able to feel a sense of joy or disappointment, of love or hatred. At the same time, it is difficult to deny that robots already know how to express emotions. Often, we prefer to deal with robots—they are generally polite and answer our questions quickly and clearly.

Even though we cannot perform certain operations at the speeds robots can, we still use the results of their work. But the senses remain inaccessible to robots. Yet they have learned many of the ways humans make their feelings known to others. A common ground is forming between humans and robots. People use techniques for communicating with one another and with artificial intellects based on the laws of the virtual world, and artificial intellects use signs of human feelings—which are only signs and nothing more.

As we debate transhumanism as something that either awaits us, or does not, in the future, we fail to consider the serious transformations of our current lives. To some extent, we are already trans-humanoids with artificial body parts, with the ability to change our genome, with cyber technology, with digital communication and so on. We do not consider these changes as radically transforming our future selves and our current selves. We do not notice to what extent we already differ from our children and, consequently, from the next generation.

Until recently, we relied primarily on our knowledge of the laws of the material world. We studied nature and our artificial world was material. Now, we study our thinking and our artificial world is not simply material—it can think and it can understand us. Its materiality, then, is of a different type. The situation in society is now quite different and, in order to live in it, we have to change ourselves. Perhaps we have not noticed that the process of our becoming transhumanoids has already begun?

We have a philosophical basis for the discussions about transhumanism. It is social epistemology, where some borders disappear and others appear. Steve Fuller frequently refers to the topic of transhumanism in the context of social epistemology.

“Sociality” in Social Epistemology: The Turn in Thinking

As we speak of both the technization of humans and humanization of machines, the border between humans and technology becomes less visible. In social epistemology, the sense of “social” is important for understanding this turn in thinking during the last century. You can find without difficulty (long before the emergence of social epistemology) the adoption of phrases such as the “social” history of science, the “social” organization of scientific institutes, the “social” character of the scientific (and not only scientific) knowledge, “social” character of the work of a scientist and so on. People created science and everything associated with it is connected to our world in one way or another.

Nobody denies the existence of these relations. The problem resides in their interpretation. Even if you want to see the advantage of your position in striving to eliminate traces of the scientists’ work and conditions under which the results were obtained, you have to know what you want to eliminate and why. In social epistemology, on the contrary, sociality remains in scientific knowledge. Still, serious problems follow as a result.

It is important to understand that anything we study acquires human features because we introduce them into it. We comprehend nature (in the broadest sense of this word) not as something opposed, or even hostile, to people. We deal with a thinking world. For example, we want to have a house that protects us from rain and cold. It is enough to know the physical characteristics of materials in order to build such a house. But now we can have a “smart” house. This house alerts you when you return home in the evening that there is no kefir in the fridge and that you must buy food for the cat. You like your new car, but you want to have a navigator. We now have driverless cars. And drones are widely used for military and economic purposes. I have listed just a few cases in which robots help us in our daily lives. We are built into this world and we are accustomed to it.

Still, electronics can complicate and hinder our lives. For instance, you drive the most recent Mercedes model. Your car automatically brakes if you follow too closely and your steering wheel turns in an unexpected way. At the same time, if you drive an old car without any electronic equipment, you feel in control of the situation. The behavior of the machine depends entirely on your actions.

Classical and Non-Classical Logic

Thinking in the context of social epistemology is plugged into empirical reality. This is usually taken as an abandonment of logic. But this is not so. The fact is that classical logic has exhausted itself. A new logic, radically different from the classical one, is just emerging. What is the difference?

David Hume, one of the founders of classical philosophy, wrote about the British and the French. They are different peoples, of course, but philosophically they have a common feature—they are humans. Take another example. You are talking to the same person in different situations. In the office, this person is not the same as they are at home or in the street. As a rule, it is not important to you that you deal every time with the same person; this is obvious without any justification. The person is interesting from the point of view of their characteristics as a member of a work team or as a family member. Every person manifests themselves in a specific way in a concrete situation. And this fact is taken into account in the new type of logic. This logic is rooted in specific frameworks.

We can see the attention to specific sociality in the formation of social epistemology. It is necessary to understand, in Fuller’s opinion, why scientists obtain different results when they generally have the same set of books, the same knowledge, and the same conditions of work. Fuller pays attention to what surrounds the scientist here and now, and not in the past. The history and process of scientific knowledge development are understood by us with our own logical means. As a result, they inevitably become part of our present.

The notion of space becomes more important than the notion of time. Gilles Deleuze wrote about this in his logic. Robert Frodeman identifies his approach as “field philosophy”. This name identifies features of our current thinking. The Russian philosopher Merab Mamardashvili thought that in order to understand emerging scientific knowledge it is necessary to consider it outside the “arrow of time”.

The former connection between past and future, in which a new result is deduced from previous knowledge, is no longer suitable. In the last century, dialog became more widespread. Its logical justification in science was given by the scientific revolution in physics at the beginning of the 20th century. For us, it is important to notice that quantum mechanics replaced classical physics on the front lines of the development of science. But classical physics was not destroyed, and its proponents continue to work and give society useful results. This feature of the non-classical scientific logic is noteworthy: it does not declare its predecessor unscientific, or unable to decide the corresponding problems. Moreover, this new logic needs its predecessor and dialogical communication with it. In the course of this dialog both sides change, trying to improve their positions, just as when two people talk.

That is why I do not agree with Justin Cruickshank (2015) when he writes that Karl Popper’s idea of fallibilism is connected in some way with dialog. For Popper, the main aim is to criticize and, in the end, to falsify a theory in order to replace it with a new theory. As a result, dialog becomes impossible, because for it we need to have at least two interlocutors or theories. For Popper, the ideal situation is when we deal with one person, a winner. In Russia, the topic of dialog was studied by Mikhail Bakhtin and Vladimir Bibler.

Context

Dialog is one of the forms of communication between different events in history. If we consider, as an ideal, all studied events from the point of view of their common characteristics, we then deal with one person and we have nobody for dialog. The differing conditions of a scientist’s, or any other person’s, work are not taken into consideration. We have classical thinking—one subject, one object, one logic.

As I understand Ilya Kasavin (2017), he does not investigate the construction of the Kara-Kum Canal as an inference from Peter the Great’s plan. A connection exists between these two projects. Yet each of them is considered as unique, as having its own context. So it is not correct to ask Kasavin: “What traces and records were left of the project imagined by Peter the Great, how were they interpreted and reinterpreted over the course of hundreds of years, and how, if at all, did they influence Stalin’s project?” (Bakhurst and Sismondo, 2017). The “arrow of time” as a coherent chain of events from Peter the Great to Stalin exists. But within the frame of non-classical thinking it is not important to study this chain first, and in all detail, in order to understand the situation with the construction of the Kara-Kum Canal.

The same may be said about the emergence of transhumanism as a scientific area. It is created within a context formed from the outside world by choosing those elements that can help us comprehend some problem. One of the most important features of the context is the presence of both ideal elements (past scientific knowledge, for instance) and material elements of the world existing around us. Context, as a whole, is the beginning of a new result when we think, and it is not surprising that we have a notion of transhumanism containing both the ability to think and the material carrier of thought. Robotics corresponds to this understanding of transhumanism, and that helps us to see the border between human and robot as less defined.

Conclusion

We see current signs of human transformation that seemed impossible just a few decades ago. Even those who are against such changes do not object to them when they seek medical help or when they have the opportunity to facilitate their everyday lives. In many cases, then, radical changes proceed regardless of our will, and we do not protest against them.

We are creating our artificial world on the basis of the knowledge not only of the material world, but also of our thinking. We put this knowledge into the surrounding world in the process of investigation, and we cannot imagine it without the ability to think. The world is becoming able to think, to understand us, to answer our questions.

As our thinking becomes different, we notice its turn. It is directed not at nature, at the world around us, but at humans. At the same time, nature acquires certain human characteristics. This turn is the basis of many serious problems connected initially with notions of the truth and objectivity of scientific knowledge. But these problems are not the topic of this comment.

References  

Bakhurst, David and Sergio Sismondo. “Commentary on Ilya Kasavin’s ‘Towards a Social Philosophy of Science: Russian Prospects’.” Social Epistemology Review and Reply Collective 6, no. 4 (2017): 20-23.

Cruickshank, Justin. “Anti-Authority: Comparing Popper and Rorty on the Dialogic Development of Beliefs and Practices.” Social Epistemology 29, no. 1 (2015): 73-94.

Frodeman, Robert. “Anti-Fuller: Transhumanism and the Proactionary Imperative.” Social Epistemology Review and Reply Collective 4, no. 4 (2015): 38-43.

Fuller, Steve. “Twelve Questions on Transhumanism’s Place in the Western Philosophical Tradition.” Social Epistemology Review and Reply Collective, 19 April 2017. http://wp.me/p1Bfg0-3yl.

Kasavin, Ilya. “Towards a Social Philosophy of Science: Russian Prospects.” Social Epistemology 31, no. 1 (2017): 1-15.

Author Information: Ben Ross, University of North Texas, benjamin.ross@my.unt.edu

Ross, Ben. “Between Poison and Remedy: Transhumanism as Pharmakon.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 23-26.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3zU


Image credit: Jennifer Boyer, via flickr

As a Millennial, I have the luxury of being able to ask in all seriousness, “Will I be the first generation safe from death by old age?” While the prospects of answering in the affirmative may be dim, they are not preposterous. The idea that such a question can even be asked with sincerity, however, testifies to transhumanism’s reach into the cultural imagination.

But what is transhumanism? Until now, we have failed to answer in the appropriate way, remaining content to describe its possible technological manifestations or trace its historical development. Therefore, I would like to propose an ontology of transhumanism. When philosophers speak of ontologies, they are asking a basic question about the being of a thing—what is its essence? I suggest that transhumanism is best understood as a pharmakon.

Transhumanism as a Pharmakon

Derrida points out in his essay “Plato’s Pharmacy” that while pharmakon can be translated as “drug,” it means both “remedy” and “poison.” It is an ambiguous in-between, containing opposite definitions that can both be true depending on the context. As Michael Rinella notes, hemlock, most famous for being the poison that killed Socrates, when taken in smaller doses induces “delirium and excitement on the one hand,” yet it can be “a powerful sedative on the other” (160). Rinella goes on to say that there are more than two meanings of the term. While the word was used to denote a drug, Plato “used pharmakon to mean a host of other things, such as pictorial color, painter’s pigment, cosmetic application, perfume, magical talisman, and recreational intoxicant.” Nevertheless, Rinella makes the crucial remark that “One pharmakon might be prescribed as a remedy for another pharmakon, in an attempt to restore to its previous state an identity effaced when intoxicant turned toxic” (237-238). It is precisely this “two-in-one” aspect of the application of a pharmakon that reveals it to be the essence of transhumanism; it can be both poison and remedy.

To further this analysis, consider “super longevity,” which is the subset of transhumanism concerned with avoiding death. As Harari writes in Homo Deus, “Modern science and modern culture…don’t think of death as a metaphysical mystery…for modern people death is a technical problem that we can and should solve.” After all, he declares, “Humans always die due to some technical glitch” (22). These technical glitches, i.e., when one’s heart ceases to pump blood, are the bane of researchers like Aubrey de Grey, and fixing them forms the focus of his “Strategies for Engineered Negligible Senescence.” There is nothing in de Grey’s approach to suggest that there is any human technical problem that does not potentially have a human technical solution. De Grey’s techno-optimism represents the “remedy-aspect” of transhumanism as a view in which any problems—even those caused by technology—can be solved by technology.

As a “remedy,” transhumanism is based on a faith in technological progress, despite such progress being uneven, with beneficial effects that are not immediately apparent. For example, even if de Grey’s research does not result in the “cure” for death, his insight into anti-aging techniques and the resulting applications still have the potential to improve a person’s quality of life. This reflects Max More’s definition of transhumanism as “The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities” (3).

Importantly, More’s definition emphasizes transcendent enhancement, and it is this desire to be “upgraded” which distinguishes transhumanism. An illustration of the emergence of the upgrade mentality can be seen in the history of plastic surgery. Harari writes that while modern plastic surgery was born during the First World War as a treatment to repair facial injuries, upon the war’s end, surgeons found that the same techniques could be applied not to damaged noses, but to “ugly” ones, and “though plastic surgery continued to help the sick and wounded…it devoted increasing attention to upgrading the healthy” (52). Through its secondary use as an elective surgery of enhancement rather than exclusively as a technique for healing, one can see an example of the evolution of transhumanist philosophy out of medical philosophy—if the technology exists to change one’s face (and one has the money for it), a person should be morphologically free to take advantage of the enhancing capabilities of such a procedure.

However, to take a view of a person only as “waiting to be upgraded” marks the genesis of the “poison-aspect” of transhumanism as a pharmakon. One need not look farther than Martin Heidegger to find an account of this danger. In his 1954 essay, “The Question Concerning Technology,” Heidegger suggests that the threat of technology is ge-stell, or “enframing,” the way in which technology reveals the world to us primarily as a stock of resources to be manipulated. For him, the “threat” is not a technical problem for which there is a technical solution, but rather it is an ontological condition from which we can be saved—a condition which prevents us from seeing the world in any other way. Transhumanism in its “poison mode,” then, is the technological understanding of being—a singular way of viewing the world as a resource waiting to be enhanced. And what is problematic is that this way of revealing the world comes to dominate all others. In other words, the technological understanding of being comes to be the understanding of being.

However, a careful reading of Heidegger’s essay suggests that it is not a techno-pessimist’s manifesto. Technology has pearls concealed within its perils. Heidegger suggests as much when he quotes Hölderlin, “But where danger is, grows the saving power also” (333). Heidegger is asking the reader to avoid either/or dichotomous thinking about the essence of technology as something that is either dangerous or helpful, and instead to see it as a two-in-one. He goes to great lengths to point out that the “saving power” of technology, which is to say, of transhumanism, is that its essence is ambiguous—it is a pharmakon. Thus, the self-same instrumentalization that threatens to narrow our understanding of being also has the power to save us and force a consideration of new ways of being, and most importantly for Heidegger, new meanings of being.

Curing Death?

A transhumanist, and therefore pharmacological, take on Heidegger’s admonishment might be something as follows: In the future it is possible that a “cure” for death will threaten what we now know as death as a source of meaning in society—especially as it relates to a Christian heaven in which one yearns to spend an eternity, sans mortal coil. While the arrival of a death-cure will prove to be “poison” for a traditional understanding of Christianity, that same techno-humanistic artifact will simultaneously function as a “remedy,” spurring a Nietzschean transvaluation of values—that is, such a “cure” will arrive as a technological Zarathustra, forcing a confrontation with meaning, bringing news that “the human being is something that must be overcome” and urging us to ask anew, “what have you done to overcome him?” At the very least, as Steve Fuller recently pointed out in an interview, “transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection.” For those sympathetic to Leon Kass’ brand of repugnance, such suggestions are poison, and yet for a transhumanist such suggestions are a remedy to the glitch called death and the ways in which we relate to our finitude.

A more mundane example of the simultaneous danger and saving power of technology might be the much-hyped Google Glass—or in more transhuman terms, having Google Glass implanted into one’s eye sockets. While this procedure may conceal ways of understanding the spaces and people surrounding the wearer other than through the medium of the lenses, the lenses simultaneously have the power to reveal entirely new layers of information about the world and connect the wearer to the environment and to others in new ways.

With these examples it is perhaps becoming clear that by re-casting the essence of transhumanism as a pharmakon instead of an either/or dichotomy of purely techno-optimistic panacea or purely techno-pessimistic miasma, a more inclusive picture of transhumanist ontology emerges. Transhumanism can be both—cause and cure, danger and savior, threat and opportunity. Max More’s analysis, too, has a pharmacological flavor in that transhumanism, though committed to improving the human condition, has no illusions that, “The same powerful technologies that can transform human nature for the better could also be used in ways that, intentionally or unintentionally, cause direct damage or more subtly undermine our lives” (4).

Perhaps, then, More might agree that as a pharmakon, transhumanism is a Schrödinger’s cat always in a state of superposition—both alive and dead in the box. In the Copenhagen interpretation, a system stops being in a superposition of states and becomes either one or the other when an observation takes place. Transhumanism, too, is observer-dependent. For Ray Kurzweil, looking in the box, the cat is always alive with the techno-optimistic possibility of download into silicon and the singularity is near. For Ted Kaczynski, the cat is always dead, and it is worth killing in order to prevent its resurrection. Therefore, what the foregoing analysis suggests is that transhumanism is a drug—it is both remedy and poison—with the power to cure or the power to kill depending on who takes it. If the essence of transhumanism is elusive, it is precisely because it is a pharmakon cutting across categories ordinarily seen as mutually exclusive, forcing an ontological quest to conceptualize the in-between.

References

Derrida, Jacques. “Plato’s Pharmacy.” In Dissemination, translated by Barbara Johnson, 63-171. Chicago: University of Chicago Press, 1981.

Fuller, Steve. “Twelve Questions on Transhumanism’s Place in the Western Philosophical Tradition.” Social Epistemology Review and Reply Collective, 19 April 2017. http://wp.me/p1Bfg0-3yl.

Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow. HarperCollins, 2017.

Heidegger, Martin. “The Question Concerning Technology.” In Basic Writings, edited by David Farrell Krell. Harper & Row, 1977.

More, Max. “The Philosophy of Transhumanism,” In The Transhumanist Reader, edited by Max More and Natasha Vita-More, 3-17. Malden, MA: Wiley-Blackwell, 2013.

Rinella, Michael A. Pharmakon: Plato, Drug Culture, and Identity in Ancient Athens. Lanham, MD: Lexington Books, 2010.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “Counterfactuals in the White House: A Glimpse into Our Post-Truth Times.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 1-3.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3z1

Image credit: OZinOH, via flickr

May Day 2017 was filled with reporting and debating over a set of comments that US President Trump made while visiting Andrew Jackson’s mansion, the ‘Hermitage’, now a tourist attraction in Nashville, Tennessee. Trump said that had Jackson been deployed, he could have averted the US Civil War. Since Jackson had died about fifteen years before the war started, Trump was clearly making a counterfactual claim. However, it is an interesting claim—not least for its responses, which were fast and furious. They speak to the nature of our times. Let me start with the academic response and then move to how I think about the matter. A helpful compendium of the responses is here.

Jim Grossman of the American Historical Association spoke for all by claiming that Trump ‘is starting from the wrong premise’. Presumably, Grossman means that slavery was so bad that a war over it was inevitable. However well he meant this comment, it feeds into the anti-expert attitude of our post-truth era. Grossman seems to disallow Trump from imagining that preserving the American union was more important than the end of slavery—even though that was exactly how the issue was framed to most Americans 150 years ago. Scholarship is of course mainly about explaining why things happened the way they did. However, there is a temptation to conclude that it necessarily had to happen that way. Today’s post-truth culture attempts to curb this tendency. In any case, once the counterfactual door is open to other possible futures, historical expertise becomes more contestable, perhaps even democratised. The result may be that even when non-experts reach the same conclusion as the experts, it may be for importantly different reasons.

Who was Andrew Jackson?

Andrew Jackson is normally regarded as one of the greatest US presidents, whose face is regularly seen on the twenty-dollar banknote. He was the seventh president and the first one who was truly ‘self-made’ in the sense that he was not well educated, let alone oriented towards Europe in his tastes, as had been his six predecessors. It would not be unfair to say that he was the first President who saw a clear difference between being American and being European. In this respect, his self-understanding was rather like that of the heroes of Latin American independence. He was also given to an impulsive manner of public speech, not so different from the current occupant of the Oval Office.

Jackson volunteered at age thirteen to fight in the War of Independence from Britain, which was the first of many times when he was ready to fight for his emerging nation. Over the past fifty years much attention has been paid to his decimation of native American populations at various points in his career, both military and presidential, as well as his support for slavery. (Howard Zinn was largely responsible, at least at a popular level, for this recent shift in focus.) To make a long and complicated story short, Jackson was rather consistent in acting in ways that served to consolidate American national identity, even if that meant sacrificing the interests of various groups at various times—groups that arguably never recovered from the losses inflicted on them.

Perhaps Jackson’s most lasting positive legacy has been the current two-party—Democratic/Republican—political structure. Each party cuts across class lines and geographical regions. This achievement is now easy to underestimate—as the Democratic Party is now ruing. The US founding fathers were polarized about the direction that the fledgling nation should take, precisely along these divides. The struggles began in Washington’s first administration between his treasury minister Alexander Hamilton and his foreign minister Thomas Jefferson—and they persisted. Both Hamilton and Jefferson oriented themselves to Europe, Hamilton more in terms of what to imitate and Jefferson in terms of what to avoid. Jackson effectively performed a Gestalt switch, in which Europe was no longer the frame of reference for defining American domestic and foreign policy.

Enter Trump

Now enter Donald Trump, who says Jackson could have averted the Civil War, which by all counts was one of the bloodiest in US history, with recent estimates putting the total dead at around three-quarters of a million. Jackson was clearly a unionist but also clearly a slaveholder. So one imagines that Jackson would have preserved the union by allowing slaveholding, perhaps in terms of some version of the ‘states’ rights’ or ‘popular sovereignty’ doctrine, which gives states discretion over how they deal with economic matters. It’s not unreasonable that Jackson could have pulled that off, especially because the economic arguments for allowing slavery were stronger back then than is now normally remembered.

The Nobel Prize-winning economic historian Robert Fogel explored this point quite thoroughly more than forty years ago in his controversial Time on the Cross. It is not a perfect work, and the academic criticism it attracted is quite instructive about how one might better explore a counterfactual world in which slavery would have persisted in the US until it was no longer economically viable. Unfortunately, the politically sensitive nature of the book’s content has discouraged any follow-up. When I first read Fogel, I concluded that over time the price of slaves would come to approximate that of free labour considered over a worker’s lifetime. In other words, a slave economy would evolve into a capitalist economy without violence in the interim. Slaveholders would simply respond to changing market conditions. So, the moral question is whether it would have made sense to extend slavery over a few years before it would end up merging with what the capitalist world took to be an acceptable way of being, namely, wage labour. Fogel added ballast to his argument by observing that slaves tended to live longer and healthier lives than freed Blacks.

Moreover, Fogel’s counterfactual was not fanciful. Some version of the states’ rights doctrine was the dominant sentiment in the US prior to the Civil War. However, there were many different versions of the doctrine, which could not rally around a common spokesperson. This allowed the clear unitary voice for abolition emanating from the Christian dissenter community in the Northern states to exert enormous force, not least on the sympathetic and ambitious country lawyer, Abraham Lincoln, who became their somewhat unlikely champion. Thus, 1860 saw a Republican Party united around Lincoln fend off a divided opposition—two rival Democrats and a Constitutional Unionist—in the general election.

None of this is to deny that Lincoln was right in what he did. I would have acted similarly. Moreover, he probably did not anticipate just how bloody the Civil War would turn out to be—and the lasting scars it would leave on the American psyche. But the question on the table is not whether the Civil War was a fair price to pay to end slavery. Rather, the question is whether the Civil War could have been avoided—and, more to the point of Trump’s claim, whether Jackson would have been the man to do it. The answer is perhaps yes. The price would have been the extension of slavery for some period, until it became economically unviable for the slaveholders.

It is worth observing that Fogel’s main target seemed to be Marxists who argued that slavery made no economic sense and that it persisted in the US only because of racist ideology. Fogel’s response was that slaveholders probably were racist, but such a de facto racist economic regime would not have persisted as long as it did had both sides not benefitted from the arrangement. In other words, the success of the anti-slavery campaign was largely about the triumph of aspirational ideas over actual economic conditions. If anything, its success testifies to the level of risk that abolitionists were willing to assume on behalf of American society for the emancipation of slaves. Alexis de Tocqueville was only the most famous of the foreign commentators on the US to notice this at the time. Abolitionists were the proactionaries of their day with regard to risk. And this is how we should honour them now.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Steve Fuller is Auguste Comte Professor of social epistemology at the University of Warwick. His latest book is The Academic Caesar: University Leadership is Hard (Sage).

Shortlink: http://wp.me/p1Bfg0-3yV

Note: The following piece appeared under the title of ‘Free speech is not just for academics’ in the 27 April 2017 issue of Times Higher Education and is reprinted here with permission from the publisher.

Image credit: barnyz, via flickr

Is free speech an academic value? We might think that the self-evident answer is yes. Isn’t that why “no-platforming” controversial figures usually leaves the campus involved with egg on its face, amid scathing headlines about political correctness gone mad?

However, a completely different argument against universities’ need to defend free speech can be made, one that bears no taint of political correctness. It is what I call the “Little Academia” argument. It plays on the academic impulse to retreat to a parochial sense of self-interest in the face of external pressures.

The master of this argument for the last 30 years has been Stanley Fish, the American postmodern literary critic. Fish became notorious in the 1980s for arguing that a text means whatever its community of readers thinks it means. This seemed wildly radical, but it quickly became clear – at least to more discerning readers – that Fish’s communities were gated.

This seems to be Fish’s view of the university more generally. In a recent article in the US Chronicle of Higher Education, “Free Speech Is Not an Academic Value”, written in response to the student protests at Middlebury College against the presence of Charles Murray, a political economist who takes race seriously as a variable in assessing public policies, Fish criticised the college’s administrators for thinking of themselves as “free-speech champions”. This, he said, represented a failure to observe the distinction between students’ curricular and extracurricular activities. Regarding the latter, he said, administrators’ correct role was merely as “managers of crowd control”.

In other words, a university is a gated community designed to protect the freedom only of those who wish to pursue discipline-based inquiries: namely, professional academics. Students only benefit when they behave as apprentice professional academics. They are generously permitted to organise extracurricular activities, but the university’s official attitude towards these is neutral, as long as they do not disrupt the core business of the institution.

The basic problem with this picture is that it supposes that academic freedom is a more restricted case of generalised free expression. The undertow of Fish’s argument is that students are potentially freer to express themselves outside of campus.

To be sure, this may be how things look to Fish, who hails from a country that already had a Bill of Rights protecting free speech roughly a century before the concept of academic freedom was imported to unionise academics in the face of aggressive university governing boards. However, when Wilhelm von Humboldt invented the concept of academic freedom in early 19th-century Germany, it was in a country that lacked generalised free expression. For him, the university was the crucible in which free expression might be forged as a general right in society. Successive generations of teachers and students exercised the “freedom to teach” and the “freedom to learn”, the two being of equal and reciprocal importance.

On this view, freedom is the ultimate transferable skill embodied by the education process. The ideal received its definitive modern formulation in the sociologist Max Weber’s famous 1917 lecture to new graduate students, “Science as a Vocation”.

What is most striking about it to modern ears is his stress on the need for teachers to make space for learners in their classroom practice. This means resisting the temptation to impose their authority, which may only serve to disarm the student of any choice in what to believe. Teachers can declare and justify their own choice, but must also identify the scope for reasonable divergence.

After all, if academic research is doing its job, even the most seemingly settled fact may well be overturned in the fullness of time. Students need to be provided with some sense of how that might happen as part of their education to be free.

Being open about the pressure points in the orthodoxy is complicated because, in today’s academia, certain heterodoxies can turn into their own micro-orthodoxies through dedicated degree programmes and journals. These have become the lightning rods for debates about political correctness.

Nevertheless, the bottom line is clear. Fish is wrong. Academic freedom is not just for professional academics but for students as well. The honourable tradition of independent student reading groups and speaker programmes already testifies to this. And in some contexts they can count towards satisfying formal degree requirements. Contra Little Academia, the “extra” in extracurricular should be read as intending to enhance a curriculum that academics themselves admit is neither complete nor perfect.

Of course, students may not handle extracurricular events well. But that is not about some non-academic thing called ‘crowd control’. It is simply an expression of the growing pains of students learning to be free.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Steve Fuller holds the Auguste Comte Chair in Social Epistemology at the University of Warwick. He is the author of more than twenty books, the next of which is Post-Truth: Knowledge as a Power Game (Anthem).

Shortlink: http://wp.me/p1Bfg0-3yI

Note: This article originally appeared in the EASST Review 36(1) April 2017 and is republished below with the permission of the editors.

Image credit: Hans Luthart, via flickr

STS talks the talk without ever quite walking the walk. Case in point: post-truth, the offspring that the field has always been trying to disown, not least in the latest editorial of Social Studies of Science (Sismondo 2017). Yet STS can be fairly credited with having both routinized in its own research practice and set loose on the general public—if not outright invented—at least four common post-truth tropes:

1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.

2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.

3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.

4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties.

What is perhaps most puzzling from a strictly epistemological standpoint is that STS recoils from these tropes whenever such politically undesirable elements as climate change deniers or creationists appropriate them effectively for their own purposes. Normally, that would be considered ‘independent corroboration’ of the tropes’ validity, as these undesirables demonstrate that one need not be a politically correct STS practitioner to wield the tropes effectively. It is almost as if STS practitioners have forgotten the difference between the contexts of discovery and justification in the philosophy of science. The undesirables are actually helping STS by showing the robustness of its core insights as people who otherwise overlap little with the normative orientation of most STS practitioners turn them to what they regard as good effect (Fuller 2016).

Of course, STSers are free to contest any individual or group that they find politically undesirable—but on political, not methodological grounds. We should not be quick to fault undesirables for ‘misusing’ our insights, let alone apologize for, self-censor or otherwise restrict our own application of these insights, which lay at the heart of Latour’s (2004) notorious mea culpa. On the contrary, we should defer to Oscar Wilde and admit that imitation is the sincerest form of flattery. STS has enabled the undesirables to raise their game, and if STSers are too timid to function as partisans in their own right, they could try to help the desirables raise their game in response.

Take the ongoing debates surrounding the teaching of evolution in the US. The fact that intelligent design theorists are not as easily defeated on scientific grounds as young earth creationists means that when their Darwinist opponents leverage their epistemic authority on the former as if they were the latter, the politics of the situation becomes naked. Unlike previous creationist cases, the judgement in Kitzmiller v. Dover Area School Board (in which I served as an expert witness for the defence) dispensed with the niceties of the philosophy of science and resorted to the brute sociological fact that most evolutionists do not consider intelligent design theory science. That was enough for the Darwinists to win the battle, but will it win them the war? Those who have followed the ‘evolution’ of creationism into intelligent design might conclude that Darwinists act in bad faith by not taking seriously that intelligent design theorists are trying to play by the Darwinists’ rules. Indeed, more than ten years after Kitzmiller, there is little evidence that Americans are any friendlier to Darwin than they were before the trial. And with Trump in the White House…?

Thus, I find it strange that in his editorial on post-truth, Sismondo extols the virtues of someone who seems completely at odds with the STS sensibility, namely, Naomi Oreskes, the Harvard science historian turned scientific establishment publicist. A signature trope of her work is the pronounced asymmetry between the natural emergence of a scientific consensus and the artificial attempts to create scientific controversy (e.g. Oreskes and Conway 2011). It is precisely this ‘no science before its time’ sensibility that STS has spent the last half-century trying to oppose. Even if Oreskes’ political preferences tick all the right boxes from the standpoint of most STSers, she has methodologically cheated by presuming that the ‘truth’ of some matter of public concern most likely lies with what most scientific experts think at a given time. Indeed, Sismondo’s passive-aggressive agonizing comes from his having to reconcile his intuitive agreement with Oreskes with the contrary thrust of most STS research.

This example speaks to the larger issue addressed by post-truth, namely, distrust in expertise, to which STS has undoubtedly contributed by circumscribing the prerogatives of expertise. Sismondo fails to see that even politically mild-mannered STSers like Harry Collins and Sheila Jasanoff do this in their work. Collins is mainly interested in expertise as a form of knowledge that other experts recognize as that form of knowledge, while Jasanoff is clear that the price that experts pay for providing trusted input to policy is that they do not engage in imperial overreach. Neither position approximates the much more authoritative role that Oreskes would like to see scientific expertise play in policy making. From an STS standpoint, those who share Oreskes’ normative orientation to expertise should consider how to improve science’s public relations, including proposals for how scientists might be socially and materially bound to the outcomes of policy decisions taken on the basis of their advice.

When I say that STS has forced both established and less than established scientists to ‘raise their game’, I am alluding to what may turn out to be STS’s most lasting contribution to the general intellectual landscape, namely, to think about science as literally a game—perhaps the biggest game in town. Consider football, where matches typically take place between teams with divergent resources and track records. Of course, the team with the better resources and track record is favoured to win, but sometimes it loses and that lone event can destabilise the team’s confidence, resulting in further losses and even defections. Each match is considered a free space where for ninety minutes the two teams are presumed to be equal, notwithstanding their vastly different histories. Francis Bacon’s ideal of the ‘crucial experiment’, so eagerly adopted by Karl Popper, relates to this sensibility as definitive of the scientific attitude. And STS’s ‘social constructivism’ simply generalizes this attitude from the lab to the world. Were STS to embrace its own sensibility much more wholeheartedly, it would finally walk the walk.

References

Fuller, Steve. ‘Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.’ Social Epistemology Review and Reply Collective, December 2016: http://wp.me/p1Bfg0-3nx.

Latour, Bruno. ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.’ Critical Inquiry 30, no. 2 (2004): 225–248.

Oreskes, Naomi and Erik M. Conway. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury, 2011.

Sismondo, Sergio. ‘Post-Truth?’ Social Studies of Science 47, no. 1 (2017): 3-6.

The following is a set of questions concerning the place of transhumanism in the Western philosophical tradition that Robert Frodeman’s Philosophy 5250 class at the University of North Texas posed to Steve Fuller, who met with the class via Skype on 11 April 2017.

Shortlink: http://wp.me/p1Bfg0-3yl

Image credit: Joan Sorolla, via flickr

1. First a point of clarification: we should understand you not as a health span increaser, but rather as interested in infinity, or in some sense in man becoming a god? That is, H+ is a theological rather than practical question for you?

Yes, that’s right. I differ from most transhumanists in stressing that short term sacrifice—namely, in the form of risky experimentation and self-experimentation—is a price that will probably need to be paid if the long-term aims of transhumanism are to be realized. Moreover, once we finally make the breakthrough to extend human life indefinitely, there may be a moral obligation to make room for future generations, which may take the form of sending the old into space or simply encouraging suicide.

2. How do you understand the relationship between AI and transhumanism?

When Julian Huxley coined ‘transhumanism’ in the 1950s, it was mainly about eugenics, the sort of thing that his brother Aldous satirized in Brave New World. The idea was that the transhuman would be a ‘new and improved’ human, not so different from a new model car. (Recall that Henry Ford is the founding figure of Brave New World.) However, with the advent of cybernetics, also happening around the same time, the idea that distinctly ‘human’ traits might be instantiated in both carbon and silicon began to be taken seriously, with AI being the major long-term beneficiary of this line of thought. Some transhumanists, notably Ray Kurzweil, find the AI version especially attractive, perhaps because it caters to their ‘gnostic’ impulse to have the human escape all material constraints. In the transhumanist jargon, this is called ‘morphological freedom’, a sort of secular equivalent of pure spirituality. However, this is to take AI in a somewhat different direction from its founders in the era of cybernetics, which was about creating intelligent machines from silicon, not about transferring carbon-based intelligence into silicon form.

3. How seriously do you take talk (by Bill Gates and others) that AI is an existential risk?

Not very seriously—at least on its own terms. By the time some superintelligent machine might pose a genuine threat to what we now regard as the human condition, the difference between human and non-human will have been blurred, mainly via cyborg identities of the sort for which Stephen Hawking might end up being seen as a trailblazer. Whatever political questions would arise concerning AI at that point would likely divide humanity itself profoundly and not be a simple ‘them versus us’ scenario. It would be closer to the Cold War choice of Communism vs Capitalism. But honestly, I think all this ‘existential risk’ stuff gets its legs from genuine concerns about cyberwarfare. Taken on its face, however, cyberwarfare is nothing more than human-on-human warfare conducted by high-tech means. The problem is still mainly with the people fighting the war rather than with the algorithms that they program to create these latest weapons of mass destruction. I wonder sometimes whether this fixation on superintelligent machines is simply an indirect way to get humans to become responsible for their own actions—the sort of thing that psychoanalysts used to call ‘displacement behavior’ but the rest of us call ‘ventriloquism’.

4. If, as Socrates claims, to philosophize is to learn how to die, does H+ represent the end of philosophy?

Of course not! The question of death is just posed differently, because even from a transhumanist standpoint it may be in the best interest of humanity as a whole for individuals to choose death, so as to give future generations a chance to make their mark. Alternatively, and especially if transhumanists are correct that our extended longevity will be accompanied by rude health, then the older and wiser among us—and there is no denying that ‘wisdom’ is an age-related virtue—might spend their later years taking greater risks, precisely because they would be better able to handle the various contingencies. I am thinking that such healthy elderly folk might be best suited to interstellar exploration because of the ultra-high risks involved. Indeed, I could see a future social justice agenda that would require people to demonstrate their entitlement to longevity by documenting the increasing amount of risk that they are willing to absorb.

5. What of Heidegger’s claim that to be an authentic human being we must project our lives onto the horizon of our death?

I couldn’t agree more! Transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection. I think Heidegger and other philosophers have invested death with such great import simply because of its apparent irreversibility. However, if you want to recreate Heidegger’s sense of ‘ultimate concern’ in a post-death world, all you would need to do is find some irreversible processes and unrecoverable opportunities that even transhumanists acknowledge. A hint is that when transhumanism was itself resurrected in its current form, it was known as ‘extropianism’, suggesting an active resistance to entropy. For transhumanists—very much in the spirit of the original cybernetician, Norbert Wiener—entropy is the ultimate irreversible process and hence the ultimate challenge for the movement to overcome.

6. What is your response to Heidegger’s claim that it is in the confrontation with nothingness, in the uncanny, that we are brought back to ourselves?

Well, that certainly explains the phenomenon that roboticists call the ‘uncanny valley’, whereby people are happy to deal with androids until they resemble humans ‘a bit too much’, at which point people are put off. There are two sides to this response—not only that the machines seem too human but also that they are still recognized as machines. So the machines haven’t quite yet fooled us into thinking that they’re one of us. One hypothesis to explain the revulsion is that such androids appear to be like artificially animated dead humans, a bit like Frankenstein. Heideggerians can of course use all this to their advantage to demonstrate that death is the ultimate ‘Other’ to the human condition.

7. Generally, who do you think are the most important thinkers within the philosophic tradition for thinking about the implications of transhumanism?

Most generally, I would say the Platonic tradition, which has been most profound in considering how the same form might be communicated through different media. So when we take seriously the prospect that the ‘human’ may exist in carbon and/or silicon and yet remain human, we are following in Plato’s footsteps. Christianity holds a special place in this line of thought because of the person of Jesus Christ, who is somehow at once human and divine in equal and all respects. The branch of theology called ‘Christology’ is actually dedicated to puzzling over these matters, various solutions to which have become the stuff of science fiction characters and plots. St Augustine originally made the problem of Christ’s identity a problem for all of humanity when he leveraged the Genesis claim that we are created in the ‘image and likeness of God’ to invent the concept of ‘will’ to name the faculty of free choice that is common to God and humans. We just exercise our wills much worse than God exercises his, as demonstrated by Adam’s misjudgment, which started Original Sin (an Augustinian coinage). When subsequent Christian thinkers have said that ‘the flesh is weak’, they are talking about how humanity’s default biological condition holds us back from fully realizing our divine potential. Kant acknowledged as much in secular terms when he explicitly defined the autonomy necessary for truly moral action in terms of resisting the various paths of least resistance put before us. These are what Christians originally called ‘temptations’, what Kant himself called ‘heteronomy’ and what Herbert Marcuse, in a truly secular vein, would later call ‘desublimation’.

8. One worry that arises from the Transhumanism project (especially about gene editing, growing human organs in animals, etc.) regards the treatment of human enhancements as “commercial products”. In other words, the worry concerns the (further) commodification of life. Does this concern you? More generally, doesn’t H+ imply a perverse instrumentalization of our being?

My worries about commodification are less to do with the process itself than with the fairness of the exchange relations in which the commodities are traded. Influenced by Locke and Nozick, I would draw a strong distinction between alienation and exploitation, which tends to be blurred in the Marxist literature. Transhumanism arguably calls for an alienation of the body from human identity, in the sense that your biological body might be something that you trade for a silicon upgrade, yet your humanity remains intact on both sides of the transaction, at least in terms of formal legal recognition. Historic liberal objections to slavery rested on a perceived inability to do this coherently. Marxism upped the ante by arguing that the same objections applied to wage labor under the sort of capitalism promoted by the classical political economists of Marx’s day, who saw themselves as scientific underwriters of the new liberal order emerging in post-feudal Europe. However, the force of the Marxist objections rests on alienation being linked to exploitation. In other words, not only am I free to sell my body or labor, but you are also free to offer whatever price serves to close the sale. However, the sorts of power imbalances which lie behind exploitation can be—and have been—addressed in various ways. Admittedly more work needs to be done, but a time will come when alienation is simply regarded as a radical exercise of freedom—specifically, the freedom to, say, project myself as an avatar in cyberspace or, conversely, convert part of my being into property that can be traded for something that may in turn enhance my being.

9. Robert Nozick paints a possible scenario in Anarchy, State, and Utopia where he describes a “genetic supermarket” where we can choose our genes just as one selects a frozen pizza. Nozick’s scenario implies a world where human characteristics are treated in the way we treat other commercial products. In the Transhuman worldview, is the principle or ultimate value of life commercial?

There is something to that, in the sense that anything that permits discretionary choice will lend itself to commercialization unless the state intervenes—and I believe that the state should intervene and regulate the process. Unfortunately, from a PR standpoint, a hundred years ago that was called ‘eugenics’. Nevertheless, people in the future may need to acquire a license to procreate, constraints may even be put on the sorts of offspring that are and are not permissible, and people may even be legally required to undergo periodic forms of medical surveillance—at least as a condition of employment or welfare benefits. (Think Gattaca as a first pass at this world.) It is difficult to see how an advanced democracy that acknowledges already existing persistent inequalities in life-chances could agree to ‘designer babies’ without also imposing the sort of regime that I am suggesting. Would this unduly restrict people’s liberty? Perhaps not, if people will have acquired the more relaxed attitude to alienation described in my answer to the previous question. However, the elephant in the room—which I argued in The Proactionary Imperative is more important—is liability. In other words, who is responsible when things go wrong in a regime that encourages people to experiment with risky treatments? This is something that should focus the minds of lawyers and insurers, especially in a world where people are presumed to be freer per se because they have freer access to information.

10. Is human enhancement consistent with other ways in which people modify their lifestyles, that is, are they analogous in principle to buying a new cell phone, learning a language or working out? Is it a process of acquiring ideas, goods, assets, and experiences that distinguish one person from another, either as an individual or as a member of a community? If not, how is human enhancement different?

‘Human enhancement’, at least as transhumanists understand the phrase, is about ‘morphological freedom’, which I interpret as a form of ultra-alienation. In other words, it’s not simply about people acquiring things, including prosthetic extensions, but also converting themselves to a different form, say, by uploading the contents of one’s brain into a computer. You might say that transhumanism’s sense of ‘human enhancement’ raises the question of whether one can be at once trader and traded in a way that enables the two roles to be maintained indefinitely. Classical political economy seemed to imply this, but Marx denied its ontological possibility.

11. The thrust of 20th-century Western philosophy could be articulated in terms of the striving for possible futures, whether that future be Marxist, Fascist, or some other ideologically utopian scheme, and the philosophical fallout of coming to terms with their successes and failures. In our contemporary moment, it appears as if widespread enthusiasm for such futures has disappeared, as the future itself seems as fragmented as our society. H+ is a new, similar effort; but it seems to be a specific evolution of this futurism, focused not on a society but on the human person (even specific human persons). Comments?

In terms of how you’ve phrased your question, transhumanism is a recognizably utopian scheme in nearly all respects—including the assumption that everyone would find its proposed future intrinsically attractive, even if people disagree on how or whether it might be achieved. I don’t see transhumanism as so different from capitalism or socialism as pure ideologies in this sense. They all presume their own desirability. This helps to explain why people who don’t agree with the ideology are quickly diagnosed as somehow mentally or morally deficient.

12. A common critique of Heidegger’s thought comes from an ethical turn in Continental philosophy. While Heidegger understands death to be the harbinger of meaning, he means specifically and explicitly one’s own death. Levinas, however, maintains that the primary experience of death that does this work is the death of the Other. One’s experience with death comes to one through the death of a loved one, a friend, a known person, or even through the distant reality of a war or famine across the world. In terms of this critique, the question of transhumanism then leads to a socio-ethical concern: if one, using H+ methods, technologies, and enhancements, can significantly inoculate oneself against the threat of death, how ethically (in the Levinasian sense) can one then legitimately live in relation to others in a society, if the threat of the death of the Other no longer provides one the primal experience of the threat of death?

Here I’m closer to Heidegger than Levinas in terms of grounding intuition, but my basic point would be that an understanding of the existence and significance of death is something that can be acquired without undergoing a special sort of experience. Phenomenologically inclined philosophers sometimes seem to assume that a significant experience must happen significantly. But this is not true at all. My main understanding of death as a child came not from people I know dying, but simply from watching the morning news on television and learning about the daily body count from the Vietnam War. That was enough for me to appreciate the gravity of death—even before I started reading the Existentialists.

Editor’s Note:

    The following are elements of syllabi for a graduate and an undergraduate course taught by Robert Frodeman in spring 2017 at the University of North Texas. These courses offer an interesting juxtaposition of texts aimed at reimagining how to perform academic philosophy as “field philosophy”. Field philosophy seeks to address meaningfully, and demonstrably, contemporary public debates, regarding transhumanism for example, while attending to shifting ideas and frameworks of both the Humboldtian university and the “new American” university.

Shortlink: http://wp.me/p1Bfg0-3xB

Philosophy 5250: Topics in Philosophy

Overall Theme

This course continues my project of reframing academic philosophy within the approach and problematics of field philosophy.

In terms of philosophic categories, we will be reading classics in 19th and 20th century continental philosophy: Hegel, Nietzsche, and Heidegger. But we will be approaching these texts with an agenda: to look for insights into a contemporary philosophical controversy, the transhumanist debate. This gives us two sets of readings – our three authors, and material from the contemporary debate surrounding transhumanism.

Now, this does not mean that we will restrict our interest in our three authors to what is applicable to the transhumanist debate; our thinking will go wherever our interests take us. But the topic of transhumanism will be primus inter pares.

Readings

  • Hegel, Phenomenology of Spirit, Preface
  • Hegel, The Science of Logic, selections
  • Heidegger, Being and Time, Division 1, Macquarrie translation
  • Heidegger, ‘The Question Concerning Technology’
  • Nietzsche, selections from Thus Spoke Zarathustra and Beyond Good and Evil

Related Readings

Grading

You will have two assignments, both due at the end of the semester. I strongly encourage you to turn in drafts of your papers.

  • A 2,500-word paper on a major theme from one of our three authors.
  • A 2,500-word paper using our three authors to illuminate your view of the transhumanist challenge.

Philosophy 4750: Philosophy and Public Policy

Overview

This is a course in meta-philosophy. It seeks to develop a philosophy adequate for the 21st century.

Academic philosophy has been captured by a set of categories (ancient, modern, contemporary; ethics, logic, metaphysics, epistemology) that are increasingly dysfunctional for contemporary life. Therefore, this is not merely a course on a specific subject matter (i.e., ‘public policy’) to be added to the rest. Rather, it seeks to question, and philosophize about, the entire knowledge enterprise as it exists today – and to philosophize about the role of philosophy in understanding and perhaps (re)directing the knowledge enterprise.

The course will cover the following themes:

  • The past, present, and future of the university in the Age of Google
  • The end of disciplinarity and the rise of accountability culture
  • The New Republic of Letters and the role of the humanist today
  • The failure of applied philosophy and the development of alternative models

Course Structure

This course is ‘live’: it reflects 20 years of my research on the place of philosophy in contemporary society. As such, the course embodies a Humboldtian connection between teaching and research: I am not simply a teacher and a researcher; I’m a teacher-researcher who shares the insights I’m developing with students, testing my thinking in the classroom, and sharing my freshest thoughts. This breaks with the corporate model of education where the professor is an interchangeable cog, teaching the same materials that could be gotten at any university worldwide – while also opening me up to charges of self-indulgence.

Readings

  • Michael M. Crow and William B. Dabars, Designing the New American University
  • Crow chapter in HOI
  • Clark, Academic Charisma
  • Fuller, The Academic Caesar
  • Rudy, The Universities of Europe, 1100-1914
  • Fuller, Sociology of Intellectual Life
  • Smith, Philosophers 6 Types
  • Socrates Tenured: The Institutions of 21st Century Philosophy
  • Plato, The Republic, Book 1