
Author Information: Stephen Turner, University of South Florida, turner@usf.edu

Turner, Stephen. “Fuller’s roter Faden.” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 25-29.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3WX

Art by William Blake, depicting the creation of reality.
Image via AJC1 via Flickr / Creative Commons

The Germans have a notion of “research intention,” by which they mean the underlying aim of an author’s work as revealed over its whole trajectory. Francis Remedios and Val Dusek have provided, if not an account itself, the material for an account of Steve Fuller’s research intention, or, as they put it, the “thread” that runs through his work.

These “intentions” are not apparent to the authors themselves, which is part of the point: at the start of their intellectual journey they are working out a path which leads they know not where, but which can be seen, retrospectively, as a path with an identifiable beginning and end. We are now at a point where we can say something about this path in the case of Fuller. We can also see the ways in which various Leitmotifs, corollaries, and persistent themes fit with the basic research intention, and see why Fuller pursued different topics at different times.

A Continuity of Many Changes

The ur-source for Fuller’s thought is his first book, Social Epistemology. On the surface, this book seems alien to the later work, so much so that one can think of Fuller as having taken a turn. But seen in terms of an underlying research intention, and indeed in Fuller’s own self-explications included in this text, this is not the case: the later work is a natural development, almost an entailment, of the earlier work, properly understood.

The core of the earlier work was the idea of constructing a genuine epistemology, in the sense of a kind of normative account of scientific knowledge, out of “social” considerations and especially social constructivism, which at the time was considered to be either descriptive or anti-epistemological, or both. For Fuller, this goal meant that the normative content would at least include, or be dominated by, the “social” part of epistemology, considerations of the norms of a community, norms which could be changed, which is to say made into a matter of “policy.”

This leap to community policies leads directly to a set of considerations that are corollaries to Fuller’s long-term project. We need an account of what the “policy” options are, and a way to choose between them. Fuller was trained at a time when there was a lingering controversy over this topic: the conflict between Kuhn and the Popperians. Kuhn represented a kind of consensus-driven authoritarianism. For him it was right and necessary for science to be organized around ungroundable premises that enabled science to be turned into puzzle-solving, rather than insoluble disputes over fundamentals. Such disputes occurred, and produced new ungroundable consensual premises, only at the rare moments of scientific revolutions.

Progress was possible through these revolutions, but our normal notions of progress were suspended during the revolutions and applied only to the normal puzzle-solving phase of science. Popperianism, on the contrary, ascribed progress to a process of conjecture and refutation in which ever broader theories developed to account for the failures of previous conjectures, in an unending process.

Kuhnianism, seen through the lens of Fuller’s project in Social Epistemology, was itself a kind of normative epistemology, which said “don’t dispute fundamentals until the sad day comes when one must.” Fuller’s instincts were always with Popper on this point: authoritarian consensus has no place in science for either of them. But Fuller provided a tertium quid, which had the effect of upending the whole conflict. He took over the idea of the social construction of reality and gave it a normative and collective or policy interpretation. We make knowledge. There is no knowledge that we do not create.

The creation is a “social” activity, as the social constructivists claimed. But this social activity itself needed to be governed by a sense of responsibility for these acts of creation, and because they were social, this meant by a “policy.” What this policy should be was not clear: no one had connected the notion of construction to the notion of responsibility in this way. But it was a clear implication of the idea of knowledge as a product of making. Making implies a responsibility for the consequences of making.

Dangers of Acknowledging Our Making

This was a step that few people were willing to take. Traditional epistemology was passive. Theory choice was choice between the theories that were presented to the passive chooser. The choices could be made on purely epistemic grounds. There was no consideration of responsibility, because the choices were an end point, a matter of scientific aesthetics, with no further consequences. Fuller, as Remedios and Dusek point out, rejects this passivity, a rejection that grows directly out of his appropriation of constructivism.

From a “making” or active epistemic perspective, Kuhnianism is an abdication of responsibility, and a policy of passivity. But Fuller also sees that overcoming the passivity Kuhn describes as the normal state of science requires an alternative policy, one which enables the knowledge that is in fact “made,” but which is presented as given, to be challenged. This is a condition of acknowledging responsibility for what is made.

There is, however, an oddity in talking about responsibility in relation to collective knowledge producing, which arises because we don’t know in advance where the project of knowledge production will lead. I think of this by analogy with the debate between Malthus and Marx. If one accepts the static assumptions of Malthus, his predictions are valid. Marx made the productivist argument that with every newborn mouth came two hands. He would have done better to argue that with every mouth came a knowledge-making brain, because improvements in food production technology enabled the support of much larger populations, more technology, and so forth—something Malthus did not consider and indeed could not have. That knowledge was in the future.

Fuller’s alternative grasps this point: utilitarian considerations from present static assumptions can’t provide a basis for thinking about responsibility or policy. We need to let knowledge production proceed regardless of what we think the consequences will be—thinking which is necessarily based on static assumptions about knowledge itself. Put differently, we need to value knowledge in itself, because our future is itself made through the making of knowledge.

“Making” or “constructing” is more than a cute metaphor. Fuller shows that there is a tradition in science itself of thinking about design, both in the sense of making new things as a form of discovery, and in the sense of reverse engineering that which exists in order to see how it works. This leads him to the controversial waters of intelligent design, in which the world itself is understood as, at least potentially, the product of design. It also takes us to some metaphysics about humans, human agency, and the social character of human agency.

One can separate some of these considerations from Fuller’s larger project, but they are natural concomitants, and they resolve some basic issues with the original project. The project of constructivism requires a philosophical anthropology. Fuller provides this with an account of the special character of human agency: as knowledge makers, humans are God-like, or participate in the mind of God. If there is a God, a super-agent, it will also be a maker and knowledge maker, not in the passive but in the active sense. In participating in the mind of God, we participate in this making.

“Shall We Not Ourselves Have to Become Gods?”

This picture has further implications: if we are already God-like in this respect, we can remake ourselves in God-like ways. To renounce these powers is as much of a choice as using them. But it is difficult for the renouncers to draw a line on what to renounce. Just transhumanism? Or race-related research? Or what else? Fuller rejects renunciation of the pursuit of knowledge and the pursuit of making the world. The issue is the same as the issue between Marx and Malthus. The renouncers base their renunciation on static models. They estimate risks on the basis of what is and what is known now. But these are both things that we can change. This is why Fuller proposes a “proactionary” rather than a precautionary stance and supports underwriting risk-taking in the pursuit of scientific advance.

There is, however, a problem with the “social” and policy aspect of scientific advance. On the one hand, science benefits humankind. On the other, it is an elite activity, even a form of Gnosticism. Fuller’s democratic impulse resists this. But his desire for the full use of human power implies a special role for scientists in remaking humanity and making the decisions that go into this project. This takes us right back to the original impulse for social epistemology: the creation of policy for the creation of knowledge.

This project is inevitably confronted with the Malthus problem: we have to make decisions about the future now, on the basis of static assumptions we have no real alternative to. At best we can hint at future possibilities which will be revealed by future science, and hope that they will work out. As Remedios and Dusek note, Fuller is consistently on the side of expanding human knowledge and power, for risk-taking, and is optimistic about the world that would be created through these powers. He is also highly sensitive to the problem of static assumptions: our utilities will not be the utilities of the creatures of the future we create through science.

What Fuller has done is to create a full-fledged alternative to the conventional wisdom about the science-society relation and the present way of handling risk. The standard view is represented by Philip Kitcher: it wishes to guide knowledge in ways that reflect the values we should have, which includes the suppression of certain kinds of knowledge by scientists acting paternalistically on behalf of society.

This is a rigidly Malthusian way of thinking: the values (in this case a particular kind of egalitarianism that doesn’t include epistemic equality with scientists) are fixed, the scientists’ ideas of the negative consequences of something like research on “racial” differences are taken to be valid, and policy should be made in accordance with the same suppression of knowledge. Risk aversion, especially in response to certain values, becomes the guiding “policy” of science.

Fuller’s alternative preserves some basic intuitions: that science advances by risk-taking, and by sometimes failing, in the manner of Popper’s conjectures and refutations. This requires the management of science, but management that ensures openness in science, supports innovation, and now and then supports concerted efforts to challenge consensuses. It also requires us to bracket our static assumptions about values, limits, risks, and so forth, not so much to ignore these things but to relativize them to the present, so that we can leave open the future. The conventional view trades heavily on the problem of values, and the potential conflicts between epistemic values and other kinds of values. Fuller sees this as a problem of thinking in terms of the present: in the long run these conflicts vanish.

This end point explains some of the apparent oddities of Fuller’s enthusiasms and dislikes. He prefers the Logical Positivists to the model-oriented philosophy of science of the present: laws are genuinely universal; models are built on assumptions drawn from present knowledge and thus share Malthus’s problems. He is skeptical about science done to support policy, for the same reason. And he is skeptical about ecologism as well, which is deeply committed to acting on static assumptions.

The Rewards of the Test

Fuller’s work stands the test of reflexivity: he is as committed to challenging consensuses and taking risks as he exhorts others to be. And for the most part, it works: it is an old Popperian point that it is only through comparison with strong alternatives that a theory can be tested; otherwise it will simply pile up inductive support, blind to what it is failing to account for. But as Fuller would note, there is another issue of reflexivity here, and it comes at the level of the organization of knowledge. To have conjectures and refutations one must have partners who respond. In the consensus-driven world of professional philosophy today, this does not happen. And that is a tragedy. It also makes Fuller’s point: that the community of inquirers needs to be managed.

It is also a tragedy that there are not more Fullers. Constructing a comprehensive response to major issues and carrying it through many topics and many related issues, as people like John Dewey once did, is an arduous task, but a rewarding one. It is a mark of how much the “professionalization” of philosophy has done to alter the way philosophers think and write. This is a topic that is too large for a book review, but it is one that deserves serious reflection. Fuller raises the question by looking at science as a public good and asking how a university should be organized to maximize its value. Perhaps this makes sense for science, given that science is a money loser for universities, but at the same time its main claim on the public purse. For philosophy, we need to ask different questions. Perhaps the much talked about crisis of the humanities will bring about such a conversation. If it does, it is thinking like Fuller’s that will spark the discussion.

Contact details: turner@usf.edu

References

Remedios, Francis X., and Val Dusek. Knowing Humanity in the Social World: The Path of Steve Fuller’s Social Epistemology. New York: Palgrave Macmillan, 2018.

Author Information: Alcibiades Malapi-Nelson, Seneca College, alci.malapi@outlook.com

Malapi-Nelson, Alcibiades. “Transhumanism and the Catholic Church.” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 12-17.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3WM

You don’t become the world’s oldest continuing institution without knowing how to adapt to the times.
Image by Lawrence OP via Flickr / Creative Commons.

Most accounts of transhumanism coming from Catholic circles show a mild to radical rejection of the idea of a deep alteration, by means of pervasive emergent technologies, of whatever we understand as “human nature”. These criticisms come from both progressive and conservative Catholic flanks. However, as is increasingly becoming evident, the left/right divide no longer captures ethical, political and philosophical stances in an accurate manner.

There are cross-linked concerns which transcend this traditional dichotomy. The Church, insofar as it also is a human institution, is not immune to this ongoing ‘rotating axis’. The perceived Catholic unfriendliness to transhumanism stems from views that do not take into account the very mission that defines the Church’s existence.

Conceptions of Human Dignity

To be sure, there are aspects of transhumanism that may find fundamental rejection when confronted with Church doctrine—particularly in what concerns human dignity. In this context, attempts at accomplishing indefinite life extension will not find fertile ground in Catholic milieus. Needless to say, the more vulgar aspects of the transhumanist movement—such as the fashionable militant atheism sponsored by some, or the attempt to simply replace religion with technology—would not find sympathy either. However, precisely due to an idiosyncratically Catholic attention to human dignity, attempts at the improvement of the human experience shall certainly attract the attention of the Magisterium.

Perhaps more importantly, and not unrelated to a distinctly Catholic understanding of personal self-realization, the Church will have to cope with the needs that a deeply altered human condition will entail. Indeed, the very cause for the Church to exist is self-admittedly underpinned by the fulfillment of a particular service to humans: Sacrament delivery. Hence, the Magisterium has an ontological interest (i.e., pertaining to what counts as human) in better coping with foreseeable transhumanist alterations, as well as a functional one (e.g., to ensure both proper evangelization and the fulfilling of its sacramental prime directive).

The Church is an institution that thinks, plans and strategizes in terms of centuries. A cursory study of its previous positions regarding the nature of humanity reveals that the idea of “the human” never was a monolithic, static notion. Indeed, it is a fluid one that has been sponsored and defended under different guises in previous eras, pressed by sui generis apostolic needs. As a guiding example, one could pay attention to the identity-roots of that area of the globe which currently holds more than 60% of the Catholic world population: Latin America. It is well documented how the incipient attempts at an articulation of “human rights”, coming from the School of Salamanca in the 16th century (epitomized by Francisco Vitoria, Francisco Suárez—the Jesuit who influenced Leibniz, Schopenhauer and Heidegger—and indirectly, by Bartolomé de las Casas), had as an important aspect of their agenda the extension of the notion of humanity to the hominid creatures found inhabiting the “West Indies”—the Americas.

The usual account of Heilsgeschichte (Salvation History), canonically starting with the narrative of the People of God and ending with the Roman Empire, could not be meaningfully conveyed to these newly found peoples, given that the latter were locked in an absolutely parallel world. In fact, a novel “theology of charity” had to be developed in order to spread the Good News, without referencing a (non-existent) “common history”. Their absolute humanity thus had to be urgently established, so that, unlike the North American Protestant experience, widespread legalized slavery would not ensue—a task which was partly accomplished via the promulgation of the 1537 encyclical Sublimis Deus.

Most importantly, once their humanity was philosophically and legally instituted, the issue regarding the necessary services for both their salvation and their self-development immediately emerged (to be sure, not everyone agreed on such an extension of humanity). Spain sent an average of three ‘apostolic agents’ – priests – per day to fulfill this service. The controversial nature of the “Age of Discovery” notwithstanding, this massive Spanish mobilization may partly account for the Church being to this day perhaps the most trusted institution in Latin America. Be that as it may, we can see here a paradigmatic case where the Church extended the notion of humanity to entities with profoundly distinct features, so that it could successfully fulfill its mission: Sacrament delivery. Such a move arguably guaranteed the worldwide flourishing, five centuries later, of an institution of more than a billion people.

A Material Divinity

Although the Church emphasises an existing unity between mind and body, it is remarkable that in no current authoritative document of the Magisterium (e.g., Canon Law, Catechism, Vatican Council II, etc.) is the “human” inextricably linked with a determinate corporeal feature of the species Homo sapiens. Namely, although both are profoundly united, one does not depend on the other. In fact, the soul/spirit comes directly from God. What defines us as humans has less to do with the body and its features and more to do with the mind, spirit and will.

Once persons begin to radically and ubiquitously change their physical existences, the Church will have to be prepared to extend the notion of humanity to these hybrids. Not only will these entities need salvation, but they will need to flourish in this life as self-realized individuals—something that according to Catholic doctrine is solidly helped by sacrament reception. Moreover, if widespread deep alteration of humanoid ‘biologies’ were to occur, the Church has a mandate of evangelization to them as well. This will likely encourage apostolic agents to become familiarized with these novel ways of corporeal existence in order to better understand them—even to embrace them, in order to further turn them into vehicles of evangelization themselves.

We have a plethora of historical examples in related contexts, from the Jesuit grammatization of the Inka language to Marshall McLuhan’s prophetic expertise in human communications—having influenced the Second Vatican Council’s Inter Mirifica document on the topic. Indeed, “morphological freedom” (the right and ability to alter our physical existence) might become for the Church what philosophy of communication became for McLuhan.

Thus, chances are that the Church will need to embrace a certain instantiation of a transhuman future, given that the institution will have to cope with a radically changed receptacle of the grace-granting devices – the Sacraments. Indeed, this shall be done in order to be consistent with the reason for its very existence as mandated by Christ: guaranteeing the constant flow of these efficacious means which collaborate towards both a fulfilled existence in this life and salvation in the next one. Steve Fuller foresees a possible scenario that may indeed become just such a transhuman ‘instantiation’ favoured by the Church:

A re-specification of the “human” to be substrate-neutral (that is to say, a “human” need not be the descendant of another member of Homo sapiens but rather could be a status conferred on any suitably qualified entity, as might be administered by a citizenship test or even a Turing Test).

Judging from its track record, the Church will problematically but ultimately successfully rise to the challenge. A substrate-neutral re-specification of the human may indeed be the route taken by the Church—perhaps after a justifiably called Concilium.

An homage to a legendary series of portraits by Francis Bacon.
Image by Phineas Jones via Flickr / Creative Commons

Examining the Sacraments

The challenge will be variously instantiated in correlation with the sacraments to be delivered. However, all seven of them share one feature that will be problematized with the implementation of transhumanist technologies: Sacraments perform metaphysically what they do physically. Their efficacy in the spiritual world is mirrored by the material function performed in this one (e.g., the pouring of water in baptism). Since our bodies may change at a fundamental level, maintaining the efficacy of sacraments, which need physical substrata to work, will be the common problem. Let us see how this problem may variously incarnate.

Baptism. As the current notion of humanity stands (“an entity created in the image and likeness of God”), not much would have to change in order to extend it to an altered entity claiming to maintain, or asking to receive, human status. A deep alteration of our bodies constitutes no fundamental reason for not participating in the realm of the “human” and thus entering the Catholic Church by means of Baptism: the obliteration of the legacy of Original Sin with which humans are born—whether by natural means, cloned or harvested (a similar reasoning could be roughly applied to Confirmation). Holy water can be poured on flesh, metal or a new alloy constituting someone’s forehead. As indicated above, the Church does not mention “flesh” as a sine qua non condition for humanity to obtain.

On the other hand, there is a scenario, more post-human than transhuman in nature, that may emerge as a side effect of the attempts to ameliorate the human condition: Good Old-Fashioned Artificial Intelligence. If entities that share none of the features (bodily, historical, cognitive, biological) we usually associate with humanity begin to claim human status on account of displaying both rationality and autonomy, then the Church may have to go through one of its most profound “aggiornamentos” in two millennia of operation.

Individual tests administered by local bishops on a case-by-case basis (after a fundamental directive coming from the Holy See) would likely have to be put in place – tests which would aim to assess, for instance, the sincerity of the entity’s prayer. The persistent witnessing of an ongoing metanoia (conversion) is a canonical signature of divine presence in an individual. A consistent life of self-giving and spiritual warfare could be the accepted signs required for this entity to be declared a child of God, equal to the rest of us, granting its entrance into the Church with all the entailing perks (i.e. the full array of sacraments).

There is a caveat that is less problematic for Catholic doctrine than for modern society: sex assignation. Just as the ‘natural machinery’ already comes with one, the artificial one could have it as well: male or female could happen also in silico. Failure to assign a sex would carry the issue to realms not dissimilar to current disputes over “sex reassignment” and its proper recognition by society: it might be a problem, but it would not be a new problem. The same reasoning would apply to “post-gender” approaches to transhumanism.

Confession. Given that the sacrament of Reconciliation must be performed, literally, vis-à-vis, what if environmental catastrophes reduce our physical mobility so that we can no longer face a priest? Will telepresence be accepted by the Church? Will the Church establish strict protocols of encryption? After all, it is an actual confession that we are talking about: only a priest can hear it—and only the Pope, in special cases, can hear it from him.

Breaking the confessional seal entails excommunicatio ipso facto. Moreover, regarding a scenario which will likely occur within our lifetimes, what about those permanently sent into space? How will they receive this sacrament? Finally, even if the Church permanently bans the possibility of going to confession within a virtual environment, what would happen if people eventually inhabit physical avatars? Would that count as being physically next to a priest?

Communion. The most important of all sacraments, the Eucharist, will not be void of issues either. The Latin Rite of the Catholic Church (the portion of Catholics who are properly ‘Roman’) mandates that only unleavened bread shall be used as the physical substratum, so that it later transubstantiates into the body of Christ. The Church is particularly strict in this, as evinced in cases where alternative breads have been used (e.g., when stranded for years on a deserted island); the Church has not recognized those events as properly Eucharistic: the sacrament never took place on such occasions.

Nevertheless, we will have to confront situations where the actual bread could not be sent to remote locations of future human dwelling (e.g., Mars), nor a priest be present to perform the said metaphysical swapping. Facing this, would nanotechnology provide the solution? Would something coming out of a 3D printer or a future “molecular assembler” qualify as the actual unleavened bread?

Marriage. This sacrament will likely confront two main challenges: one fundamentally novel in nature, and a second that extends already occurring issues. Regarding the latter, let us take into consideration a particular thread in certain transhumanist circles: the pursuit of indefinite life extension. It is understood that once people become healthier for longer (or stop aging), the creation of new life via offspring may become an afterthought. Canon Law clearly stipulates that those who consciously made a decision not to procreate cannot enter this sacrament. In that sense, a childless society would be constituted by sacramentally unmarried people. Once again, this issue is a variation on already occurring scenarios—which could be extended, for that matter, to sex-reassigned people.

The former challenge mentioned would be unprecedented. Would the Church marry a human and a machine? Bear in mind that this question is fundamentally different from the already occurring question regarding the Church refusing to marry humans and non-human animals. The difference is based upon the lack of autonomy and rationality shown by the latter. However, machines could one day show both (admittedly Kantian) human-defining features. The Church may find in principle no obstacle to marry a human “1.0” and a human “2.0” (or even a human and an artificial human—AI), provided that the humanity of the new lifeforms, following the guidelines established by the requirements for Baptism, is well established.

Holy Orders. As with Marriage, this sacrament will likely face a twist both on an already occurring scenario and on a fairly new one. On the one hand, the physical requirement that a bishop actually place his hands on someone’s head to ordain him a priest has carried problematic cases for the Church (e.g., during missions where bishops were not available). With rare exceptions, this requirement has always been observed. A possible counter-case is the ordination of Stylite monks between the 3rd and 6th centuries. These hermits vowed not to come down from their solitary pillars until death.

Reportedly, bishops sometimes ordained them via an “action at a distance” of sorts—but still from merely a few meters away. The Church will have to establish whether ordaining someone via telepresence (or while inhabiting an avatar) would count as sacramentally valid. On the other hand, the current requirement that a candidate for priesthood have all his limbs—particularly his hands—up until the moment of ordination might be softened. At the moment when a prosthetic limb not only seamlessly becomes an extension of the individual, but a better functional extension of him, the Church may reconsider this pre-ordination requirement.

Extreme Unction. The Last Rites will likely confront two challenges in a transhuman world. One would not properly constitute a problem for their delivery, but rather a questioning of the point of their existence. The other will entail a possible redefinition of what is considered to be ‘dead’. As regards the consequences of indefinite life extension, this sacrament may come to be considered by Catholics what Protestants consider the sacraments (and hence the Church) to be: of no use. Perhaps the sacrament would stay put for those who choose to end their lives “naturally” (in itself a problem for transhumanists: what to do with those who do not want to get “enhanced”?). Or perhaps the Church will simply ban this particular transhumanist choice of life for Catholics, period—as much as it now forbids euthanasia and abortion. The science fiction series Altered Carbon portrays a future where such is the case.

On the other hand, the prospect of mind uploading may push the Church to redefine the notion of what it means to leave this body, given that such an experience may not necessarily entail death. If having consciousness inside a super-computer is defined as being alive—which as seen above may in principle be accepted by the Church—then the delivery of the sacrament would have to be performed without physicality, perhaps via a link between the software-giver and the software-receiver. This could even open up possibilities for sacrament delivery to remote locations.

The Future of Humanity’s Oldest Institution

As we can see, the Church may have not just to tolerate, but actually to embrace, the transhumanist impulses slowly but steadily pushed by science and technology into the underpinnings of the human ethos. This attitude shall emerge from two main sources: on the one hand, a fundamental option towards the development of human dignity—which by default would associate the Church more with a transhumanist philosophy than with a post-human one.

On the other, a fundamental concern for the continued fulfillment of its own mission and reason for existence—the delivery of sacraments to a radically altered human recipient. As a possible counterpoint, it has been surmised that Pope Francis is one of the strongest current advocates for a precautionary stance—a position traditionally associated with post-human leanings. The Pontiff’s Laudato Si’ encyclical on the environment certainly seems to point in this direction. That may be part of a—so far seemingly successful—strategy put in place by the Church for decades to come, whose reasons escape the scope of this piece. However, as shown above, the Church, given its own history, philosophy, and prime mandate, has all the right reasons to embrace a transhuman future—curated the Catholic way, that is.

Contact details: alci.malapi@outlook.com

References

Fuller, Steve. “Ninety Degree Revolution.” Aeon Magazine. 20 October 2013. Retrieved from https://aeon.co/essays/left-and-right-are-over-the-future-is-up-and-down.

Fuller, Steve. “Which Way Is Up for the Human Condition?” ABC Religion and Ethics. 26 August 2015. Retrieved from http://www.abc.net.au/religion/articles/2015/08/26/4300331.htm.

Fuller, Steve. “Beyond Good and Evil: The Challenges of Trans- and Post-Humanism.” ABC Religion and Ethics. 20 December 2016. Retrieved from http://www.abc.net.au/religion/articles/2016/12/20/4595400.htm.

Author Information: Steve Fuller, University of Warwick, UK, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “Against Virtue and For Modernity: Rebooting the Modern Left.” Social Epistemology Review and Reply Collective 6, no. 12 (2017): 51-53.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3S9

Toby Ziegler’s “The Liberals: 3rd Version.” Photo by Matt via Flickr / Creative Commons


My holiday message for the coming year is a call to re-boot the modern left. When I was completing my doctoral studies, just as the Cold War was beginning to wind down, the main threat to the modern left was seen as coming largely from within. ‘Postmodernism’ was the name normally given to that threat, and it fuelled various culture, canon and science wars in the 1980s and 1990s.

Indeed, even I was – and, in some circles, continue to be – seen as just such an ‘enemy of reason’, to recall the name of Richard Dawkins’ television show in which I figured as one of the accused. However, in retrospect, postmodernism was at most a harbinger for a more serious threat, which today comes from both the ‘populist’ supporters of Trump, Brexit et al. and their equally self-righteous academic critics.

Academic commentators on Trump, Brexit and the other populist turns around the world seem unable to avoid passing moral judgement on the voters who brought about these uniformly unexpected outcomes, the vast majority of which the commentators have found unwelcome. In this context, an unholy alliance of virtue theorists and evolutionary psychologists has thrived as diagnosticians of our predicament. I say ‘unholy’ because Aristotle and Darwin suddenly find themselves on the same side of an argument, now pitched against the minds of ‘ordinary’ people. This anti-democratic place is not one in which any self-respecting modern leftist wishes to be.

To be sure, virtue theorists and evolutionary psychologists come to the matter from rather different premises – the one metaphysical if not religious and the other naturalistic if not atheistic. Nevertheless, they both regard humanity’s prospects as fundamentally constrained by our mental makeup. This makeup reflects our collective past and may even be rooted in our animal nature. Under the circumstances, so they believe, the best we can hope is to become self-conscious of our biases and limitations in processing information so that we don’t fall prey to the base political appeals that have resulted in the current wave of populism.

These diagnosticians conspicuously offer little of the positive vision or ambition that characterised ‘progressive’ politics of both liberal and socialist persuasions in the nineteenth and twentieth centuries. But truth be told, these learned pessimists already have form. They are best seen as the culmination of a current of thought that has been percolating since the end of the Cold War effectively brought to a halt Marxism as a world-historic project of human emancipation.

In this context, the relatively upbeat message advanced by Francis Fukuyama in The End of History and the Last Man that captivated much of the 1990s was premature. Fukuyama was cautiously celebrating the triumph of liberalism over socialism in the progressivist sweepstakes. But others were plotting a different course, one in which the very terms on which the Cold War had been fought would be superseded altogether. Gone would be the days when liberals and socialists vied over who could design a political economy that would benefit the most people worldwide. In its place would be a much more precarious sense of the world order, in which overweening ambition itself turned out to be humanity’s Achilles’ heel, if not Original Sin.

Here the trail of books published by Alasdair MacIntyre and his philosophical and theological admirers in the wake of After Virtue ploughed a parallel field to such avowedly secular and scientifically minded works as Peter Singer’s A Darwinian Left and Steven Pinker’s The Blank Slate. These two intellectual streams, both pointing to our species’ inveterate shortcomings, gained increasing plausibility in light of 9/11’s blindsiding of the post-Cold War neo-liberal consensus.

9/11 tore up the Cold War playbook once and for all, side-lining both the liberals and the socialists who had depended on it. Gone was the state-based politics, the strategy of mutual containment, the agreed fields of play epitomized in such phrases as ‘arms race’ and ‘space race’. In short, gone was the game-theoretic rationality of managed global conflict. Thus began the ongoing war on ‘Islamic terror’. Against this backdrop, the Iraq War proved to be colossally ill-judged, though no surprise given that its mastermind was one of the Cold War’s keenest understudies, Donald Rumsfeld.

For the virtue theorists and evolutionary psychologists, the Cold War represented how far human rationality could go in pushing back and channelling our default irrationality, albeit in the hope of lifting humanity to a ‘higher’ level of being. Indeed, once the USSR lost the Cold War to the US on largely financial grounds, the victorious Americans had to contend with the ‘blowback’ from third parties who suffered ‘collateral damage’ at many different levels during the Cold War. After all, the Cold War, for all its success in averting nuclear confrontation, nevertheless turned the world into a playing field for elite powers. ‘First world’, ‘second world’ and ‘third world’ were basically the names of the various teams in contention on the Cold War’s global playing field.

So today we see an ideological struggle whose main players are those resentful (i.e. the ‘populists’) and those regretful (i.e. the ‘anti-populists’) of the entire Cold War dynamic. The only thing that these antagonists appear to agree on is the folly of ‘progressivist’ politics, the calling card of both modern liberalism and socialism. Indeed, both the populists and their critics are fairly characterised as somehow wanting to turn back the clock to a time when we were in closer contact with the proverbial ‘ground of being’, which of course the two sides define in rather different terms. But make no mistake about the underlying metaphysical premise: We are ultimately where we came from.

Notwithstanding the errors of thought and deed committed in their names, liberalism and socialism rightly denied this premise, which placed both of them in the vanguard – and eventually made them world-historic rivals – in modernist politics. Modernity raised humanity’s self-regard and expectations to levels that motivated people to build a literal Heaven on Earth, in which technology would replace theology as the master science of our being. David Noble cast a characteristically informed but jaundiced eye at this proposition in his 1997 book, The Religion of Technology: The Divinity of Man and the Spirit of Invention. Interestingly, John Passmore had covered much the same terrain just as eruditely but with greater equanimity in his 1970 book, The Perfectibility of Man. That the one was written after and the other during the Cold War is probably no accident.

I am mainly interested in resurrecting the modernist project in its spirit, not its letter. Many of modernity’s original terms of engagement are clearly no longer tenable. But I do believe that Silicon Valley is comparable to Manchester two centuries ago, namely, a crucible of a radical liberal sensibility – call it ‘Liberalism 2.0’ or simply ‘Alt-Liberalism’ – that tries to use the ascendant technological wave to leverage a new conception of the human being.

However one judges Marx’s critique of liberalism’s scientific expression (aka classical political economy), the bottom line is that his arguments for socialism would never have got off the ground had liberalism not laid the groundwork for him. As we enter 2018 and seek guidance for launching a new progressivism, we would do well to keep this historical precedent in mind.

Contact details: S.W.Fuller@warwick.ac.uk

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “Veritism as Fake Philosophy: Reply to Baker and Oreskes.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 47-51.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3M3

Image credit: elycefeliz, via flickr

John Stuart Mill and Karl Popper would be surprised to learn from Baker and Oreskes (2017) that freedom is a ‘non-cognitive’ value. Insofar as freedom—both the freedom to assert and the freedom to deny—is a necessary feature of any genuine process of inquiry, one might have thought that it was one of the foundational values of knowledge. But of course, Baker and Oreskes are using ‘cognitive’ in a more technical sense, one introduced by the logical positivists that remains largely intact in contemporary analytic epistemology and philosophy of science. It was also prevalent in post-war history and sociology of science prior to the rise of STS. This conception of the ‘cognitive’ trades on a clear distinction between what lies ‘inside’ and ‘outside’ a conceptual framework—in this case, the conceptual framework of science. But there’s a sting in the tail.

An Epistemic Game

Baker and Oreskes don’t seem to realize that this very conception of the ‘cognitive’ is in the post-truth mould that I defend. After all, for the positivists, ‘truth’ is a second order concept that lacks any determinate meaning except relative to the language in terms of which knowledge claims can be expressed. It was in this spirit that Rudolf Carnap thought that Thomas Kuhn’s ‘paradigm’ had put pragmatic flesh on the positivists’ logical bones (Reisch 1991). (It is worth emphasizing that Carnap passed this judgement before Kuhn’s fans turned him into the torchbearer for ‘post-positivist’ philosophy of science.) At the same time, this orientation led the positivists to promote—and try to construct—a universal language of science into which all knowledge claims could be translated and evaluated.

All of this shows that the positivists weren’t ‘veritists’ because, unlike Baker and Oreskes, they didn’t presuppose the existence of some univocal understanding of truth that all sincere inquirers will ultimately reach. Rather, truth is just a general property of the language that one decides to use—or the game one decides to play. In that case ‘truth’ corresponds to satisfying ‘truth conditions’ as specified by the rules of a given language, just as ‘goal’ corresponds to satisfying the rules of play in a given game.

To be sure, the positivists complicated matters because they also took seriously that science aspires to command universal assent for its knowledge claims, in which case science’s language needs to be set up in a way that enables everyone to transact their knowledge claims inside it; hence, the need to ‘reduce’ such claims to their calculable and measurable components. This effectively put the positivists in partial opposition to all the existing sciences of their day, each with its own parochial framework governed by the rules of its distinctive language game. The need to overcome this tendency explains the project of an ‘International Encyclopedia of Unified Science’.

In short, logical positivism was about designing an epistemic game—which they called ‘science’—that anyone could play and potentially win.

Given some of the things that Baker and Oreskes impute to me, they may be surprised to learn that I actually think that the logical positivists—as well as Mill and Popper—were on the right track. Indeed, I have always believed this. But these views have nothing to do with ‘veritism’, which I continue to put in scare quotes because, in the spirit of our times, it’s a bit of ‘fake philosophy’. It may work to shore up philosophical authority in public but fails to capture the conflicting definitions and criteria that philosophers themselves have offered not only for ‘truth’ but also for such related terms as ‘evidence’ and ‘validation’. All of these key epistemological terms are essentially contested concepts within philosophy. It is not simply that philosophers disagree on what is, say, ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’. (I summarize the issue here.)

Philosophical Fakeness

Richard Rorty became such a hate figure among analytic philosophers because he called out the ‘veritists’ on their fakeness. Yes, philosophers can tell you what truth is, but just as long as you accept a lot of contentious assumptions—and hope that those capable of contesting those assumptions aren’t in the room when you’re speaking! Put another way, Rorty refused to adopt a ‘double truth’ doctrine for philosophy, whereby amongst themselves philosophers adopt a semi-detached attitude towards various conflicting conceptions of truth while at the same time presenting a united front to non-philosophers, lest these masses start to believe some disreputable things.

The philosophical ‘fakeness’ of veritism is exemplified in the following sentence, which appears in Baker and Oreskes’ (2017, 69) latest response:

On the contrary, truth (along with evidence, facts, and other words science studies scholars tend to relegate to scare quotes) is a far more plausible choice for one of a potential plurality of regulative ideals for an enterprise that, after all, does have an obviously cognitive function.

The sentence prima facie commits the category mistake of presuming that ‘truth’ is one more—albeit preferred—possible regulative ideal of science alongside, say, instrumental effectiveness, cultural appropriateness, etc. However, ‘truth’ in the logical positivist sense is a feature of all regulative ideals of science, each of which should be understood as specifying a language game that is governed by its own validation procedures—the rules of the game, if you will—in terms of which one theory is determined (or ‘verified’) to be, say, more effective than another or more appropriate than another.

Notice I said ‘prima facie.’ My guess is that when Baker and Oreskes say ‘truth’ is a regulative ideal of science, they are simply referring to a social arrangement whereby the self-organizing scientific community is the final arbiter on all knowledge claims accepted by society at large. As they point out, the scientific community can get things wrong—but things become wrong only when the scientific community says so, and they become fixed only when the scientific community says so. In short, under the guise of ‘truth’, Baker and Oreskes are advocating what I have called ‘cognitive authoritarianism’ (Fuller 1988, chapter 12).

Before ending with a brief discussion of what I think may be true about ‘veritism’, I cannot help but notice the moralism associated with Baker and Oreskes’ invocation of ‘truth’. This carries over to such other pseudo-epistemic concepts as ‘trust’ and ‘reliability’, which are seen as marks of the scientific character, whereby ‘scientific’ attaches both to a body of knowledge and to the people who produce that knowledge. I say ‘pseudo’ because there is no agreed measure of these qualities.

Regarding Trust

‘Trust’ is a quality whose presence is felt mainly as a double absence, namely, a studied refusal to examine knowledge claims for oneself, which is subsequently judged to have had non-negative consequences. (I have called trust a ‘phlogistemic’ concept for this reason, as it resembles the pseudo-element phlogiston; Fuller 1996.) Indeed, in opposition to this general sensibility, I have gone so far as to argue that universities should be in the business of ‘epistemic trust-busting’. Here is my original assertion:

In short, universities function as knowledge trust-busters whose own corporate capacities of “creative destruction” prevent new knowledge from turning into intellectual property (Fuller 2002, 47; italics in original).

By ‘corporate capacities’, I meant the various means at the university’s disposal to ensure that the people in a position to take forward new knowledge are not simply part of the class of those who created it in the first place. More concretely, of course I have in mind ordinary teaching that aims to express even the most sophisticated concepts in terms ordinary students can understand and use. But also I mean to include ‘affirmative action’ policies that are specifically designed to incorporate a broader range of people than might otherwise attend the university. Taken together, these counteract the ‘neo-feudalism’ to which academic knowledge production is prone—‘rent-seeking’, if you will—which Baker and Oreskes appear unable to recognize.

As for ‘reliability’, it is a term whose meaning depends on specifying the conditions—say, in the design of an experiment—under which a pattern of behaviour is expected to occur. Outside of such tightly defined conditions, which is where most ‘scientific controversies’ happen, it is not clear how cases should be classified and counted, and hence what ‘reliable’ means. Indeed, STS has not only drawn attention to this fact but it has gone further—say, in the work of Harry Collins—to question whether even lab-based reliability is possible without some sort of collusion between researchers. In other words, the social accomplishment of ‘reliable knowledge’ is at least partly an expression of solidarity among members of the scientific community—a closing of the ranks, to put it less charitably.

An especially good example of the foregoing is what has been dubbed ‘Climategate’, which involved the release of e-mails from the UK’s main climate science research group in response to a journalist’s Freedom of Information request. While no wrongdoing was formally established, the e-mails did reveal the extent to which scientists from across the world effectively conspired to present the data for climate change in ways that obscured interpretive ambiguities, thereby pre-empting possible appropriations by so-called ‘climate change sceptics’. To be sure, from the symmetrical normative stance of classic STS, Climategate simply reveals the micro-processes by which a scientific consensus is normally and literally ‘manufactured’. Nevertheless, I doubt that Baker and Oreskes would turn to Climategate as their paradigm case of a ‘scientific consensus’. But why not?

The reason is that they refuse to acknowledge the labour that is involved in securing collective assent over any significant knowledge claim. As I observed in my original response (2017) to Baker and Oreskes, one might be forgiven for concluding from reading the likes of Merton, Habermas and others who see consensus formation as essential to science that an analogue of the ‘invisible hand’ is at play. On their telling, informed people draw the same conclusions from the same evidence. The actual social interaction of the scientists carries little cognitive weight in its own right. Instead it simply reinforces what any rational individual is capable of inferring for him- or herself in the same situation. At most, other people provide additional data points but they don’t alter the rules of right reasoning. Ironically, considering Baker and Oreskes’ allergic reaction to any talk of science as a market, this image of Homo scientificus to which they attach themselves seems rather like what they don’t like about Homo oeconomicus.

Climbing the Mountain

The contrasting view of consensus formation, which I uphold, is more explicitly ‘rhetorical’. It appeals to a mix of strategic and epistemic considerations in a setting where the actual interaction between the parties sets the parameters that define the scope of any possible consensus. Although Kuhn also valorized consensus as the glue that holds together normal science puzzle-solving, to his credit he clearly saw its rhetorical and even coercive character, from pedagogy to peer review. For this reason, Kuhn is the one whom STSers still usually cite as a precursor on this matter. Unlike Baker and Oreskes, he didn’t resort to the fake philosophy of ‘veritism’ to cover up the fact that truth is ultimately a social achievement.

Finally, I suggested that there may be a way of redeeming ‘veritism’ from its current status of fake philosophy. Just because ‘truth’ is what W.B. Gallie originally called an ‘essentially contested concept’, it doesn’t follow that it is a mere chimera. But how to resolve truth’s palpable diversity of conceptions into a unified vision of reality? The clue to redemption is provided by Charles Sanders Peirce, whose idea of truth as the final scientific consensus informs Baker and Oreskes’ normative orientation. Peirce equated truth with the ultimate theory of everything, which amounts to putting everything in its place, thereby resolving all the internal disagreements of perception and understanding that are a normal feature of any active inquiry. It’s the moment when the blind men in the Hindu adage discover the elephant they’ve been groping and (Popper’s metaphor) the climbers coming from different directions reach the same mountain top.[1]

Peirce’s vision was informed by his understanding of John Duns Scotus, the early fourteenth-century scholastic who provided a deep metaphysical understanding of Augustine’s Platonic reading of the Biblical Fall of humanity. Our ‘fallen’ state consists in the dismemberment of our divine nature, something that is regularly on display in the variability of humans with regard to the virtues, all of which God displays to their greatest extent. For example, the most knowledgeable humans are not necessarily the most benevolent. The journey back to God is basically one of putting these pieces—the virtues—back together again into a coherent whole.

At the level of organized inquiry, we find a similar fragmentation of effort, as the language game of each science exaggerates certain modes of access to reality at the expense of others. To be sure, Kuhn and STS accept, if not outright valorise, disciplinary specialisation as a mark of the increasing ‘complexification’ of the knowledge system. Not surprisingly, perhaps, they also downplay the significance of the sort of capital-‘T’ ‘truth’ that Baker and Oreskes valorise. One obvious solution would be for defenders of ‘veritism’ to embrace an updated version of the ‘unified science’ project championed by the logical positivists, which aimed to integrate all forms of knowledge in terms of some common currency of intellectual exchange. (My earlier comments against ‘neo-feudal’ tendencies in academia should be seen in this light.) This would be the analogue of the original theological project of humanity reconstituting its divine nature, which Peirce secularised as the consensus theory of truth. Further considerations along these lines may be found here.

References

Baker, Erik and Naomi Oreskes. “Science as a Game, Marketplace or Both: A Reply to Steve Fuller.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 65-69.

Fuller, Steve. Social Epistemology. Bloomington, IN: Indiana University Press, 1988.

Fuller, Steve. “Recent Work in Social Epistemology.” American Philosophical Quarterly 33 (1996): 149-66.

Fuller, Steve. Knowledge Management Foundations. Woburn, MA: Butterworth-Heinemann, 2002.

Fuller, Steve. “What are You Playing At? On the Use and Abuse of Games in STS.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 39-49.

Reisch, George A. “Did Kuhn Kill Logical Positivism?” Philosophy of Science 58, no. 2 (1991): 264-277.

[1] One might also add the French word for ‘groping’, tâtonnement, common to Turgot’s and Walras’ understanding of how ‘general equilibrium’ is reached in the economy, as well as Teilhard de Chardin’s conception of how God comes to be fully realized in the cosmos.

Author Information: Erik Baker and Naomi Oreskes, Harvard University, ebaker@g.harvard.edu, oreskes@fas.harvard.edu

Baker, Erik and Naomi Oreskes. “Science as a Game, Marketplace or Both: A Reply to Steve Fuller.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 65-69.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Ks

Image credit: United Nations Photo, via flickr

Steve Fuller’s response to our criticism of the “game” analogy in science studies comes at an opportune time.[1] One of us has recently published an exhaustive review of decades of ExxonMobil’s climate change communications, finding that while the vast majority of the oil company’s internal documents acknowledged the reality of anthropogenic climate change, only a vanishingly small minority of its public-facing statements expressed the same position, instead sowing doubt about the very scientific consensus its own in-house scientists overwhelmingly accepted.[2] This case study provides a helpful illustration of why we continue to defend our initial position, despite criticism from Fuller in two principal areas: truth and consensus, and political economy.

Truth and Consensus

Fuller describes our veritism (our insistence on talking about truth outside of scare quotes) as “gratuitous.” This complaint is hardly novel, and was expressed perhaps most influentially by Richard Rorty.[3] The basic idea, in all of its guises, is that talk of truth furnishes philosophers and social scholars of science with no additional explanatory power. “Truth” is instead a pointless metaphysical tack-on to an otherwise robust descriptive enterprise.

ExxonMobil’s sordid climate history provides a compelling counterexample to this assertion. Any answer to the question of why ExxonMobil continued to accept internally the same scientific claims it was disputing publicly (and that it had an obvious incentive to dispute) that does not invoke truth—or at least such related notions as evidence and empirical adequacy—will be convoluted and tendentious. The best explanation of this fact is simply that the scientific consensus on climate change is largely correct, which is to say true.[4] It was in ExxonMobil’s interest both to understand the truth and to deny it publicly. If, as Fuller maintains, truth-seeking is wholly extraneous to the scientific enterprise, it is almost impossible to understand why ExxonMobil’s own scientists would perform research and publish papers antithetical to the company’s political and financial interests.

Veritism also helps to explain two broader features of scientific consensus that Fuller emphasizes. First, its formation in a social process. Fuller thinks that he has caught us in a contradiction when he observes us talking about “building” consensus. Hardly. On the contrary, it is difficult to understand the (social) process of consensus-building in science without a sense of truth-seeking as a constitutive feature. If scientists did not orient themselves in relation to a commonly accessible physical and social world about which the truth can, at least to some degree, be known, why would they put so much effort into persuading their colleagues and trying to achieve consensus? Why would they even consider such a thing possible? Indeed, what would the project of science be?

Non-cognitive goals do not bear the same explanatory weight. As the history of climate change denial illustrates, taking consensus and consensus-formation seriously is not a prerequisite for scientists to attain fame and fortune (and even credibility, in some circles). For an example of the kinds of practices that result when communities do not regard truth-seeking as feasible in a given realm, one only has to consider the common American proscription of politics and religion as conversation topics at “mixed company” dinner parties.

Second, veritism helps to explain why scientific consensus occasionally comes undone. Fuller clearly believes that “the life expectancy of the theories around which scientists congregate at any given time” is quite low. (Here we wonder about the nature of this assertion: Does Fuller, perhaps, think it is true? If so, why is truth-seeking constitutive of certain social-scientific disciplines like STS, but not the natural sciences? One marvels at the conviction of some scholars in science studies that claims to speak “truth to power” are illegitimate unless they are the ones making them.) We think that the evidence is more equivocal.[5]

Yet even granting Fuller’s claims—and acknowledging that non-cognitive social forces can obstruct consensus formation or cause a consensus to come undone—it is hard to fathom why new evidence should ever cause a consensus to shift, and even harder to see how one could criticize an existing consensus, while banishing all talk of evidence, accuracy, correctness, and the notion that a conclusion can be shown to be true. Why would Earth scientists in the 1960s have bothered to re-open debate about continental drift? Fuller points out that evolutionary biologists have recently started to rethink some elements of the consensus around the twentieth-century modern synthesis, with some even calling for a new “extended evolutionary synthesis.” He clearly regards this development as salutary. But reference to evidence, facts, and truth—though often explicitly not intelligent design, it’s worth emphasizing—is at the core of the claims these scientists have made in promulgating their theories and winning over some support for them.[6] If Fuller is right about science in general, he must, on pain of contradiction, find these same scientists whose work he welcomes to be in the grip of a profound and disturbing delusion.

The force of these two considerations together is why we do not and have never (contrary to what Fuller implies) held up consensus as a definitional criterion of truth, but rather as one of many possible heuristics to guide rational assessment (especially among non-experts) of the state of the science on a particular issue.[7] Other such heuristics include the existence of multiple methodological or disciplinary lines of evidence for the same conclusion. Or interested parties internally accepting the same scientific claims they publicly claim to doubt.

We think that developing grounds for such external assessment is crucial precisely because, as historians, we are acutely aware of the perishability of truth claims. How should we understand scientific knowledge as a basis for action and decision-making in light of this perishability? If parents only put their own children at risk by eschewing vaccination; if there were credible scientific evidence that vaccinations did cause autism; or if climate change were reversible, we might argue that deciding about these matters should be left to individuals. But none of these ‘if’ conditions obtains. Intellectual positions that refuse to discriminate among these claims—or to discriminate only on social but not on cognitive grounds—put people at risk of real harm.

Do scientists have all the answers? Of course not. Should we have blind faith in science? Obviously not. Is the presence of expert consensus proof of truth? No again. But when scientists have come to agreement on a complicated matter like AIDS or evolution or climate change, it does indicate that they think that they have obtained some measure of truth about the issue, even if incomplete and subject to future revision. No climate scientist would claim that we know everything we could or should or might want to know about the climate system, but she would claim that we know enough to understand that if we don’t prevent further increases in atmospheric greenhouse gases, a lot of land will be lost and people will suffer. Consensus is a useful category of analysis because it tells us that scientific experts think that they have settled a matter, and that has to count for something. We are not arguing for a return to a naïve correspondence theory of truth—that would hardly be defensible given the past fifty years of work in philosophy of science—much less a naïve assumption that scientific experts are always right. But we are arguing for the need for a more vigorous re-inclusion of the cognitive dimensions of science in STS—including some notions of evidence, empirical adequacy, epistemic acceptability,[8] and truth without scare quotes.

Political Economy

The exigency of these considerations becomes even clearer in light of the concerns about economic and political power that we raised in our previous article. It is gratifying to see Fuller affirm the connection between the “game” view of science and neoliberal political economy for which we argued there. We hope that our colleagues who are sympathetic to Fuller’s epistemology but not his politics will attempt to identify where they think he has gone wrong in perceiving a relationship between the two.

Nonetheless, the case of ExxonMobil and climate change exemplifies the issue we take with Fuller’s assessment of the liberatory potential of the “free market thinkers” he extols. Fuller rejects the idea of justice-motivated market interventions (such as a carbon tax, as we emphasized in our previous article) as obscuring the “real price” and its mysterious “educative function,” and he thinks that our defense of the scientific consensus on climate change places us in thrall to the “status quo.” But it is Fuller’s supposedly alternative “normative agenda” that supports the status quo, offering in practice a defense of a multi-billion-dollar corporation, whose long-time CEO is now a cabinet member loyally serving one of the most reactionary presidents in United States history. This is precisely the bizarre situation that we described in our previous article: “STS, which often sees itself as championing the subaltern, has now in many cases become the intellectual defender of those who would crush the aspirations of ordinary people.”

Fuller characterizes our position as “neo-feudal” (whatever that might mean), but it strains credulity to think that his position, capable of mustering little more than an apathetic shrug in the face of—for instance—the manipulation of science by oil money, is really the one that stands up best to anti-democratic accretions of power. As we emphasized earlier, such inequalities—in income and wealth, and the political inequalities that subsequently ensue—are characteristic of capitalist economies,[9] and so it is perhaps unsurprising that the most loyal defenders of capitalism have not denied that fact but rather embraced and justified it. From Ludwig von Mises’ 1927 judgment that fascism was at one point a necessary evil to combat communism,[10] to the material and intellectual support of Wilhelm Röpke (the most influential of the “ordoliberals” that Fuller especially praises) for the South African apartheid regime,[11] to Robert Nozick’s influential right-libertarian condemnation of wealth redistribution and democracy alike in his Anarchy, State, and Utopia (1974),[12] to twenty-first-century attacks on democracy from Austrian economists at institutions like the Mercatus Center at George Mason University and the Ludwig von Mises Institute in Alabama,[13] the “freedom” that the neoliberals—and now Fuller—prize so dearly has typically meant the freedom of the few to oppress the many, or at least to place their needs and concerns above all others.

At least Fuller, with his modified ordoliberalism, seems to agree with us that some “normative agenda” must indeed be brought to bear in both economics and science. But two things are worth noting. First, what is such a normative agenda if not one of the “transcendent conceptions of truth and value” that Austrian wisdom is supposed to debunk? After all, the Bloorian analogy to which we initially drew attention was not just about “social constructivism” in general but specifically about Wittgenstein. And we read earlier in Fuller’s response his assessment of the Wittgensteinian “ordinary language” thinkers: they are “advertised as democratising but in practice they are parochialising.” Indeed. But with his later full-throated embrace of Bloor-cum-Mises, it looks awfully like he is trying to have his Wittgenstein and mock it too.

Second, it is odd to think that if a normative agenda is to be brought to bear on science, it ought to be of an utterly non-cognitive order, like neoliberal “freedom.” On the contrary, truth (along with evidence, facts, and other words science studies scholars tend to relegate to scare quotes) is a far more plausible choice for one of a potential plurality of regulative ideals for an enterprise that, after all, does have an obviously cognitive function. Ironically, Fuller’s insistence that freedom matters for science but truth does not reeks of the rigorous discrimination between the normative and the empirical that much of the best work in science studies has undermined. Both dimensions are necessary. Besides the issue of de facto alignment with status quo power, once more we see in Fuller’s response how the adoption of the “game” view vitiates the critiques of its proponents even on their own terms. Fuller, despite his obvious sympathies, still refuses to say unequivocally that mainstream scientists should surrender to the superior arguments of their intelligent design opponents. He instead rests assured that the invisible hand of a well-constructed scientific marketplace will eventually accomplish the shift in opinion he wishes to see.

We invite Fuller to join us in abandoning the game or marketplace view of science and talking openly about truth. He will find it possible to criticize the “Darwinists” much more vociferously that way. But, of course, he would then run the risk of actually being wrong, instead of merely incoherent.

[1] Erik Baker and Naomi Oreskes. “It’s No Game: Post-Truth and the Obligations of Science Studies.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 1-10; Steve Fuller, “What are You Playing At? On the Use and Abuse of Games in STS.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 39-49.

[2] Geoffrey Supran and Naomi Oreskes, “Assessing ExxonMobil’s climate change communications (1977–2014),” Environmental Research Letters 12, no. 8 (2017).

[3] Richard Rorty, Contingency, Irony, and Solidarity (Cambridge University Press, 1989).

[4] Or that it conforms to the real (objective) world, to once again employ Helen Longino’s account of truth in her The Fate of Knowledge (Princeton University Press, 2002).

[5] Naomi Oreskes, “Trust in Science?” Tanner Lecture on Human Values, Princeton University, November 30, 2016; Faye Flam, “Why Scientific Consensus Is Worth Taking Seriously,” Bloomberg, May 22, 2017.

[6] See for instance Massimo Pigliucci, Evolution: The Extended Synthesis (MIT Press, 2010).

[7] Naomi Oreskes, “Trust in Science?” Tanner Lecture on Human Values, Princeton University, November 30, 2016; Naomi Oreskes, “The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong?” in Joseph F. C. DiMento and Pamela Doughman, eds., Climate Change: What It Means for Us, Our Children, and Our Grandchildren (MIT Press, 2007), pp. 65-99. This is where we depart from some scholars associated with pragmatism and Habermas. Readers will note that, once more contrary to Fuller’s implication, these scholars comprise only one of the many diverse and sometimes internally disputatious traditions we cited as inspiration in our earlier article.

[8] As suggested by Longino, Fate of Knowledge, 2002.

[9] The now-canonical study on this question is Thomas Piketty, Capital in the Twenty-First Century (Harvard/Belknap, 2013).

[10] Since the passage is controversial, we provide it in full and let the reader judge for themselves: “It cannot be denied that fascism and all similar efforts at dictatorship are full of the best intentions and that their intervention has, for the moment, rescued European civilization. The merit that fascism has thereby acquired for itself will go on living in history eternally. But the political program that has brought salvation in this moment is not of the sort whose sustained maintenance could promise success. Fascism was a makeshift of the moment; to consider it anything more would be a disastrous mistake.” Ludwig von Mises, Liberalismus, 1927 (translation E.B.).

[11] Quinn Slobodian, “The World Economy and the Color Line: Wilhelm Röpke, Apartheid, and the White Atlantic,” GHI Bulletin Supplement 10 (2014).

[12] Robert Nozick, Anarchy, State, and Utopia (Basic Books, 1974), especially chapters 8 and 9. Nozick himself retreated somewhat on both positions later in his life (in his The Examined Life, Simon and Schuster, 1990, ch. 25), but current Mont Pelerin Society president Peter Boettke still preaches ASU as exemplary of the Austrian tradition (https://goo.gl/8nqqPo).

[13] See for instance Bryan Caplan, Myth of the Rational Voter (Princeton University Press, 2007); Hans-Hermann Hoppe, Democracy: The God That Failed (Ludwig von Mises Institute, 2001); for a secondary-source account see Nancy MacLean, Democracy in Chains (Viking, 2017).

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “How to Study: Roam, Record and Rehearse.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 62-64.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Kf

Image credit: Jeffrey Smith, via flickr

My most successful study skill is one that I picked up very early in life—and perhaps is difficult to adopt after a certain age. Evidence of its success is that virtually everything I read appears to be hyperlinked to something in my memory. In practice, this means that I can randomly pick up a book and within fifteen minutes I can say something interesting about it—that is, more than summarize its contents. In this way, I make the book ‘my own’ in the sense of assigning it a place in my cognitive repertoire, to which I can then refer in the future.

There are three features to this skill. One is sheer exposure to many books. Another is taking notes on them. A third is integrating the notes into one’s mode of being, so that they function as a script in search of a performance. In sum, I give you the new 3 Rs: Roam, Record and Rehearse.

Roam

Let’s start with Roam. I’ve always understood reading as the most efficient means to manufacture equipment for the conduct of life. It is clearly more efficient than acquiring personal experience. But that’s a relatively superficial take on the situation. A better way of putting it is that reading should be seen as itself a form of personal experience. In the first instance, this means taking seriously the practice of browsing. By ‘browsing’ I mean forcing yourself to encounter a broader range of possibilities than you imagined was necessary for your reading purposes.

Those under the age of twenty may not appreciate that people used to have to occupy a dedicated physical space—somewhere in a bookshop or a library—to engage in ‘browsing’. It was an activity which forced encounters with works both ‘relevant’ and ‘irrelevant’ to one’s interests. Ideally, at least in terms of one’s own personal intellectual development, browsing would challenge the neatness of this distinction, as one came across books that turned out to be more illuminating than expected. To be sure, ‘browsing’ via computerized search engines still allows for that element of serendipity, as anyone experienced with Google or Amazon will know. Nevertheless, browser designers normally treat such a feature as a flaw in the programme that should be remedied in the next iteration, so that you end up finding more items like the ones you previously searched for.

As a teenager in New York City in the 1970s I spent my Sunday afternoons browsing through the two biggest used bookshops in Greenwich Village, Strand and Barnes & Noble. Generally speaking, these bookshops were organized according to broad topics, somewhat like a library. However, certain sections were also organized according to book publishers, which was very illuminating. In this way, I learned, so to speak, ‘to judge a book by its cover’.  Publishing houses tend to have distinctive styles that attract specific sorts of authors. In this way, I was alerted to differences between ‘left’ and ‘right’ in politics, as well as ‘high’ and ‘low’ in culture. Taken together, these differences offer dimensions for mapping knowledge in ways that cut across academic disciplinary boundaries.

There is a more general lesson here: If you spend a lot of time browsing, you tend to distrust the standard ways in which books—or information, more generally—are categorized.

Record

Back in New York I would buy about five used books at a time and read them immediately, annotating the margins of the pages. However, I quickly realized that this was not an effective way of ‘making the books my own’. So I shifted to keeping notebooks, in which I quite deliberately filtered what I read into something I found meaningful and to which I could return later. Invariably this practice led me to acquire idiosyncratic memories of whatever I read, since I was basically rewriting the books I read for my own purposes.

In my university days, I learned to call what I was doing ‘strong reading’. And I continue it to this day. Thus, in my academic writing, when I make formal reference to other works, I am usually acknowledging an inspiration—not citing an authority—for whatever claim I happen to be making. My aim is to take personal responsibility for what I say. I dislike the academic tendency to obscure the author’s voice in a flurry of scholarly references which simply repeat connections that could be made by a fairly standard Google search of the topic under discussion.

Rehearse

Now let’s move from Record to Rehearse. In a sense, rehearsal already begins when you shift from writing marginalia to full-blown notebook entries insofar as the latter forces you to reinvent what it is that you originally found compelling in the noteworthy text. Admittedly the cut-and-paste function in today’s computerized word processing programmes can undermine this practice, resulting in ‘notes’ that look more like marginal comments.

However, I engage in rehearsal even with texts of which I am the original author. You can keep yourself in a rehearsal mode by working on several pieces of writing (or creative projects) at once without bringing any of them to completion. In particular, you should stop working just when you are about to reach a climax in your train of thought. The next time you resume work you will then be forced to recreate the process that led you to that climactic point. Often you will discover that the one conclusion toward which you thought you had been heading turns out to have been a mirage. In fact, your ‘climax’ opens up a new chapter with multiple possibilities ahead.

Assuaging Alienation

I realize that some people will instinctively resist what I just prescribed. It seems to imply that no work should ever end, which is a nightmare for anyone who needs to produce something to a specific schedule in order to earn a living! And of course, I myself have authored more than twenty books. However, to my mind these works always end arbitrarily and even abruptly. (And my critics notice this!) Nevertheless, precisely because I do not see them as ‘finished’, they continue to live in my own mind as something to which I can always return. They become part of the repertoire that I always rehearse, which in turn defines the sort of person I am.

Perhaps a good way to see what I am recommending is as a solution to the problem of ‘alienation’ which Karl Marx famously identified. Alienation arises because industrial workers in capitalist regimes have no control over the products of their labour. Once the work is done, it is sold to people with whom they have no contact and over whom they have no control. However, alienation extends to intellectual life as well, as both journalists and academics need to write quite specific self-contained pieces that are targeted at clearly defined audiences. Under the circumstances, there is a tendency for authors to write in a way that enables them to detach themselves from, if not outright forget, what they have written once it is published. Often this tendency is positively spun by saying that a piece of writing makes its point better than its author could ever do in person.

My own view is quite the opposite. You should treat the texts you write more like dramatic scripts or musical scores than like artworks. They should be designed to be performed in many different ways, not least by the original composer. There should always be an element of incompleteness that requires someone to bring the text alive. In short, it should always be in need of rehearsal. Taken together, Roam, Record and Rehearse has been a life strategy which has enabled me to integrate a wide range of influences into a dynamic source of inspiration and creativity that I understand to be very much my own.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “What are You Playing At? On the Use and Abuse of Games in STS.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 39-49.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3JC

Image credit: PGuiri, via flickr

What follows is an omnibus reply to various pieces that have been recently written in response to Fuller (2017), where I endorsed the post-truth idea of science as a game—an idea that I take to have been a core tenet of science and technology studies (STS) from its inception. The article is organized along conceptual lines, taking on Phillips (2017), Sismondo (2017) and Baker and Oreskes (2017) in roughly that order, which in turn corresponds to the degree of sympathy (from more to less) that the authors have with my thesis.

What It Means to Take Games Seriously

Amanda Phillips (2017) has written a piece that attempts to engage with the issues I raised when I encouraged STS to own the post-truth condition, which I take to imply that science in some deep sense is a ‘game’. What she writes is interesting but a bit odd, since in the end she basically proposes STS’s current modus operandi as if it were a new idea. But we’ve already seen Phillips’ future, and it doesn’t work. And she’s far from alone, as we shall see.

On the game metaphor itself, some things need to be said. First of all, I take it that Phillips largely agrees with me that the game metaphor is appropriate to science as it is actually conducted. Her disagreement is mainly with my apparent recommendation that STS follow suit. She raises the example of the introduction of the mortar kick into US football, which stays within the rules but threatens player safety. This leads her to conclude that the mortar kick debases or jeopardizes the spirit of the game. I may well agree with her on this point, which she wishes to present as akin to a normative stance appropriate to STS. However, I cannot tell for sure, just given the evidence she provides. I’d also like to see whether she would have disallowed past innovations that changed the play of the game—and, if so, which ones. In other words, I need a clearer sense of what she takes to be the ‘spirit of the game’, which involves inter alia judgements about tolerable risks over a period of time.

To be sure, judicial decisions normally have this character. Sometimes judges issue ‘landmark decisions’ which may invalidate previous judges’ rulings but, in any case, set a precedent on the basis of which future decisions should be made. Bringing it back to the case at hand, Phillips might say that football has been violating its spirit for a long time and that not only should the mortar kick be prohibited but so too some earlier innovations. (In US Constitutional law, this would be like the history of judicial interpretation of citizen rights following the passage of the Fourteenth Amendment, at least starting with Brown v. Board of Education.) Of course, Phillips might instead give a more limited ruling that simply claims that the mortar kick is a step too far in the evolution of the game, which so far has stayed within its spirit. Or, she might simply judge the mortar kick to be within the spirit of the game, full stop. The arguments used to justify any of these decisions would be an exercise in elucidating what the ‘spirit of the game’ means.

I do not wish to be persnickety but to raise a point about what it means to think about science as a game. It means, at the very least, that science is prima facie an autonomous activity in the sense of having clear boundaries. Just as one knows when one is playing or not playing football, one knows when one is or is not doing science. Of course, the impact that has on the rest of society is an open question. For example, once dedicated schools and degree programmes were developed to train people in ‘science’ (and here I mean the term in its academically broadest sense, Wissenschaft), especially once they acquired the backing and funding of nation-states, science became the source of ultimate epistemic authority in virtually all policy arenas. This only began to happen in earnest in the second half of the nineteenth century.

Similarly, one could imagine a future history of football, perhaps inspired by the modern Olympics, in which larger political units acquire an interest in developing the game as a way of resolving their own standing problems that might otherwise be handled with violence, sometimes on a mass scale. In effect, the Olympics would be a regularly scheduled, sublimated version of a world war. In that possible world, football—as one of the represented sports—would come to perform the functions for which armed conflict is now used. Here sports might take inspiration from the various science ‘races’ through which the Cold War was conducted. The race to the Moon, notably, was a highly successful version of this strategy in real life, as it did manage to avert a global nuclear war. Its intellectual residue is something that we still call ‘game theory’.

But Phillips’ own argument doesn’t plumb the depths of the game metaphor in this way. Instead she has recourse to something she calls, inspired by Latour (2004), a ‘collective multiplicity of critical thought’. She also claims that STS hasn’t followed Latour on this point. As a matter of fact, STS has followed Latour almost religiously on this point, which has resulted in a diffusion of critical impact. The field basically amplifies consensus where it exists, showing how it has been maintained, and amplifies dissent where it exists, similarly showing how it has been maintained. In short, STS is simply the empirical shadow of the fields it studies. That’s really all that Latour ever meant by ‘following the actors’.

People forget that this is a man who follows Michel Serres in seeing the parasite as a role model for life (Serres and Latour 1995; cf. Fuller 2000: chap. 7). If STS seems ‘critical’, that’s only an unintended consequence of the many policy issues involving science and technology which remain genuinely unresolved. STS adds nothing to settle the normative standing of these matters. It simply elaborates them and in the process perhaps reminds people of what they might otherwise wish to forget or sideline. It is not a worthless activity, but to call it ‘critical’ in any meaningful sense would be to do it too much justice, as Latour (2004) himself realizes.

Have STSers Always Been Cheese-Eating Surrender Monkeys?

Notwithstanding the French accent and the Inspector Clouseau demeanour, Latour’s modus operandi is reminiscent of ordinary language philosophy, that intellectual residue of British imperialism, which in the mid-twentieth century led many intelligent people to claim that the sophisticated English practiced in Oxbridge common rooms cuts the world at the joints. Although Ernest Gellner (1959) provided the consummate take-down of the movement—to much fanfare in the media at the time—ordinary language philosophy persisted well into the 1980s, along the way influencing the style of ethnomethodology that filtered into STS. (Cue the corpus of Michael Lynch.)

Ontology was effectively reduced to a reification of the things that the people in the room were talking about and the relations predicated of them. And where the likes of JL Austin and PF Strawson spoke of ‘grammatical usage’, Latour and his followers refer to ‘semiotic network’, largely to avoid the anthropomorphism from which the ordinary language philosophers had suffered—alongside their ethnocentrism. Nevertheless, both the ordinary language folks and Latour think they’re doing an empirically informed metaphysics, even though they’re really just eavesdropping on themselves and the people in whose company they’ve been recently kept. Latour (1992) is the classic expression of STS self-eavesdropping, as our man Bruno meditates on the doorstop, the seatbelt, the key and other mundane technologies with which he can never quite come to terms, which results in his life becoming one big ethnomethodological ‘breaching experiment’.

All of this is a striking retreat from STS’s original commitment to the Edinburgh School’s ‘symmetry principle’, which was presented as an intervention in epistemology rather than ontology. In this guise STS was seen as threatening rather than merely complementing the established normative order because the symmetry principle, notwithstanding its vaunted neutrality, amounted to a kind of judgemental relativism, whereby ‘winning’ in science was downgraded to a contingent achievement, which could have been—and might still be—reversed under different circumstances. This was the spirit in which Shapin and Schaffer (1985) appeared to be such a radical book: It had left the impression that the truth is no more than the binding outcome of a trial of people and things: that is, a ‘game’ in its full and demystified sense.

While I have always found this position problematic as an end in itself, it is nonetheless a great opening move to acquire an alternative normative horizon from that offered by the scientific establishment, since it basically amounts to an ‘equal time’ doctrine in an arena where opponents are too easily mischaracterised and marginalised, if not outright silenced by being ‘consigned to the dustbin of history’. Indeed, as Kuhn had recognized, the harder the science, the clearer the distinction between the discipline and its history.

However, this normative animus began to disappear from STS once Latour’s actor-network theory became the dominant school around the time of the Science Wars in the mid-1990s. It didn’t take long before STS had become supine to the establishment, exemplified by Latour’s (2004) uncritical acceptance of the phrase ‘artificially maintained controversies’, which no doubt meets with the approval of Erik Baker and Naomi Oreskes (Baker and Oreskes 2017). For my own part, when I first read Latour (2004), I was reminded of Donald Rumsfeld’s phrase from the same period, albeit in the context of France’s refusal to support the Iraq War: ‘cheese-eating surrender monkey’.

Nevertheless, Latour’s surrender has stood STS in good stead, rendering it a reliable reflector of all that it observes. But make no mistake: Despite the radical sounding rhetoric of ‘missing masses’ and ‘parliament of things’, STS in the Latourian moment follows closely in the footsteps of ordinary language philosophy, which enthusiastically subscribed to the Wittgensteinian slogan of ‘leaving the world alone’. The difference is that whereas the likes of Austin and Strawson argued that our normal ways of speaking contain many more insights into metaphysics than philosophers had previously recognized, Latour et al. show that taking seriously what appears before our eyes makes the social world much more complicated than sociologists had previously acknowledged. But the lesson is the same in both cases: Carry on treating the world as you find it as ultimate reality—simply be more sensitive to its nuances.

It is worth observing that ordinary language philosophy and actor-network theory, notwithstanding their own idiosyncrasies and pretensions, share a disdain for a kind of philosophy or sociology, respectively, that adopts a ‘second order’ perspective on its subject matter. In other words, they were opposed to what Strawson called ‘revisionary metaphysics’, an omnibus phrase that was designed to cover both German idealism and logical positivism, the two movements that did the most to re-establish the epistemic authority of academics in the modern era. Similarly, Latour’s hostility to a science of sociology in the spirit of Emile Durkheim is captured in the name he chose for his chair at Sciences Po, Gabriel Tarde, the magistrate who moved into academia and challenged Durkheim’s ontologically closed sense of sociology every step of the way. In both cases, the moves are advertised as democratising but in practice they’re parochialising, since those hidden nuances and missing masses are supposedly provided by acts of direct acquaintance.

Cue Sismondo (2017), who as editor of the journal Social Studies of Science operates in a ‘Latour Lite’ mode: that is, all of the method but none of the metaphysics. First, he understands ‘post-truth’ in the narrowest possible context, namely, as proposed by those who gave the phenomenon its name—and negative spin—to make it the Oxford Dictionaries 2016 word of the year. Of course, that’s in keeping with the Latourian dictum of ‘Follow the agents’. But it is also to accept the agents’ categories uncritically, even if it means turning a blind eye to STS’s own role in promoting the epistemic culture responsible for ‘post-truth’, regardless of the normative value that one ultimately places on the word.

Interestingly, Sismondo is attacked on largely the same grounds by someone with whom I normally disagree, namely, Harry Collins (Collins, Evans, Weinel 2017). Collins and I agree that STS naturally lends itself to a post-truth epistemology, a fact that the field avoids at its peril. However, I believe that STS should own post-truth as a feature of the world that our field has helped to bring about—to be sure, not ex nihilo but by creatively deploying social and epistemological constructivism in an increasingly democratised context. In contrast, while Collins concedes that STS methods can be used even by our political enemies, he calls on STS to follow his own example by using its methods to demonstrate that ‘expert knowledge’ makes an empirical difference to the improvement of judgement in a variety of arenas. As for the politically objectionable uses of STS methods, here Collins and I agree that they are worth opposing but an adequate politics requires a different kind of work from STS research.

In response to all this, Sismondo retreats to STS’s official self-understanding as a field immersed in the detailed practices of all that it studies—as opposed to those post-truth charlatans who simply spin words to create confusion. But the distinction is facile and perhaps disingenuous. The clearest manifestation that STS attends to the details of technoscientific practice is the complexity—or, less charitably put, complication—of its own language. The social world comes to be populated by so many entities, properties and relations simply because STS research is largely in the business of naming and classifying things, with an empiricist’s bias towards treating things that appear different as really different. It is this discursive strategy that results in the richer ontology that one typically finds in STS articles, which in turn is supposed to leave the reader with the sense that the STS researcher has a deeper and more careful understanding of what s/he has studied. But in the end, it is just a discursive strategy, not a mathematical proof. There is a serious debate to be had about whether the field’s dedication to detail—‘ontological inventory work’—is truly illuminating or obfuscating. However, it does serve to establish a kind of ‘expertise’ for STS.

Why Science Has Never Had Need for Consensus—But Got It Anyway

My double question to anyone who wishes to claim a ‘scientific consensus’ on anything is on whose authority and on what basis such a statement is made. Even that great defender of science, Karl Popper, regarded scientific facts as no more than conventions, agreed mainly to mark temporary settlements in an ongoing journey. Seen with a rhetorician’s eye, a ‘scientific consensus’ is demanded only when scientific authorities feel that they are under threat in a way that cannot be dismissed by the usual peer review processes. ‘Science’ after all advertises itself as the freest inquiry possible, which suggests a tolerance for many cross-cutting and even contradictory research directions, all compatible with the current evidence and always under review in light of further evidence. And to a large extent, science does demonstrate this spontaneous embrace of pluralism, albeit with the exact options on the table subject to change. To be sure, some options are pursued more vigorously than others at any given moment. Scientometrics can be used to chart the trends, which may make the ‘science watcher’ seem like a stock market analyst. But this is more ‘wisdom of crowds’ stuff than a ‘scientific consensus’, which is meant to sound more authoritative and certainly less transient.

Indeed, invocations of a ‘scientific consensus’ become most insistent on matters which have two characteristics, which are perhaps necessarily intertwined but, in any case, take science outside of its juridical comfort zone of peer review: (1) they are inherently interdisciplinary; (2) they are policy-relevant. Think climate change, evolution, anything to do with health. A ‘scientific consensus’ is invoked on just these matters because they escape the ‘normal science’ terms in which peer review operates. To a defender of the orthodoxy, the dissenters appear to be ‘changing the rules of science’ simply in order to make their case seem more plausible. However, from the standpoint of the dissenter, the orthodoxy is artificially restricting inquiry in cases where reality doesn’t fit its disciplinary template, and so perhaps a change in the rules of science is not so out of order.

Here it is worth observing that defenders of the ‘scientific consensus’ tend to operate on the assumption that to give the dissenters any credence would be tantamount to unleashing mass irrationality in society. Fortified by the fledgling (if not pseudo-) science of ‘memetics’, they believe that an anti-scientific latency lurks in the social unconscious. It is a susceptibility typically fuelled by religious sentiments, which the dissenters threaten to awaken, thereby reversing all that modernity has achieved.

I can’t deny that there are hints of such intent in the ranks of dissenters. One notorious example is the Discovery Institute’s ‘Wedge document’, which projected the erosion of ‘methodological naturalism’ as the ‘thin edge of the wedge’ to return the US to its Christian origins. Nevertheless, the paranoia of the orthodoxy underestimates the ability of modernity—including modern science—to absorb and incorporate the dissenters, and come out stronger for it. The very fact that intelligent design theory has translated creationism into the currency of science by leaving the Bible entirely out of its argumentation strategy should be seen as evidence for this point. And now Darwinists need to try harder to defeat it, as we see in their increasingly sophisticated refutations, which often end with Darwinists effectively conceding points and simply admitting that they have their own way of making their opponents’ points, without having to invoke an ‘intelligent designer’.

In short, my main objection to the concept of a ‘scientific consensus’ is that it is epistemologically oversold. It is clearly meant to carry more normative force than whatever happens to be the cutting edge of scientific fashion this week. Yet, what is the life expectancy of the theories around which scientists congregate at any given time?  For example, if the latest theory says that the planet is due for climate meltdown within fifty years, what happens if the climate theories themselves tend to go into meltdown after about fifteen years? To be sure, ‘meltdown’ is perhaps too strong a word. The data are likely to remain intact and even be enriched, but their overall significance may be subject to radical change. Moreover, this fact may go largely unnoticed by the general public, as long as the scientists who agreed to the last consensus are also the ones who agree to the next consensus. In that case, they can keep straight their collective story of how and why the change occurred—an orderly transition in the manner of dynastic succession.

What holds this story together—and is the main symptom of the epistemic overselling of scientific consensus—is a completely gratuitous appeal to ‘truth’ or ‘truth-seeking’ (aka ‘veritism’) as somehow underwriting this consensus. Baker and Oreskes’ (2017) argument is propelled by this trope. Yet, interestingly, early on even they refer to ‘attempts to build public consensus about facts or values’ (my emphasis). This turn of phrase comports well with the normal constructivist sense of what consensus is. Indeed, there is nothing wrong with trying to align public opinion with certain facts and values, even on the grand scale suggested by the idea of a ‘scientific consensus’. This is the stuff of politics as usual. However, whatever consensus is thereby forged—by whatever means and across whatever range of opinion—has no ‘natural’ legitimacy. Moreover, it neither corresponds to some pre-existent ideal of truth nor is composed of some invariant ‘truth stuff’ (cf. Fuller 1988: chap. 6). It is a social construction, full stop. If the consensus is maintained over time and space, it will not be due to its having been blessed and/or guided by ‘Truth’; rather it will be the result of the usual social processes and associated forms of resource mobilization—that is, a variety of external factors which at crucial moments impinge on the play of any game.

The idea that consensus enjoys some epistemologically more luminous status in science than in other parts of society (where it might be simply dismissed as ‘groupthink’) is an artefact of the routine rewriting of history that scientists do to rally their troops. As Kuhn long ago observed, scientists exaggerate the degree of doctrinal agreement to give forward momentum to an activity that is ultimately held together simply by common patterns of disciplinary acculturation and day-to-day work practices. Nevertheless, Kuhn’s work helped to generate the myth of consensus. Indeed, in my Cambridge days studying with Mary Hesse (circa 1980), the idea that an ultimate consensus on the right representation of reality might serve as a transcendental condition for the possibility of scientific inquiry was highly touted, courtesy of the then fashionable philosopher Jürgen Habermas, who flattered his Anglophone fans by citing Charles Sanders Peirce as his source for the idea. Yet even back then I was of a different mindset.

Under the influence of Foucault, Derrida and social constructivism (which were circulating in more underground fashion), as well as what I had already learned about the history of science (mainly as a student of Loren Graham at Columbia), I deemed the idea of a scientific consensus to reflect a secular ‘god of the gaps’ style of wishful thinking. Indeed I devoted a chapter of my Ph.D. to the ‘elusiveness’ of consensus in science, which was the only part of the thesis that I incorporated in Social Epistemology (Fuller 1988: chap. 9). It is thus very disappointing to see Baker and Oreskes continuing to peddle Habermas’ brand of consensus mythology, even though for many of us it had fallen stillborn from the press more than three decades ago.

A Gaming Science Is a Free Science

Baker and Oreskes (2017) are correct to pick up on the analogy drawn by David Bloor between social constructivism’s scepticism with regard to transcendent conceptions of truth and value and the scepticism that the Austrian school of economics (and most economists generally) show to the idea of a ‘just price’, understood as some normative ideal that real prices should be aiming toward. Indeed, there is more than an analogy here. Alfred Schutz, teacher of Peter Berger and Thomas Luckmann of The Social Construction of Reality fame, was himself a member of the Mises Circle in Vienna, having been trained by Mises in the law faculty. Market transactions provided the original template for the idea of ‘social construction’, a point that is already clear in Adam Smith.

However, in criticizing Bloor’s analogy, Baker and Oreskes miss a trick: When the Austrians and other economists talk about the normative standing of real prices, their understanding of the market is somewhat idealized; hence, one needs a phrase like ‘free market’ to capture it. This point is worth bearing in mind because it amounts to a competing normative agenda to the one that Baker and Oreskes are promoting. With the slow ascendancy of neo-liberalism over the second half of the twentieth century, that normative agenda became clear—namely, to make markets free so that real prices can prevail.

Here one needs to imagine that in such a ‘free market’ there is a direct correspondence between increasing the number of suppliers in the market and the greater degree of freedom afforded to buyers, as that not only drives the price down but also forces buyers to refine their choice. This is the educative function performed by markets, an integral social innovation in terms of the Enlightenment mission advanced by Smith, Condorcet and others in the eighteenth century (Rothschild 2002). Markets were thus promoted as efficient mechanisms that encourage learning, with the ‘hand’ of the ‘invisible hand’ best understood as that of an instructor. In this context, ‘real prices’ are simply the actual empirical outcomes of markets under ‘free’ conditions. Contra Baker and Oreskes, they don’t correspond to some a priori transcendental realm of ‘just prices’.

However, markets are not ‘free’ in the requisite sense as long as the state strategically blocks certain spontaneous transactions, say, by placing tariffs on suppliers other than the officially licensed ones or by allowing a subset of market agents to organize in ways that enable them to charge tariffs to outsiders who want access. In other words, the free market is not simply about lower taxes and fewer regulations. It is also about removing subsidies and preventing cartels. It is worth recalling that Adam Smith wrote The Wealth of Nations as an attack on ‘mercantilism’, an economic system not unlike the ‘socialist’ ones that neo-liberalism has tried to overturn with its appeal to the ‘free market’. In fact, one of the early neo-liberals (aka ‘ordo-liberals’), Alexander Rüstow, coined the phrase ‘liberal interventionism’ in the 1930s for the strong role that he saw for the state in freeing the marketplace, say, by breaking up state-protected monopolies (Jackson 2009).

Capitalists defend private ownership only as part of the commodification of capital, which, in turn, allows trade to occur. Capitalists are not committed to an especially land-oriented approach to private property, as in feudalism, which through, say, inheritance laws restricts the flow of capital in order to stabilise the social order. To be sure, capitalism requires that traders know who owns what at any given time, which in turn supports clear ownership signals. However, capitalism flourishes only if the traders are inclined to part with what they already own to acquire something else. After all, wealth cannot grow if capital doesn’t circulate. The state thus serves capitalism by removing the barriers that lead people to accept too easily their current status as an adaptive response to situations that they regard as unchangeable. Thus, liberalism, the movement most closely aligned with the emerging capitalist sensibility, was originally called ‘radical’—from the Latin for ‘root’—as it promised to organize society according to humanity’s fundamental nature, the full expression of which was impeded by existing regimes, which failed to allow everyone what by the twentieth century would be called ‘equal opportunity’ in life (Halevy 1928).

I offer this more rounded picture of the normative agenda of free market thinkers because Baker and Oreskes engage in a rhetorical sleight of hand associated with the capitalists’ original foes, the mercantilists. It involves presuming that the public interest is best served by state-authorised producers (of whatever). Indeed, when one speaks of the early modern period in Europe as the ‘Age of Absolutism’, this elision of the state and the public is an important part of what is meant. True to its Latin roots, the ‘state’ is the anchor of stability, the stationary frame of reference through which everything else is defined. Here one immediately thinks of Newton, but metaphysically more relevant was Hobbes, whose absolutist conception of the state aimed to incarnate the Abrahamic deity in human form, the literal body of which is the body politic.

Setting aside the theology, mercantilism in practice aimed to reinvent and rationalize the feudal order for the emerging modern age, one in which ‘industry’ was increasingly understood as not a means to an end but an end in itself—specifically, not simply a means to extract the fruits of nature but an expression of human flourishing. Thus, political boundaries on maps started to be read as the skins of superorganisms, which by the nineteenth century came to be known as ‘nation-states’. In that case, the ruler’s job was not simply to keep the peace over what had been largely self-managed tracts of land, but rather to ‘organize’ them so that they functioned as a single productive unit, what we now call the ‘economy’, whose first theorization was as ‘physiocracy’. The original mercantilist policy involved royal licenses that assigned exclusive rights to a ‘domain’ understood in a sense that was not restricted to tracts of land, but extended to wealth production streams in general. To be sure, over time these rights were attenuated into privileges and subsidies, which allowed for some competition but typically on an unequal basis.

In contrast, capitalism’s ‘liberal’ sensibility was about repurposing the state’s power to prevent the rise of new ‘path dependencies’ in the form of, say, a monopoly in trade based on an original royal license renewed in perpetuity, which would only serve to reduce the opportunities of successive generations. It was an explicitly anti-feudal policy. The final frontier to this policy sensibility is academia, which has long been acknowledged to be structured in terms of what Robert Merton called the principle of ‘cumulative advantage’, the sources of which are manifold and, to a large extent, mutually reinforcing. To list just a few: (1) state licenses issued to knowledge producers, starting with the Charter of the Royal Society of London, which provided a perpetually protected space for a self-organizing community to do as they will within originally agreed constraints; (2) Kuhn-style paradigm-driven normal science, which yields to a successor paradigm only out of internal collapse, not external competition; (3) the anchoring effect of early academic training on subsequent career advancement, ranging from jobs to grants; (4) the evaluation of academic work in terms of a peer review system whose remit extends beyond catching errors to judging relevance to preferred research agendas; (5) the division of knowledge into ‘fields’ and ‘domains’, which supports a florid cartographic discourse of ‘boundary work’ and ‘boundary maintenance’.

The list could go on, but the point is clear to anyone with eyes to see: Even in these neo-liberal times, academia continues to present its opposition to neo-liberalism in the sort of neo-feudal terms that would have pleased a mercantilist. Lineage is everything, whatever the source of ancestral entitlement. Merton’s own attitude towards academia’s multiple manifestations of ‘cumulative advantage’ seemed to be one of ambivalence, though as a sociologist he probably wasn’t sufficiently critical of the pseudo-liberal spin put on cumulative advantage as the expression of the knowledge system’s ‘invisible hand’ at work—which seems to be Baker and Oreskes’ default position as defenders of the scientific status quo. However, their own Harvard colleague, Alex Csiszar (2017), has recently shown that Merton recognized that the introduction of scientometrics in the 1960s—in the form of the Science Citation Index—made academia susceptible to a tendency that he had already identified in bureaucracies, ‘goal displacement’, whereby once a qualitative goal is operationalized in terms of a quantitative indicator, there is an incentive to work toward the indicator, regardless of its actual significance for achieving the original goal. Thus, through this cumulative effect, high citation counts become surrogates for ‘truth’ or some other indicator-transcendent goal. In this real sense, what is at best the wisdom of the scientific crowd is routinely mistaken for an epistemically luminous scientific consensus.

As I pointed out in Fuller (2017), which initiated this recent discussion of ‘science as game’, a great virtue of the game idea is its focus on the reversibility of fortunes, as each match matters, not only to the objective standing of the rival teams but also to their subjective sense of momentum. Yet, from their remarks about intelligent design theory, Baker and Oreskes appear to believe that the science game ends sooner than it really does: After one or even a series of losses, a team should simply pack it in and declare defeat. Here it is worth recalling that the existence of atoms and the relational character of space-time—two theses associated with Einstein’s revolution in physics—were controversial if not deemed defunct for most of the nineteenth century, notwithstanding the problems that were acknowledged to exist in fully redeeming the promises of the Newtonian paradigm. Indeed, for much of his career, Ernst Mach was seen as a crank who focussed too much on the lost futures of past science, yet after the revolutions in relativity and quantum mechanics his reputation flipped and he became known for his prescience. Thus, the Vienna Circle that spawned the logical positivists was named in Mach’s honour.

Similarly, intelligent design may well be one of those ‘controversial if not defunct’ views that will be integral to the next revolution in biology, since even biologists whom Baker and Oreskes probably respect admit that there are serious explanatory gaps in the Neo-Darwinian synthesis.[1] That intelligent design advocates have improved the scientific character of their arguments from their creationist origins—which I am happy to admit—is not something for the movement’s opponents to begrudge. Rather, it shows that they learn from their mistakes, as any good team does when faced with a string of losses. Thus, one should expect an improvement in their performance. Admittedly, these matters become complicated in the US context, since the Constitution’s separation of church and state has been interpreted in recent times to imply the prohibition of any teaching material that is motivated by specifically religious interests, as if the Founding Fathers were keen on institutionalising the genetic fallacy! Nevertheless, this blinkered interpretation has enabled the likes of Baker and Oreskes to continue arguing with earlier versions of ‘intelligent design creationism’, very much like generals whose expertise lies in having fought the previous war. But luckily, an increasingly informed public is not so easily fooled by such epistemically rearguard actions.

References

Baker, Erik and Naomi Oreskes. “It’s No Game: Post-Truth and the Obligations of Science Studies.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 1-10.

Collins, Harry, Robert Evans, and Martin Weinel. “STS as Science or Politics?” Social Studies of Science 47, no. 4 (2017): 580–586.

Csiszar, Alex. “From the Bureaucratic Virtuoso to Scientific Misconduct: Robert K. Merton, Eugene Garfield, and Goal Displacement in Science.” Paper delivered to the annual meeting of the History of Science Society, Toronto, 9-12 November 2017.

Fuller, Steve. Social Epistemology. Bloomington, IN: Indiana University Press, 1988.

Fuller, Steve. Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press, 2000.

Fuller, Steve. “Is STS All Talk and No Walk?” EASST Review 36, no. 1 (2017): https://easst.net/article/is-sts-all-talk-and-no-walk/.

Gellner, Ernest. Words and Things. London: Routledge, 1959.

Halevy, Elie. The Growth of Philosophic Radicalism. London: Faber and Faber, 1928.

Jackson, Ben. “At the Origins of Neo-Liberalism: The Free Economy and the Strong State, 1930-47.” Historical Journal 53, no. 1 (2010): 129-51.

Latour, Bruno. “Where are the Missing Masses? The Sociology of a Few Mundane Artefacts.” In Shaping Technology/Building Society, edited by Wiebe E. Bijker and John Law, 225-258. Cambridge, MA: MIT Press, 1992.

Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30, no. 2 (2004): 225–248.

Phillips, Amanda. “Playing the Game in a Post-Truth Era.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 54-56.

Rothschild, Emma. Economic Sentiments. Cambridge, MA: Harvard University Press, 2002.

Serres, Michel, and Bruno Latour. Conversations on Science, Culture, and Time. Ann Arbor: University of Michigan Press, 1995.

Shapin, Steven, and Simon Schaffer. Leviathan and the Air-Pump. Princeton: Princeton University Press, 1985.

Sismondo, Sergio. “Not a Very Slippery Slope: A Reply to Fuller.” EASST Review 36, no. 2 (2017): https://easst.net/article/not-a-very-slippery-slope-a-reply-to-fuller/.

[1] Surprisingly for people who claim to be historians of science, Baker and Oreskes appear to have fallen for the canard that only Creationists mention Darwin’s name when referring to contemporary evolutionary theory. In fact, it is common practice among historians and philosophers of science to invoke Darwin to refer to his specifically purposeless conception of evolution, which remains the default metaphysical position of contemporary biologists—albeit one maintained with increasing conceptual and empirical difficulty. Here it is worth observing that such leading lights of the Discovery Institute as Stephen Meyer and Paul Nelson were trained in the history and philosophy of science, as was I.

Author Information: Erik Baker and Naomi Oreskes, Harvard University, ebaker@g.harvard.edu, oreskes@fas.harvard.edu

Baker, Erik and Naomi Oreskes. “It’s No Game: Post-Truth and the Obligations of Science Studies.”[1] Social Epistemology Review and Reply Collective 6, no. 8 (2017): 1-10.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3FB

Image credit: Walt Stoneburner, via flickr

In late April 2017, the voice of a once-eminent institution of American democracy issued a public statement that embodied the evacuation of norms of truth and mutual understanding from American political discourse that since the 2016 presidential election has come to be known as “post-truth.” We aren’t talking about Donald Trump, whose habitual disregard of factual knowledge is troubling, to be sure, and whose advisor, Kellyanne Conway, made “alternative facts” part of the lexicon. Rather, we’re referring to the justification issued by New York Times opinion page editor James Bennet in defense of his decision to hire columnist Bret Stephens, a self-styled “climate agnostic” who has spread the talking points of the fossil fuel industry-funded campaign to cast doubt on the scientific consensus on climate change and the integrity of climate scientists.[2] The notion of truth made no appearance in Bennet’s statement. “If all of our columnists and all of our contributors and all of our editorials agreed all the time,” he explained, “we wouldn’t be promoting the free exchange of ideas, and we wouldn’t be serving our readers very well.”[3] The intellectual merits of Stephens’ position are evidently not the point. What counts is only the ability to grease the gears of the “free exchange of ideas.”

Bennet’s defense exemplifies the ideology of the “marketplace of ideas,” particularly in its recent, neoliberal incarnation. Since the 1970s, it has become commonplace throughout much of Europe and America to evince suspicion of attempts to build public consensus about facts or values, regardless of motivation, and to maintain that the role of public-sphere institutions—including newspapers and universities—is simply to place as many private opinions as possible into competition (“free exchange”) with one another.[4] If it is meaningful to talk about a “post-truth” moment, this ideological development is surely among its salient facets. After all, “truth” has not become any more or less problematic as an evaluative concept in private life, with its countless everyday claims about the world. Only public truth claims, especially those with potential to form a basis for collective action, now seem newly troublesome. To the extent that the rise of “post-truth” holds out lessons for science studies, it is not because the discipline has singlehandedly swung a wrecking ball through conventional epistemic wisdom (as some practitioners would perhaps like to imagine[5]), but because the broader rise of marketplace-of-ideas thinking has infected even some of its most subversive-minded work.

Science as Game

In this commentary, we address and critique a concept commonly employed in theoretical science studies that is relevant to the contemporary situation: science as game. While we appreciate both the theoretical and empirical considerations that gave rise to this framework, we suggest that characterizing science as a game is epistemically and politically problematic. Like the notion of a broader marketplace of ideas, it denies the public character of factual knowledge about a commonly accessible world. More importantly, it trivializes the significance of the attempt to obtain information about that world that is as right as possible at a given place and time, and can be used to address and redress significant social issues. The result is the worst of both worlds, permitting neither criticism of scientific claims with any real teeth, nor the possibility of collective action built on public knowledge.[6] To break this stalemate, science studies must become more comfortable using concepts like truth, facts, and reality outside of the scare quotes to which they are currently relegated, and accepting that the evaluation of knowledge claims must necessarily entail normative judgments.[7]

Philosophical talk of “games” leads directly to thoughts of Wittgenstein, and to the scholar most responsible for introducing Wittgenstein to science studies, David Bloor. While we have great respect for Bloor’s work, we suggest that there are uncomfortable similarities between the concept of science as a game in science studies and the neoliberal worldview. In his 1997 Wittgenstein, Rules and Institutions, Bloor argues for an analogy between his interpretation of the later Wittgenstein’s theory of meaning (central to Bloor’s influential writing on science) and the theory of prices of the neoliberal pioneer Ludwig von Mises. “The notion of the ‘real meaning’ of a concept or a sign deserves the same scorn as economists reserve for the outdated and unscientific notion of the ‘real’ or ‘just’ price of a commodity,” Bloor writes. “The only real price is the price paid in the course of real transactions as they proceed von Fall zu Fall [from case to case]. There is no standard outside these transactions.”[8] This analogy is the core of the marketplace of ideas concept, as it would later be developed by followers of von Mises, particularly Friedrich von Hayek. Just as there is no external standard of value in the world of commodities, there is no external standard of truth, such as conformity to an empirically accessible reality, in the world of science.[9] It is “scientism” (a term that von Hayek popularized) to invoke support for scientific knowledge claims outside of the transactions of the marketplace of ideas. Just as, for von Hayek and von Mises, the notion of economic justice falls in the face of the wisdom of the marketplace, so too does the notion of truth, at least as a regulative ideal to which any individual or finite group of people can sensibly aspire.

Contra Bloor (and von Hayek), we believe that it is imperative to think outside the sphere of market-like interactions in assessing both commodity prices and conclusions about scientific concepts. The prices of everything from healthcare and housing to food, education and even labor are hot-button political and social issues precisely because they affect people’s lives, sometimes dramatically, and because markets do not, in fact, always value these goods and services appropriately. Markets can be distorted and manipulated. People may lack the information necessary to judge value (something Adam Smith himself worried about). Prices may be inflated (or deflated) for reasons that bear little relation to what people value. And, most obviously in the case of environmental issues, the true cost of economic activity may not be reflected in market prices, because pollution, health costs, and other adverse effects are externalized. There is a reason why Nicholas Stern, former chief economist of the World Bank, has called climate change the “greatest market failure ever seen.”[10] Markets can and do fail. Prices do not always reflect value. Perhaps most important, markets refuse justice and fairness as categories of analysis. As Thomas Piketty has recently emphasized, capitalism typically leads to great inequalities of wealth, and this can only be critiqued by invoking normative standards beyond the values of the marketplace.[11]

External normative standards are indispensable in a world where the outcomes of the interactions within scientific communities matter immensely to people outside those communities. This requirement functions both in the defense of science, where appropriate, and in the critique of it.[12] The history of scientific racism and sexism, for example, speaks to the inappropriateness of public deference to all scientific claims, and the necessity of principled critique.[13] Yet the indispensability of scientific knowledge to political action in contemporary societies also demands the development of standards that justify public acceptance of certain scientific claims as definitive enough to ground collective projects, such as the existence of a community-wide consensus or multiple independent lines of evidence for the same conclusion.[14] (Indeed, we regard the suggestion of standards for the organization of scientific communities by Helen Longino as one of the most important contributions of the field of social epistemology.[15])

Although we reject any general equivalency between markets and scientific communities, we agree they are indeed alike in one key way: they both need regulation. As Jürgen Habermas once wrote in critique of Wittgenstein, “language games only work because they presuppose idealizations that transcend any particular language game; as a necessary condition of possibly reaching understanding, these idealizations give rise to the perspective of an agreement that is open to criticism on the basis of validity claims.”[16] Collective problem-solving requires that these sorts of external standards be brought to bear. The example of climate change illustrates our disagreement with Bloor (and von Mises) on both counts in one fell swoop. Though neither of us is a working economist, we nonetheless maintain that it is rational—on higher-order grounds external to the social “game” of the particular disciplines—for governments to impose a price on carbon (i.e., a carbon tax or emissions trading system), in part because we accept that the natural science consensus on climate change accurately describes the physical world we inhabit, and the social scientific consensus that a carbon pricing system could help remedy the market failure that is climate change.[17]

Quietism and Critique

We don’t want to unfairly single out Bloor. The science-as-game view—and its uncomfortable resonances with marketplace-of-ideas ideology—crops up in the work of many prominent science studies scholars, even some who have quarreled publicly with Bloor and the strong programme. Bruno Latour, for example, one of Bloor’s sharpest critics, draws Hayekian conclusions from different methodological premises. While Bloor invokes social forces to explain the outcome of scientific games,[18] Latour rejects the very idea of social forces. Rather, he claims, as Margaret Thatcher famously insisted, that “there is no such thing as ‘the social’ or ‘a society.’”[19] But whereas Thatcher at least acknowledged the existence of family, for Latour there are only monadic actants, competing “agonistically” with each other until order spontaneously emerges from the chaos, just as in a game of Go (an illustration of which graces the cover of his seminal first book Laboratory Life, with Steve Woolgar).[20] Social structures, evaluative norms, even “publics,” in his more recent work, are all chimeras, devoid of real meaning until this networked process has come to fulfillment. If that view might seem to make collective action for wide-reaching social change difficult to conceive, Latour agrees: “Seen as networks, … the modern world … permits scarcely anything more than small extensions of practices, slight accelerations in the circulation of knowledge, a tiny extension of societies, minuscule increases in the number of actors, small modifications of old beliefs.”[21] Rather than planning political projects with any real vision or bite—or concluding that a particular status quo might be problematic, much less illegitimate—one should simply be patient, play the never-ending networked game, and see what happens.[22] But a choice for quietism is a choice nonetheless—“we are condemned to act,” as Immanuel Wallerstein once put it—one that supports and sustains the status quo.[23] Moreover, a sense of humility or fallibility by no means requires us to exaggerate the inevitability of the status quo or yield to the power of inertia.[24]

Latour has at least come clean about his rejection of any aspiration to “critique.”[25] But others who haven’t thrown in the towel have still been led into a similar morass by their commitment to a marketlike or playful view of science. The problem is that, if normative judgments external to the game are illegitimate, analysts are barred from making any arguments for or against particular views or practices. Only criticism of their premature exclusion from the marketplace is permitted. This standpoint interprets Bloor’s famous call for symmetry not so much as a methodological principle in intellectual analysis, but as a demand for the abandonment of all forms of epistemic and normative judgment, leading to the bizarre sight of scholars championing a widely-criticized “scientific” or intellectual cause while coyly refusing to endorse its conclusions themselves. Thus we find Bruno Latour praising the anti-environmentalist Breakthrough Institute while maintaining that he “disagrees with them all the time”; Sheila Jasanoff defending the use of made-to-order “litigation science” in courtrooms on the grounds of a scrupulous “impartiality” that rejects scholarly assessments of intellectual integrity or empirical adequacy in favor of letting “the parties themselves do more of the work of demarcation”; and Steve Fuller defending creationists’ insistence that their views should be taught in American science classrooms while remaining ostensibly “neutral” on the scientific question at issue.[26]

Fuller’s defense of creationism, in particular, shows the way that calls for “impartiality” are often, in reality, de facto side-taking: Fuller takes rhetorical tropes directly out of the creationist playbook, including his tendentious and anachronistic labeling of modern evolutionary biologists as “Darwinists.” Moreover, despite his explicit endorsement of the game view of science, Fuller refuses to accept defeat for the intelligent design project, either within the putative game of science, or in the American court system, which has repeatedly found the teaching of creationism to be unconstitutional. Furthermore, Fuller’s insistence that creationism somehow has still not received a “fair run for its money” reveals that even he cannot avoid importing external standards (in this case fairness) to evaluate scientific results! After all, who ever said that science was fair?

In short, science studies scholars’ ascetic refusal of standards of good and bad science in favor of emergent judgments immanent to the “games” they analyze has vitiated critical analysis in favor of a weakened proceduralism that has struggled to resist the recent advance of neoliberal and conservative causes in the sciences. It has led to a situation where creationism is defended as an equally legitimate form of science, where the claims of think tanks that promulgate disinformation are equated with the claims of academic scientific research institutions, and where corporations that have knowingly suppressed information pertinent to public health and safety are viewed as morally and epistemically equivalent to the plaintiffs who are fighting them. As for Fuller, leaving the question of standards unexamined and/or implicit, and relying instead on the rhetoric of the “game,” enables him to avoid the challenge of defending a demonstrably indefensible position on its actual merits.

Where the Chips Fall

In diverse cases, key evaluative terms—legitimacy, disinformation, precedent, evidence, adequacy, reproducibility, natural (vis-à-vis supernatural), and yes, truth—have been so relativized and drained of meaning that it starts to seem like a category error even to attempt to refute equivalency claims. One might argue that this is all right: as scholars, we let the chips fall where they may. The problem, however, is that they do not fall evenly. The winner of this particular “game” is almost always status quo power: the conservative billionaires, fossil fuel companies, lead and benzene and tobacco manufacturers, and others who have bankrolled think tanks and “litigation science” at the cost of biodiversity, human health, and even human lives.[27] Scientists paid by the lead industry to defend their toxic product are not just innocently trying to have their day in court; they are trying to evade legal responsibility for the damage done by their products. The fossil fuel industry is not trying to advance our understanding of the climate system; they are trying to block political action that would decrease societal dependence on their products. But there is no way to make—much less defend—such claims without a robust concept of evidence.

Conversely, the communities, already victimized by decades of poverty and racial discrimination, who rely on reliable science in their fight for their children’s safety are not unjustly trying to short-circuit a process of “demarcation” better left to the adversarial court system.[28] It is a sad irony that STS, which often sees itself as championing the subaltern, has now in many cases become the intellectual defender of those who would crush the aspirations of ordinary people.

Abandoning the game view of science won’t require science studies scholars to reinvent the wheel, much less re-embrace Comtean triumphalism. On the contrary, there are a wide variety of perspectives from the history of epistemology, philosophy of science, and feminist, anti-racist, and anti-colonialist theory that permit critique that can be both epistemic and moral. One obvious source, championed by intellectual historians such as James Kloppenberg and philosophers such as Hilary Putnam and Jürgen Habermas, is the early American pragmatism of John Dewey and William James, a politically constructive alternative to both naïve foundationalism and the textualist rejection of the concept of truth found in the work of more recent “neo-pragmatists” like Richard Rorty.[29] Nancy Cartwright, Thomas Uebel, and John O’Neill have similarly reminded us of the intellectual and political potential in the (widely misinterpreted, when not ignored) “left Vienna Circle” philosophy of Otto Neurath.[30]

In a slightly different vein, Charles Mills, inspired in part by the social science of W.E.B. Du Bois, has insisted on the importance of a “veritistic” epistemological stance in characterizing the ignorance produced by white supremacy.[31] Alison Wylie has emphasized the extent to which many feminist critics of science “are by no means prepared to concede that their accounts are just equal but different alternatives to those they challenge,” but in fact often claim that “research informed by a feminist angle of vision … is simply better in quite conventional terms.”[32] Steven Epstein’s work on AIDS activism demonstrates that social movements issuing dramatic challenges to biomedical and scientific establishments can make good use of unabashed claims to genuine knowledge and “lay” expertise. Epstein’s work also serves as a reminder that moral neutrality is not the only, much less the best, route to rigorous scholarship.[33] Science studies scholars could also benefit from looking outside their immediate disciplinary surroundings to debates about poststructuralism in the analysis of (post)colonialism initiated by scholars like Benita Parry and Masao Miyoshi, as well as the emerging literature in philosophy and sociology about the relationship of the work of Michel Foucault to neoliberalism.[34]

For our own part, we have been critically exploring the implications of the institutional and financial organization of science during the Cold War and the recent neoliberal intensification of privatization in American society.[35] We think that this work suggests a further descriptive inadequacy in the science-as-game view, in addition to the normative inadequacies we have already described. In particular, it drives home the extent to which the structure of science is not constant. From the longitudinal perspective available to history, as opposed to a sociological or ethnographic snapshot, it is possible to resolve the powerful societal forces—government, industry, and so on—driving changes in the way science operates, and to understand the way those scientific changes relate to broader political-economic imperatives and transformations. Rather than throwing up one’s hands and insisting that incommensurable particularity is all there is, science studies scholars might instead take a theoretical position that will allow us to characterize and respond to the dramatic transformations of academic work that are happening right now, and from which the humanities are by no means exempt.[36]

Academics must not treat themselves as isolated from broader patterns of social change, or worse, deny that change is a meaningful concept outside of the domain of microcosmic fluctuations in social arrangements. Powerful reactionary forces can reshape society and science (and reshape society through science) in accordance with their values; progressive movements in and outside of science have the potential to do the same. We are concerned that the “game” view of science traps us instead inside a Parmenidean field of homogeneous particularity, an endless succession of games that may be full of enough sound and fury to interest scholars but still signify nothing overall.

Far from rendering science studies Whiggish or simply otiose, we believe that a willingness to discriminate, outside of scare quotes, between knowledge and ignorance or truth and falsity is vital for a scholarly agenda that respects one of the insights that scholars like Jasanoff have repeatedly and compellingly championed: in contemporary democratic polities, science matters. In a world where physicists state that genetic inferiority is the cause of poverty among black Americans, where lead paint manufacturers insist that their product does no harm to infants and children, and where actresses encourage parents not to vaccinate their children against infectious diseases, an inability to discriminate between information and disinformation—between sense and nonsense (as the logical positivists so memorably put it)—is not simply an intellectual failure. It is a political and moral failure as well.

The Brundtland Commission famously defined “sustainable development” as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” Like the approach we are advocating here, this definition treats the empirical and the normative as enfolded in one another. It sees them not as constructions that emerge stochastically in the fullness of time, but as questions that urgently demand robust answers in the present. One reason science matters so much in the present moment is its role in determining which activities are sustainable, and which are not. But if scientists are to make such judgments, then we, as science studies scholars, must be able to judge the scientists—positively as well as critically. Lives are at stake. We are not here merely to stand on the sidelines insisting that all we can do is ensure that all voices are heard, no matter how silly, stupid, or nefarious.

[1] We would like to thank Robert Proctor, Mott Greene, and Karim Bschir for reading drafts and providing helpful feedback on this piece.

[2] For an analysis of Stephens’ column, see Robert Proctor and Steve Lyons, “Soft Climate Denial at The New York Times,” Scientific American, May 8, 2017; for the history of the campaign to cast doubt on climate change science, see Naomi Oreskes and Erik M. Conway, Merchants of Doubt (Bloomsbury Press, 2010); for information on the funding of this campaign, see in particular Robert J. Brulle, “Institutionalizing delay: foundation funding and the creation of U.S. climate change counter-movement organizations,” Climatic Change 122 (4), 681–694, 2013.

[3] Accessible at https://twitter.com/ErikWemple/status/858737313601507329.

[4] For the recency of the concept, see Stanley Ingber, “The Marketplace of Ideas: A Legitimizing Myth,” Duke Law Journal, February 1984. The significance of the epistemological valorization of the marketplace of ideas to the broader neoliberal project has been increasingly well-understood by historians of neoliberalism; it is an emphasis, for instance, of the approach taken by the contributors to Philip Mirowski and Dieter Plehwe, eds., The Road from Mont Pèlerin (Harvard, 2009), especially Mirowski’s “Postface.”

[5] Bruno Latour, “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern,” Critical Inquiry vol. 30 (Winter 2004).

[6] See for instance John Ziman, Public Knowledge: An Essay Concerning the Social Dimension of Science (Cambridge University Press, 1968); as well as the many more recent perspectives we hold up below as exemplary of alternative approaches.

[7] Naomi Oreskes and Erik M. Conway. “Perspectives on global warming: A Book Symposium with Steven Yearley, David Mercer, and Andy Pitman.” Metascience vol. 21, pp. 531-559, 2012.

[8] David Bloor, Wittgenstein, Rules and Institutions (Routledge, 1997), pp. 76-77.

[9] As suggested by Helen Longino in The Fate of Knowledge (Princeton University Press, 2002) as an alternative to the more vexed notion of “correspondence,” fraught with metaphysical difficulties Longino hopes to skirt. In Austrian economics, this rejection of the search for empirical, factual knowledge initially took the form, in von Mises’ thought, of the ostensibly purely deductive reasoning he called “praxeology,” which was supposed to analytically uncover the immanent principles governing the economic game. Von Hayek went further, arguing that economics at its most rigorous merely theoretically explicates the limits of positive knowledge about empirical social realities. See, for instance, Friedrich von Hayek, “On Coping with Ignorance,” Ludwig von Mises Lecture, 1978.

[10] Nicholas H. Stern, The Economics of Climate Change: The Stern Review (Cambridge University Press, 2007).

[11] Thomas Piketty, Capital in the Twenty-First Century (Harvard/Belknap, 2013). In addition to critiquing market outcomes, philosophers have also invoked concepts of justice and fairness to challenge the extension of markets to new domains; see for example Michael Sandel, What Money Can’t Buy: The Moral Limits of Markets (Farrar, Straus, and Giroux, 2013) and Harvey Cox, The Market as God (Harvard University Press, 2016). This is also a theme in the Papal Encyclical on Climate Change and Inequality, Laudato Si. https://laudatosi.com/watch

[12] For more on this point, see Naomi Oreskes, “Systematicity is Necessary but Not Sufficient: On the Problem of Facsimile Science,” in press, Synthese.

[13] See among others Helen Longino, Science as Social Knowledge (Princeton University Press, 1990); Londa Schiebinger, Has Feminism Changed Science? (Harvard University Press, 1999); Sandra Harding, Science and Social Inequality: Feminist and Postcolonial Issues (University of Illinois Press, 2006); Donna Haraway, Primate Visions: Gender, Race, and Nature in the World of Modern Science (Routledge, 1989); Evelynn Hammonds and Rebecca Herzig, The Nature of Difference: Sciences of Race in the United States from Jefferson to Genomics (MIT Press, 2008).

[14] Naomi Oreskes, “Trust in Science?” Tanner Lecture on Human Values, Princeton University, November 30, 2016; Naomi Oreskes, “The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong?” in Joseph F. C. DiMento and Pamela Doughman, eds., Climate Change: What It Means for Us, Our Children, and Our Grandchildren (MIT Press, 2007), pp. 65-99.

[15] Helen Longino, Science as Social Knowledge (Princeton University Press, 1990), and The Fate of Knowledge (Princeton University Press, 2002).

[16] Jürgen Habermas, The Philosophical Discourse of Modernity (MIT Press, 1984), p. 199.

[17] See, for instance, Naomi Oreskes, “Without government, the market will not solve climate change: Why a meaningful carbon tax may be our only hope,” Scientific American (December 22, 2015); and Naomi Oreskes and Jeremy Jones, “Want to protect the climate? Time for carbon pricing,” Boston Globe (May 3, 2017).

[18] Along with a purportedly empirical component that, as Latour has compellingly argued, is “canceled out” of the final analysis because it is common to both parties in a dispute. See Bruno Latour, “For Bloor and Beyond: a Reply to David Bloor’s Anti-Latour,” Studies in History and Philosophy of Science, vol. 30 (1), pp. 113-129, March 1999.

[19] Bruno Latour, Reassembling the Social: An Introduction to Actor-Network Theory (Oxford University Press, 2007), p. 5; this theme is an emphasis of his entire oeuvre. On Thatcher, see http://briandeer.com/social/thatcher-society.htm and James Meek, Private Island (Verso, 2014).

[20] Bruno Latour and Steve Woolgar, Laboratory Life: The Construction of Scientific Facts (Sage, 1979; 2nd ed., Princeton University Press, 1986); Bruno Latour, Science in Action (Harvard University Press, 1987). In Laboratory Life this emergence of order from chaos is explicitly analyzed as the outcome of a kind of free market in scientific “credit.” Spontaneous order is one of the foundational themes of Hayekian thought, and the game of Go is an often-employed analogy there as well. See, for instance, Peter Boettke, “The Theory of Spontaneous Order and Cultural Evolution in the Social Theory of F.A. Hayek,” Cultural Dynamics, vol. 3 (1), pp. 61-83, 1990; Gustav von Hertzen, The Spirit of the Game (CE Fritzes AB, 1993), especially chapter 4.

[21] Bruno Latour, We Have Never Been Modern (Harvard University Press, 1993), pp. 47-48; for his revision of the notion of the public, see for example Latour’s Politics of Nature (Harvard University Press, 2004). For a more in-depth discussion of Latour vis-à-vis neoliberalism, see Philip Mirowski, “What Is Science Critique? Part 1: Lessig, Latour,” keynote address to Workshop on the Changing Political Economy of Research and Innovation, UCSD, March 2015.

[22] Our criticism here is not merely hypothetical. Latour’s long-time collaborator Michel Callon and the legal scholar David S. Caudill, for example, have both used Latourian actor-network theory to argue that critics of the privatization of science such as Philip Mirowski are mistaken and analysts should embrace, or at least concede the inevitability of, “hybrid” science that responds strongly to commercial interests. See Michel Callon, “From Science as an Economic Activity to Socioeconomics of Scientific Research,” in Philip Mirowski and Esther-Mirjam Sent, eds. Science Bought and Sold (University of Chicago Press, 2002); and David S. Caudill, “Law, Science, and the Economy: One Domain?” UC Irvine Law Review vol. 5 (393), pp. 393-412, 2015.

[23] Immanuel Wallerstein, The Essential Wallerstein (The New Press, 2000), p. 432.

[24] Naomi Oreskes, “On the ‘reality’ and reality of anthropogenic climate change,” Climatic Change vol. 119, pp. 559-560, 2013, especially p. 560 n. 4. Many philosophers have made this point. Hilary Putnam, for example, has argued that fallibilism actually demands a critical attitude, one that seeks to modify beliefs for which there is sufficient evidence to believe that they are mistaken, while also remaining willing to make genuine knowledge claims on the basis of admittedly less-than-perfect evidence. See his Realism with a Human Face (Harvard University Press, 1990), and Pragmatism: An Open Question (Oxford, 1995) in particular.

[25] Bruno Latour, “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern,” Critical Inquiry vol. 30 (Winter 2004).

[26] “Bruno Latour: Modernity is a Politically Dangerous Goal,” November 2014 interview with Latour by Patricia Junge, Colombina Schaeffer and Leonardo Valenzuela of Verdeseo; Zoë Corbyn, “Steve Fuller: Designer trouble,” The Guardian (January 31, 2006); Sheila Jasanoff, “Representation and Re-Presentation in Litigation Science,” Environmental Health Perspectives 116(1), pp. 123–129, January 2008. Fuller also has a professional relationship with the Breakthrough Institute, but the Institute seems somewhat fonder, in their publicity materials, of their connection with Latour.

[27] Even creationism, it’s worth remembering, is a big-money movement. The Discovery Institute, perhaps the most prominent “intelligent design” advocacy organization, is bankrolled largely by wealthy Republican donors, and was co-founded by notorious Reaganite supply-side economics guru and telecom deregulation champion George Gilder. See Jodi Wilgoren, “Politicized Scholars Put Evolution on the Defensive,” New York Times, August 21, 2005. Similarly, so-called grassroots anti-tax organizations often had links to the tobacco industry. See http://www.sourcewatch.org/index.php/Americans_for_Tax_Reform_and_Big_Tobacco. The corporate exploitation of ambiguity about the contours of disinformation can, of course, also take more anodyne forms, as in manipulative use of phrases like “natural flavoring” on food packaging. We thank Mott Greene for this example.

[28] David Rosner and Gerald Markowitz, Lead Wars: The Politics of Science and the Fate of America’s Children (University of California Press, 2013). See also Gerald Markowitz and David Rosner, Deceit and Denial: The Deadly Politics of Industrial Pollution (University of California Press, 2nd edition 2013); and Stanton Glantz, ed., The Cigarette Papers (University of California Press, 1998).

[29] See James Kloppenberg, “Pragmatism: An Old Name for Some New Ways of Thinking?,” The Journal of American History, Vol. 83 (1), pp. 100-138, June 1996, which argues that Rorty misrepresents in many ways the core insights of the early pragmatists. See also Jürgen Habermas, Theory of Communicative Action (Beacon Press, vol. 1 1984, vol. 2 1987); Hilary Putnam, Reason, Truth, and History (Cambridge University Press, 1981); see also William Rehg’s development of Habermas’s ideas on science in Cogent Science in Context: The Science Wars, Argumentation Theory, and Habermas (MIT Press, 2009).

[30] Nancy Cartwright, Jordi Cat, Lola Fleck, and Thomas Uebel, Otto Neurath: Philosophy between Science and Politics (Cambridge University Press, 1996); Thomas Uebel, “Political philosophy of science in logical empiricism: the left Vienna Circle,” Studies in History and Philosophy of Science, vol. 36, pp. 754-773, 2005; John O’Neill, “Unified science as political philosophy: positivism, pluralism and liberalism,” Studies in History and Philosophy of Science, vol. 34, pp. 575-596, 2003.

[31] Charles Mills, “White Ignorance,” in Robert Proctor and Londa Schiebinger, eds., Agnotology: The Making and Unmaking of Ignorance (Stanford University Press, 2008); see also his recent Black Rights/White Wrongs (Oxford University Press, 2017).

[32] Alison Wylie, Thinking from Things: Essays in the Philosophy of Archaeology (University of California Press, 2002), p. 190. Helen Longino (Science as Social Knowledge, 1990) and Sarah Richardson (Sex Itself, University of Chicago Press, 2013) have made similar arguments about research in endocrinology and genetics.

[33] Steven Epstein, Impure Science (University of California Press, 1996); see especially pp. 13-14.

[34] See, for instance, Benita Parry, Postcolonial Studies: A Materialist Critique (Routledge, 2004); Masao Miyoshi, “Ivory Tower in Escrow,” boundary 2, vol. 27 (1), pp. 7-50, Spring 2000. On Foucault, see recently Daniel Zamora and Michael C. Behrent, eds., Foucault and Neoliberalism (Polity Press, 2016); but note also the seeds of this critique in earlier works such as Jürgen Habermas, The Philosophical Discourse of Modernity (MIT Press, 1984) and Nancy Fraser, “Michel Foucault: A ‘Young Conservative’?”, Ethics, vol. 96 (1), pp. 165-184, 1985, and “Foucault on Modern Power: Empirical Insights and Normative Confusions,” Praxis International, vol. 3, pp. 272-287, 1981.

[35] Naomi Oreskes and John Krige, eds., Science and Technology in the Global Cold War (MIT Press, 2015); Naomi Oreskes, Science on a Mission: American Oceanography in the Cold War (University of Chicago Press, forthcoming); Erik Baker, “The Ultimate Think Tank: Money and Science at the Santa Fe Institute,” manuscript in preparation.

[36] See, for instance, Philip Mirowski, Science-Mart (Harvard University Press, 2011); Wendy Brown, Undoing the Demos: Neoliberalism’s Stealth Revolution (Zone Books, 2015); Henry Giroux, Neoliberalism’s War on Higher Education (Haymarket Books, 2014); Sophia McClennen, “Neoliberalism and the Crisis of Intellectual Engagement,” Works and Days, vols. 26-27, 2008-2009.

Author Information: Amanda Phillips, Virginia Tech, akp@vt.edu

Phillips, Amanda. “Playing the Game in a Post-Truth Era.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 54-56.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3F9

Image credit: Keith Allison, via flickr

In 2008 Major League Baseball (MLB) became the last of the four major North American professional sports leagues to introduce the use of video instant replay in reviewing close or controversial calls. Soon after, in 2014, MLB permitted team managers to challenge calls made by umpires at least once during game play. To anyone even marginally familiar with the ideology of baseball in American life, the relatively late implementation of replay technology should come as no surprise. The traditions of the sport have proven resilient against the pressures of time. Baseball’s glacial pace, ill-fitting uniforms, and tired ballpark traditions harken back to a time when America’s greatness was, perhaps, clearer. I am neither the first, nor will I be the last, to state that baseball represents an idealized national conservatism—fetishized through pining nostalgia and a cult-like devotion to individual abilities and judgment. It is a team sport for those averse to the compromises of glory inherent within the act of teamwork.

The same proves true for the judgment of umpires. Instant replay usurped their individual legitimacy as knowers and interpreters of play on the diamond. The truth of play changed with the introduction of instant replay review. This goes beyond Marshall McLuhan’s reflection on the impact of instant replay on (American) football. McLuhan stated in an interview that audiences “… want to see the nature of the play. And so they’ve had to open up the play … to enable the audience to participate more fully in the process of football play.”[1]

By 2008, audiences knew how to participate in sporting events, how to adjust their voices to yell about the umpirical incompetence unfolding on screen. Instead, the introduction of review changed how truth operates within baseball. The expertise of umpires now faces the ever-present threat of challenge from both mechanical and managerial sources. Does this change, the displacement of trust in umpires, mean that baseball, like the rest of American society, has entered a regime of post-truth?

Political Post-Truth

The realities and responses to the current era of political post-truth hang heavy in the hearts of many. Steve Fuller (2017) in ‘Is STS All Talk and No Walk?’ concludes that in order to challenge the ‘deplorables’ who tout our epistemology but not our politics, we need to conceptualize our work as more of a game, a sport to be played. This argument comes out of a larger field-based conversation between Fuller and Sergio Sismondo (2017) on how STS can best respond to the post-truth world it (apparently) created.

On one hand, Sismondo looks to a future where STS researchers shore up scientific and technical institutions, or at the very least find ways to collectively defend areas once guarded by the now pariah ‘expert’.[2] On the other hand, Fuller argues that the field needs to continue its commitment to epistemic democratization—regardless of how this pursuit might upset what we understand as the social order of things. Fuller’s desire to think about scholarship as a sport serves as a call to action to recognize that our playbook of challenging truth-claims might be stolen, but that does not mean that strategies not yet imagined cannot win the game.

Our options thus appear to be that we can retreat and reify, or innovate and outwit. While I personally find Fuller’s suggestion the more intriguing of the two, I have concerns about bringing the win-lose binary of sport to the forefront of disciplinary and research priorities. While Fuller idealizes the so-called free space of game play, rarely do teams start on the even ground to which he alludes. Take, for example, the ‘mortar kick’.[3]

In 2016 the National Football League (NFL) instituted a rule change that influenced where a ball would be placed in the event of a touchback after a kickoff.[4] The change moved the ball up five yards to the 25-yard line to encourage teams to take the touchback rather than receiving the ball and trying to run to favorable field position.

This rule was created with the explicit purpose of making kickoffs safer by incentivizing a team not to jockey for field position and risk player injury. This intention was soon defeated by the New England Patriots, who started utilizing mortar kicks during kickoffs. These kicks arc extremely high in the air and aim to land around the 5-yard line. The kick does two things. It forces the receiving team to catch the ball and run for field position, and it gives the defending team additional time to get downfield to thwart the attempted run. This play, while legal, defeats the specific intentions of the rule change. The Patriots innovated game play around a barrier, but in doing so privileged strategy over safety. Such strategies are born of a crafty and vulpine spirit. Does STS want to emulate Bill Belichick and the controversy-embroiled Patriots?[5]

The Cost of Winning

The mortar kick brings to light a fault with the metaphor Fuller wishes to embrace. Despite the highly structured and rule-driven orientation of sports (and science, for that matter), the introduction of the mortar kick suggests that the drive to win comes at a cost—a cost that sacrifices values such as safety and integrity. Those of us working in STS are no strangers to how values get incorporated or discarded within scientific and technical processes. But it seems odd from a research perspective that we might begin to orient ourselves towards knowingly emulating the institutional processes we analyze, criticize, and seek to understand just to come out a temporary victor on the contemporary social battlefield. There is no doubt that the current post-truth landscape poses problems for both progressive political values and epistemic claims. But I am hesitant to follow Fuller’s metaphor to its terminus if we do not have a clear sense of which team is ours.

At the risk of invoking the equivalent of a broken record in STS, what stood out to me from Latour’s 2004 article was not the waving of a white flag, but rather the suggestion of developing a critique “with multiplication, not subtraction”. While this call does not seem to have been widely embraced by our field, I think there is room to experiment. I can envision a future STS that embraces a collective multiplicity of critical thought. Let us not concern ourselves with winning, but rather a gradual overwhelming. If “normative categories of science … are moveable feasts the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties” (Fuller 2017), let us make explicitly clear what movability does and how it comes to be. Let us conceptualize labor and research more collectively so that we more thoroughly examine the many and conflicting claims to truth which we face.

If we must play a game, let us not emulate the model that academia has placed before us. This turns out to be a game that looks a whole lot like baseball—set in its ways, individualistic, and oftentimes boring (but better with a beer in hand). Change is more disruptive in a sport reliant on tradition. But, as shown with the introduction of video review, the post-truth world makes it easier to question and challenge authority. This change can give rise not only to the deplorable but also, perhaps, to the multiple. If the only way for STS to walk the walk is to play the game, we will have to conceptualize our team—and more importantly how we work together—in more than just idioms.

References

Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective (2016): http://wp.me/p1Bfg0-3nx.

Fuller, Steve. “Is STS All Talk and No Walk?” EASST Review 36, no. 1 (2017): https://easst.net/article/is-sts-all-talk-and-no-walk/.

Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30, no. 2 (2004): 225–248.

Sismondo, Sergio. “Post-Truth?” Social Studies of Science 47, no. 1 (2017): 3-6.

[1] “Marshall McLuhan on Football 2.0” https://www.youtube.com/watch?time_continue=95&v=3A_O7M3PQ-o

[2] His mention of “physicians and patients” who would need to step up in the advent of FDA deregulation seems to overlook the many examples of institutions, scientific and otherwise, failing those they intend to serve. Studies looking at citizen science and activism show that it did not take the Trump administration to cause individuals to step into the role of self-advocate in the face of regulatory incompetence.

[3] http://www.sharpfootballanalysis.com/blog/2016/why-mortar-kicks-can-win-games-in-2016.

[4] A touchback occurs when a kicker from the defending team kicks the ball on or over the receiving team’s goal line. In the event of a touchback, the ball is placed at a specified point on the field.

[5] Sorry, Boston.