
Author Information: Steve Fuller, University of Warwick

Fuller, Steve. “Counterfactuals in the White House: A Glimpse into Our Post-Truth Times.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 1-3.

The PDF of the article gives specific page numbers.

Image credit: OZinOH, via flickr

May Day 2017 was filled with reporting and debating over a set of comments that US President Trump made while visiting Andrew Jackson’s mansion, the ‘Hermitage’, now a tourist attraction in Nashville, Tennessee. Trump said that had Jackson been deployed, he could have averted the US Civil War. Since Jackson had died about fifteen years before the war started, Trump was clearly making a counterfactual claim. However, it is an interesting claim—not least for its responses, which were fast and furious. They speak to the nature of our times.  Let me start with the academic response and then move to how I think about the matter. A helpful compendium of the responses is here.

Jim Grossman of the American Historical Association spoke for all by claiming that Trump ‘is starting from the wrong premise’. Presumably, Grossman means that slavery was so great an evil that a war over it was bound to happen. However well he meant this comment, it feeds into the anti-expert attitude of our post-truth era. Grossman seems to bar Trump from imagining that preserving the American union was more important than ending slavery—even though that was exactly how the issue was framed to most Americans 150 years ago. Scholarship is of course mainly about explaining why things happened the way they did. However, there is a temptation to conclude that things necessarily had to happen that way. Today’s post-truth culture attempts to curb this tendency. In any case, once the counterfactual door is open to other possible futures, historical expertise becomes more contestable, perhaps even democratised. The result may be that even when non-experts reach the same conclusion as the experts, they may do so for importantly different reasons.

Who was Andrew Jackson?

Andrew Jackson is normally regarded as one of the greatest US presidents, whose face is regularly seen on the twenty-dollar banknote. He was the seventh president and the first one who was truly ‘self-made’ in the sense that he was not well educated, let alone oriented towards Europe in his tastes, as had been his six predecessors. It would not be unfair to say that he was the first President who saw a clear difference between being American and being European. In this respect, his self-understanding was rather like that of the heroes of Latin American independence. He was also given to an impulsive manner of public speech, not so different from the current occupant of the Oval Office.

Jackson volunteered at age thirteen to fight in the War of Independence from Britain, the first of many times he was ready to fight for his emerging nation. Over the past fifty years, much attention has been paid to his decimation of Native American populations at various points in his career, both military and presidential, as well as his support for slavery. (Howard Zinn was largely responsible, at least at a popular level, for this recent shift in focus.) To make a long and complicated story short, Jackson was rather consistent in acting in ways that served to consolidate American national identity, even if that meant sacrificing the interests of various groups at various times—groups that arguably never recovered from the losses inflicted on them.

Perhaps Jackson’s most lasting positive legacy has been the current two-party—Democratic/Republican—political structure. Each party cuts across class lines and geographical regions. This achievement is now easy to underestimate—as the Democratic Party is now ruing. The US founding fathers were polarized about the direction that the fledgling nation should take, precisely along these divides. The struggles began in Washington’s first administration between his treasury minister Alexander Hamilton and his foreign minister Thomas Jefferson—and they persisted. Both Hamilton and Jefferson oriented themselves to Europe, Hamilton more in terms of what to imitate and Jefferson in terms of what to avoid. Jackson effectively performed a Gestalt switch, in which Europe was no longer the frame of reference for defining American domestic and foreign policy.

Enter Trump

Now enter Donald Trump, who says Jackson could have averted the Civil War, which by all counts was one of the bloodiest in US history, with more than 600,000 lives lost by most estimates. Jackson was clearly a unionist but also clearly a slaveholder. So one imagines that Jackson would have preserved the union by allowing slaveholding, perhaps in terms of some version of the ‘states’ rights’ or ‘popular sovereignty’ doctrine, which gives states discretion over how they deal with economic matters. It is not unreasonable to think that Jackson could have pulled this off, especially because the economic arguments for allowing slavery were stronger back then than is now normally remembered.

The Nobel Prize-winning economic historian Robert Fogel explored this point quite thoroughly more than forty years ago in his controversial Time on the Cross. It is not a perfect work, and its academic criticism is quite instructive about how one might better explore a counterfactual world in which slavery persisted in the US until it was no longer economically viable. Unfortunately, the politically sensitive nature of the book’s content has discouraged any follow-up. When I first read Fogel, I concluded that over time the price of slaves would come to approximate that of free labour considered over a worker’s lifetime. In other words, a slave economy would evolve into a capitalist economy without violence in the interim. Slaveholders would simply respond to changing market conditions. So, the moral question is whether it would have made sense to extend slavery for a few years before it merged with what the capitalist world took to be an acceptable way of being, namely, wage labour. Fogel added ballast to his argument by observing that slaves tended to live longer and healthier lives than freed Blacks.

Moreover, Fogel’s counterfactual was not fanciful. Some version of the states’ rights doctrine was the dominant sentiment in the US prior to the Civil War. However, there were many different versions of the doctrine, which could not rally around a common spokesperson. This allowed the clear unitary voice for abolition emanating from the Christian dissenter community in the Northern states to exert enormous force, not least on the sympathetic and ambitious country lawyer, Abraham Lincoln, who became their somewhat unlikely champion. Thus, 1860 saw a Republican Party united around Lincoln fend off three opponents in the general election.

None of this is to deny that Lincoln was right in what he did. I would have acted similarly. Moreover, he probably did not anticipate just how bloody the Civil War would turn out to be—and the lasting scars it would leave on the American psyche. But the question on the table is not whether the Civil War was a fair price to pay to end slavery. Rather, the question is whether the Civil War could have been avoided—and, more to the point of Trump’s claim, whether Jackson would have been the man to do it. The answer is perhaps yes. The price would have been that slavery would have been extended for a certain period before it became economically unviable for the slaveholders.

It is worth observing that Fogel’s main target seemed to be Marxists who argued that slavery made no economic sense and that it persisted in the US only because of racist ideology.  Fogel’s response was that slaveholders probably were racist, but such a de facto racist economic regime would not have persisted as long as it did, had both sides not benefitted from the arrangement. In other words, the success of the anti-slavery campaign was largely about the triumph of aspirational ideas over actual economic conditions. If anything, its success testifies to the level of risk that abolitionists were willing to assume on behalf of American society for the emancipation of slaves. Alexis de Tocqueville was only the most famous of foreign US commentators to notice this at the time. Abolitionists were the proactionaries of their day with regard to risk. And this is how we should honour them now.

Author Information: Steve Fuller, University of Warwick

Steve Fuller is Auguste Comte Professor of social epistemology at the University of Warwick. His latest book is The Academic Caesar: University Leadership is Hard (Sage).


Note: The following piece appeared under the title of ‘Free speech is not just for academics’ in the 27 April 2017 issue of Times Higher Education and is reprinted here with permission from the publisher.

Image credit: barnyz, via flickr

Is free speech an academic value? We might think that the self-evident answer is yes. Isn’t that why “no platforming” controversial figures usually leaves the campus involved with egg on its face, amid scathing headlines about political correctness gone mad?

However, a completely different argument against universities’ need to defend free speech can be made, one that bears no taint of political correctness. It is what I call the “Little Academia” argument. It plays on the academic impulse to retreat to a parochial sense of self-interest in the face of external pressures.

The master of this argument for the last 30 years has been Stanley Fish, the American postmodern literary critic. Fish became notorious in the 1980s for arguing that a text means whatever its community of readers thinks it means. This seemed wildly radical, but it quickly became clear – at least to more discerning readers – that Fish’s communities were gated.

This seems to be Fish’s view of the university more generally. In a recent article in the US Chronicle of Higher Education, “Free Speech Is Not an Academic Value”, written in response to the student protests at Middlebury College against the presence of Charles Murray, a political economist who takes race seriously as a variable in assessing public policies, Fish criticised the college’s administrators for thinking of themselves as “free-speech champions”. This, he said, represented a failure to observe the distinction between students’ curricular and extracurricular activities. Regarding the latter, he said, administrators’ correct role was merely as “managers of crowd control”.

In other words, a university is a gated community designed to protect the freedom only of those who wish to pursue discipline-based inquiries: namely, professional academics. Students only benefit when they behave as apprentice professional academics. They are generously permitted to organise extracurricular activities, but the university’s official attitude towards these is neutral, as long as they do not disrupt the core business of the institution.

The basic problem with this picture is that it supposes that academic freedom is a more restricted case of generalised free expression. The undertow of Fish’s argument is that students are potentially freer to express themselves outside of campus.

To be sure, this may be how things look to Fish, who hails from a country that already had a Bill of Rights protecting free speech roughly a century before the concept of academic freedom was imported to unionise academics in the face of aggressive university governing boards. However, when Wilhelm von Humboldt invented the concept of academic freedom in early 19th century Germany, it was in a country that lacked generalised free expression. For him, the university was the crucible in which free expression might be forged as a general right in society. Successive generations engaged in the “freedom to teach” and the “freedom to learn”, the two becoming of equal and reciprocal importance.

On this view, freedom is the ultimate transferable skill embodied by the education process. The ideal received its definitive modern formulation in the sociologist Max Weber’s famous 1917 lecture to new graduate students, “Science as a Vocation”.

What is most striking about it to modern ears is his stress on the need for teachers to make space for learners in their classroom practice. This means resisting the temptation to impose their authority, which may only serve to disarm the student of any choice in what to believe. Teachers can declare and justify their own choice, but must also identify the scope for reasonable divergence.

After all, if academic research is doing its job, even the most seemingly settled fact may well be overturned in the fullness of time. Students need to be provided with some sense of how that might happen as part of their education to be free.

Being open about the pressure points in the orthodoxy is complicated because, in today’s academia, certain heterodoxies can turn into their own micro-orthodoxies through dedicated degree programmes and journals. These have become the lightning rods for debates about political correctness.

Nevertheless, the bottom line is clear. Fish is wrong. Academic freedom is not just for professional academics but for students as well. The honourable tradition of independent student reading groups and speaker programmes already testifies to this. And in some contexts they can count towards satisfying formal degree requirements. Contra Little Academia, the “extra” in extracurricular should be read as intending to enhance a curriculum that academics themselves admit is neither complete nor perfect.

Of course, students may not handle extracurricular events well. But that is not about some non-academic thing called ‘crowd control’. It is simply an expression of the growth pains of students learning to be free.

Author Information: Steve Fuller, University of Warwick

Steve Fuller holds the Auguste Comte Chair in Social Epistemology at the University of Warwick. He is the author of more than twenty books, the next of which is Post-Truth: Knowledge as a Power Game (Anthem).


Note: This article originally appeared in the EASST Review 36(1) April 2017 and is republished below with the permission of the editors.

Image credit: Hans Luthart, via flickr

STS talks the talk without ever quite walking the walk. Case in point: post-truth, the offspring that the field has always been trying to disown, not least in the latest editorial of Social Studies of Science (Sismondo 2017). Yet STS can be fairly credited with having both routinized in its own research practice and set loose on the general public—if not outright invented—at least four common post-truth tropes:

1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.

2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.

3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.

4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties.

What is perhaps most puzzling from a strictly epistemological standpoint is that STS recoils from these tropes whenever such politically undesirable elements as climate change deniers or creationists appropriate them effectively for their own purposes. Normally, that would be considered ‘independent corroboration’ of the tropes’ validity, as these undesirables demonstrate that one need not be a politically correct STS practitioner to wield the tropes effectively. It is almost as if STS practitioners have forgotten the difference between the contexts of discovery and justification in the philosophy of science. The undesirables are actually helping STS by showing the robustness of its core insights as people who otherwise overlap little with the normative orientation of most STS practitioners turn them to what they regard as good effect (Fuller 2016).

Of course, STSers are free to contest any individual or group that they find politically undesirable—but on political, not methodological grounds. We should not be quick to fault undesirables for ‘misusing’ our insights, let alone apologize for, self-censor or otherwise restrict our own application of these insights, which lay at the heart of Latour’s (2004) notorious mea culpa. On the contrary, we should defer to Oscar Wilde and admit that imitation is the sincerest form of flattery. STS has enabled the undesirables to raise their game, and if STSers are too timid to function as partisans in their own right, they could try to help the desirables raise their game in response.

Take the ongoing debates surrounding the teaching of evolution in the US. The fact that intelligent design theorists are not as easily defeated on scientific grounds as young earth creationists means that when their Darwinist opponents leverage their epistemic authority on the former as if they were the latter, the politics of the situation becomes naked. Unlike previous creationist cases, the judgement in Kitzmiller v. Dover Area School District (in which I served as an expert witness for the defence) dispensed with the niceties of the philosophy of science and resorted to the brute sociological fact that most evolutionists do not consider intelligent design theory science. That was enough for the Darwinists to win the battle, but will it win them the war? Those who have followed the ‘evolution’ of creationism into intelligent design might conclude that Darwinists act in bad faith by not taking seriously that intelligent design theorists are trying to play by the Darwinists’ rules. Indeed, more than ten years after Kitzmiller, there is little evidence that Americans are any friendlier to Darwin than they were before the trial. And with Trump in the White House…?

Thus, I find it strange that in his editorial on post-truth, Sismondo extols the virtues of someone who seems completely at odds with the STS sensibility, namely, Naomi Oreskes, the Harvard science historian turned scientific establishment publicist. A signature trope of her work is the pronounced asymmetry between the natural emergence of a scientific consensus and the artificial attempts to create scientific controversy (e.g. Oreskes and Conway 2011). It is precisely this ‘no science before its time’ sensibility that STS has been spending the last half-century trying to oppose. Even if Oreskes’ political preferences tick all the right boxes from the standpoint of most STSers, she has methodologically cheated by presuming that the ‘truth’ of some matter of public concern most likely lies with what most scientific experts think at a given time. Indeed, Sismondo’s passive aggressive agonizing comes from his having to reconcile his intuitive agreement with Oreskes and the contrary thrust of most STS research.

This example speaks to the larger issue addressed by post-truth, namely, distrust in expertise, to which STS has undoubtedly contributed by circumscribing the prerogatives of expertise. Sismondo fails to see that even politically mild-mannered STSers like Harry Collins and Sheila Jasanoff do this in their work. Collins is mainly interested in expertise as a form of knowledge that other experts recognize as that form of knowledge, while Jasanoff is clear that the price that experts pay for providing trusted input to policy is that they do not engage in imperial overreach. Neither position approximates the much more authoritative role that Oreskes would like to see scientific expertise play in policy making. From an STS standpoint, those who share Oreskes’ normative orientation to expertise should consider how to improve science’s public relations, including proposals for how scientists might be socially and materially bound to the outcomes of policy decisions taken on the basis of their advice.

When I say that STS has forced both established and less than established scientists to ‘raise their game’, I am alluding to what may turn out to be STS’s most lasting contribution to the general intellectual landscape, namely, to think about science as literally a game—perhaps the biggest game in town. Consider football, where matches typically take place between teams with divergent resources and track records. Of course, the team with the better resources and track record is favoured to win, but sometimes it loses and that lone event can destabilise the team’s confidence, resulting in further losses and even defections. Each match is considered a free space where for ninety minutes the two teams are presumed to be equal, notwithstanding their vastly different histories. Francis Bacon’s ideal of the ‘crucial experiment’, so eagerly adopted by Karl Popper, relates to this sensibility as definitive of the scientific attitude. And STS’s ‘social constructivism’ simply generalizes this attitude from the lab to the world. Were STS to embrace its own sensibility much more wholeheartedly, it would finally walk the walk.


Fuller, Steve. ‘Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.’ Social Epistemology Review and Reply Collective, December 2016.

Latour, Bruno. ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.’ Critical Inquiry 30, no. 2 (2004): 225-248.

Oreskes, Naomi and Erik M. Conway. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury, 2011.

Sismondo, Sergio. ‘Post-Truth?’ Social Studies of Science 47, no. 1 (2017): 3-6.

Author Information: Lyudmila A. Markova, Russian Academy of Sciences



It is difficult to find a place for the concept of truth in social epistemology. Current philosophers disagree on the status of “truth” and “objectivity” as the basis of thinking about science. Meanwhile, the very name ‘social epistemology’ speaks to a serious and inevitable turn in our attitude toward scientific knowledge. Once epistemology becomes social, scientific knowledge is oriented not to nature, but to human beings. Epistemology, then, addresses not the laws of nature, but the process of their production by a scientist. In classical epistemology we have, as a result of scientific research, laws regarding the material reality of the world created by us. Experimental results, obtained in classical science, must be objective and true, or they become useless.

In social epistemology, scientific results represent social communication among scientists (and not just among scientists), their ability to produce new knowledge, and their professionalism. In this case, knowledge helps us to create not a material artificial world, but a virtual world which is able to think. For such knowledge, notions like “truth” and “objectivity” do not play a serious role. Other concepts such as “dialog”, “communication”, “interaction”, “difference” and “diversity” come to the fore. In these concepts, we can see a turn in the development of epistemological thinking.

However, social epistemology does not destroy its predecessor. Let us recall the definition of social epistemology that Steve Fuller gave in 1988:

How should the pursuit of knowledge be organized, given that under normal circumstances knowledge is pursued by many human beings, each working on a more or less well-defined body of knowledge and each equipped with roughly the same imperfect cognitive capacities, albeit with varying degree of access to one another’s activities?

It is not difficult to see that Fuller does not consider the aim of social epistemology to be obtaining objective knowledge about the external world. He remains concerned about the diversity of social conditions in which scientists work. Changes in these conditions, and features of an individual scientist such as professional competence, among others, should be taken into consideration. It is exactly these characteristics of thinking, now coming to the fore, that allow us to speak about a turn in the development of thinking. Now, the problems that exist in science and society require, for their solution, a new type of thinking. Still, we can find in empirical reality the foundation of both classical (modern) and non-classical (based on social epistemology) logic.

Let us take an example. You bathe every day in the river Volga. You bathe today, and you come to bathe tomorrow in the same river Volga. You cannot deny that the river is still the Volga. Yet, at the same time, you see numerous changes from one day to the next—ripples appearing in, and new leaves appearing on, the water’s surface, the water temperature turning slightly colder, and so on. It is possible to conclude that the river, after all, is not as it was yesterday. As Heraclitus famously observed: “You cannot step into the same river twice.”

Both conclusions are right. However, notions such as truth and objectivity have not lost their logical and historical significance; rather, they have become marginal. Proponents of social epistemology should establish communication with classical logic and not try to destroy it.

Author Information: Steve Fuller, University of Warwick


Editor’s Note: Steve Fuller’s “A Man for All Seasons, Including Ours: Thomas More as the Patron Saint of Social Media” originally appeared in ABC Religion and Ethics on 23 February 2017.


Image credit: Carolien Coenen, via flickr

November 2016 marked the five hundredth anniversary of the publication of Utopia by Thomas More in Leuven through the efforts of his friend and fellow Humanist, Desiderius Erasmus.

More is primarily remembered today for this work, which sought to show how a better society might be built by learning from the experience of other societies.

It was published shortly before he entered the service of King Henry VIII, who liked Utopia. And as the monarch notoriously struggled to assert England’s sovereignty over the Pope, More proved to be a critical supporter, eventually rising to the rank of Lord Chancellor, the King’s legal advisor.

Nevertheless, within a few years More was condemned to death for refusing to acknowledge the King’s absolute authority over the Pope. According to the Oxford English Dictionary, More introduced “integrity”—in the sense of “moral integrity” or “personal integrity”—into English while awaiting execution. Specifically, he explained his refusal to sign the “Oath of Supremacy” of the King over the Pope by his desire to preserve the integrity of his reputation.

To today’s ears this justification sounds somewhat self-serving, as if More were mainly concerned with what others would think of him. However, More lived at least two centuries before the strong modern distinction between the public and the private person was in general use.

He was getting at something else, which is likely to be of increasing relevance in our “postmodern” world, which has thrown into doubt the very idea that we should think of personal identity as a matter of self-possession in the exclusionary sense which has animated the private-public distinction. It turns out that the pre-modern More is on the side of the postmodernists.

We tend to think of “modernization” as an irreversible process, and in some important respects it seems to be. Certainly our lives have come to be organized around technology and its attendant virtues: power, efficiency, speed. However, some features of modernity—partly as an unintended consequence of its technological trajectory—appear to be reversible. One such feature is any strong sense of what is private and public—something to which any avid user of social media can intuitively testify.

More proves to be an interesting witness here because while he had much to say about conscience, he did not presume the privacy of conscience. On the contrary, he judged someone to be a person of “good conscience” if he or she listened to the advice of trusted friends, as he had taken Henry VIII to have been prior to his issuing the Oath of Supremacy. This is quite different from the existentially isolated conception of conscience that comes into play during the Protestant Reformation, on which subsequent secular appeals to conscience in the modern era have been based.

For More, conscience is a publicly accessible decision-making site, the goodness of which is to be judged in terms of whether the right principles have been applied in the right way in a particular case. The platform for this activity is an individual human being who—perhaps by dint of fate—happens to be hosting the decision. However, it is presumed that the same decision would have been reached, regardless of the hosting individual. Thus, it makes sense for the host to consult trusted friends, who could easily imagine themselves as the host.

What is lacking from More’s analysis of conscience is a sense of its creative and self-authorizing character, a vulgarized version of which features in the old Frank Sinatra standard, “My Way.” This is the sense of self-legislation which Kant defined as central to the autonomous person in the modern era. It is a legacy of Protestantism, which took much more seriously than Catholicism the idea that humans are created “in the image and likeness of God.” In effect, we are created to be creators, which is just another way of saying that we are unique among the creatures in possessing “free will.”

To be sure, whether our deeds make us worthy of this freedom is for God alone to decide. Our fellows may well approve of our actions but we—and they—may be judged otherwise in light of God’s moral bookkeeping. The modern secular mind has inherited from this Protestant sensibility an anxiety—a “fear and trembling,” to recall Kierkegaard’s echo of St. Paul—about our fate once we are dead. This sense of anxiety is entirely lacking in More, who accepts his death serenely even though he has no greater insight into what lies in store for him than the Protestant Reformers or secular moderns.

Understanding the nature of More’s serenity provides a guide for coming to terms with the emerging postmodern sense of integrity in our data-intensive, computer-mediated world. More’s personal identity was strongly if not exclusively tied to his public persona—the totality of decisions and actions that he took in the presence of others, often in consultation with them. In effect, he engaged throughout his life in what we might call a “critical crowdsourcing” of his identity. The track record of this activity amounts to his reputation, which remains in open view even after his death.

The ancient Greeks and Romans would have grasped part of More’s modus operandi, which they would understand in terms of “fame” and “honour.” However, the ancients were concerned with how others would speak about them in the future, ideally to magnify their fame and honour to mythic proportions. They were not scrupulous about documenting their acts in the sense that More and we are. On the contrary, the ancients hoped that a sufficient number of word-of-mouth iterations over time might serve to launder their acts of whatever unsavoury character that they may have originally had.

In contrast, More was interested in people knowing exactly what he decided on various occasions. On that basis they could pass judgement on his life, thereby—so he believed—vindicating his reputation. His “integrity” thus lay in his life being an open book that could be read by anyone as displaying some common narrative threads that add up to a conscientious person. This orientation accounts for the frequency with which More and his friends, especially Erasmus, testified to More’s standing as a man of good conscience in whatever he happened to say or do. They contributed to his desire to live “on the record.”

More’s sense of integrity survives on Facebook pages or Twitter feeds, whenever the account holders are sufficiently dedicated to constructing a coherent image of themselves, notwithstanding the intensity of their interaction with others. In this context, “privacy” is something quite different from how it has been understood in modernity. Moderns cherish privacy as an absolute right to refrain from declaration in order to protect their sphere of personal freedom, access to which no one—other than God, should he exist—is entitled. For their part, postmoderns interpret privacy more modestly as friendly counsel aimed at discouraging potentially self-harming declarations. This was also More’s world.

More believed that however God settled his fate, it would be based on his public track record. Unlike the Protestant Reformers, he also believed that this track record could be judged equally by humans and by God. Indeed, this is what made More a Humanist, notwithstanding his loyalty to the Pope unto death.

Yet More’s stance proved to be theologically controversial for four centuries, until the Catholic Church finally made him the patron saint of politicians in 1935. Perhaps More’s spiritual patronage should be extended to cover social media users.

Justin Cruickshank at the University of Birmingham was kind enough to alert me to Steve Fuller’s talk “Transhumanism and the Future of Capitalism”—held by The Philosophy of Technology Research Group—on 11 January 2017.

Author Information: Steve Fuller, University of Warwick,


Editor’s Note: As we near the end of an eventful 2016, the SERRC will publish reflections considering broadly the immediate future of social epistemology as an intellectual and political endeavor.


Image credit: Der Robert, via flickr

The Oxford Dictionary made ‘post-truth’ word of the year for 2016. Here is the definition, including examples of usage:

Relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief:

‘in this era of post-truth politics, it’s easy to cherry-pick data and come to whatever conclusion you desire’

‘some commentators have observed that we are living in a post-truth age’

In STS terms, this definition is clearly ‘asymmetrical’ because it is pejorative, not neutral. It is a post-truth definition of ‘post-truth’. It is how those dominant in the epistemic power game want their opponents to be seen. In my recent symmetrical exposition of ‘post-truth’ for the Guardian, I suggested that the Oxford Dictionary’s definition speaks the lion’s truth, which tries to create as much moral and epistemic distance as possible from whatever facsimile of the truth the fox might be peddling. Thus, the fox—but not the lion—is portrayed as distorting the facts and appealing to emotion. Yet, the lion’s truth appears to the fox as simplistically straightforward and heavy-handed, often delivered in a fit of righteous indignation. Indeed, this classic portrayal of the lion/fox divide may better apply to the history of science than the history of politics.

For better or worse, STS recoiled from the post-truth worldview in 2004, when Bruno Latour famously waved the white flag in the Science Wars, which had been raging for nearly fifteen years—starting with the post-Cold War reassessment of public funding for science. Latour’s terms of surrender were telling. After all, he was the one who extended the symmetry principle from the Edinburgh School’s treatment of all human factors—regardless of whether we now deem them to have been ‘good’ or ‘bad’—to include all non-human factors as well. However, Latour hadn’t anticipated that symmetry applied not only to the range of objects studied but also to the range of agents studying them.

Somewhat naively, Latour seemed to think that a universalization of the symmetry principle would make STS the central node in a universal network of those studying ‘technoscience’. Instead, everyone started to apply the symmetry principle for themselves, which led to rather cross-cutting networks and unexpected effects, especially once the principle started to be wielded by creationists, climate sceptics and other candidates for an epistemic ‘basket of deplorables’. And by turning symmetry to their advantage, the deplorables got results, at least insofar as the balance of power has gradually tilted more in their favour—again, for better or worse.

My own view has always been that a post-truth world is the inevitable outcome of greater epistemic democracy. In other words, once the instruments of knowledge production are made generally available—and they have been shown to work—they will end up working for anyone with access to them. This in turn will remove the relatively esoteric and hierarchical basis on which knowledge has traditionally acted as a force for stability and often domination. The locus classicus is the Republic, in which Plato promotes what in the Middle Ages was called a ‘double truth’ doctrine – one for the elites (which allows them to rule) and one for the masses (which allows them to be ruled).

Of course, the cost of making the post-truth character of knowledge so visible is that it also exposes power dynamics that may become more intense and ultimately destructive of the social order. This was certainly Plato’s take on democracy’s endgame. In the early modern period, this first became apparent with the Wars of Religion that almost immediately broke out in Europe once the Bible was made readily available. (Francis Bacon and others saw in the scientific method a means to contain any such future conflict by establishing a new epistemic mode of domination.) While it is possible to defer democracy by trying to deflect attention from the naked power dynamics, as Latour does, with fancy metaphysical diversions and occasional outbursts in high dudgeon, those are leonine tactics that only serve to repress STS’s foxy roots. In 2017, we should finally embrace our responsibility for the post-truth world and call forth our vulpine spirit to do something unexpectedly creative with it.

The hidden truth of Aude sapere (Kant’s ‘Dare to know’) is Audet adipiscitur (Thucydides’ ‘Whoever dares, wins’).


Image credit: Mike Licht, via flickr

Editor’s Note: The following is a slightly abridged version of Steve Fuller’s article “Science has always been a bit ‘post-truth’” that appeared in The Guardian on 15 December 2016.

Even today, more than fifty years after its first edition, Thomas Kuhn’s The Structure of Scientific Revolutions remains the first port of call to learn about the history, philosophy or sociology of science. This is the book famous for talking about science as governed by ‘paradigms’ until overtaken by ‘revolutions’.

Kuhn argued that the way that both scientists and the general public need to understand the history of science is ‘Orwellian’. He was alluding to 1984, in which the protagonist’s job is to rewrite newspapers from the past to make it seem as though the government’s current policy is where it had been heading all along. In this perpetually airbrushed version of history, the public never sees the U-turns, switches of allegiance and errors of judgement that might cause them to question the state’s progressive narrative. Confidence in the status quo is maintained and new recruits are inspired to follow its lead. Kuhn claimed that what applies to totalitarian 1984 also applies to science united under the spell of a paradigm.

What makes Kuhn’s account of science ‘post-truth’ is that truth is no longer the arbiter of legitimate power but rather the mask of legitimacy that is worn by everyone in pursuit of power. Truth is just one more – albeit perhaps the most important – resource in a power game without end. In this respect, science differs from politics only in that the masks of its players rarely drop.

The explanation for what happens behind the masks lies in the work of the Italian political economist Vilfredo Pareto (1848-1923), devotee of Machiavelli, admired by Mussolini and one of sociology’s forgotten founders. Kuhn spent his formative years at Harvard in the late 1930s when the local kingmaker, biochemist Lawrence Henderson, not only taught the first history of science courses but also convened an interdisciplinary ‘Pareto Circle’ to get the university’s rising stars acquainted with the person he regarded as Marx’s only true rival.

For Pareto, what passes for social order is the result of the interplay of two sorts of elites, which he called, following Machiavelli, ‘lions’ and ‘foxes’. The lions acquire legitimacy from tradition, which in science is based on expertise rather than lineage or custom. Yet, like these earlier forms of legitimacy, expertise derives its authority from the cumulative weight of intergenerational experience. This is exactly what Kuhn meant by a ‘paradigm’ in science – a set of conventions by which knowledge builds in an orderly fashion to complete a certain world-view established by a founding figure – say, Newton or Darwin. Each new piece of knowledge is anointed by a process of ‘peer review’.

As in 1984, the lions normally dictate the historical narrative. But on the cutting room floor lie the activities of the other set of elites, the foxes. In today’s politics of science, they are known by a variety of names, ranging from ‘mavericks’ to ‘social constructivists’ to ‘pseudoscientists’. Foxes are characterised by dissent and unrest, thriving in a world of openness and opportunity.

Author Information: Eugene Loginov, Moscow State University,

Loginov, Eugene. “Steve Fuller on Proofs for God’s Existence: An Interview.” Social Epistemology Review and Reply Collective 5, no. 12 (2016): 1-3.

The PDF of the article gives specific page numbers. Shortlink:

Editor’s Note: A philosophy student at Moscow State University, Eugene Loginov, recently interviewed Steve Fuller on his views about arguments concerning the existence of God. The interview will be published in Russian in the philosophy magazine, Date-Palm Compote. Below are Loginov’s questions and Fuller’s responses.


Image credit: Tom Davidson, via flickr

Eugene Loginov (EL): What is your position regarding the general idea of making arguments for the existence of God? Do you think it to be valid at all? Why?

Steve Fuller (SF): I think that arguments for the existence of God are among the most psychologically revealing philosophical projects that one can engage in. This is especially true of ‘God’ in the Abrahamic religions, in whose ‘image and likeness’ humans are supposedly created. The sort of arguments that people find persuasive for the existence of God says something deep about the nature of their own connection with the world. For example, the more secure we feel about our place in the cosmos, the more persuasive the ontological argument will seem, since it is based on faith in the workings of our own minds. I identify this orientation with a broadly ‘Augustinian’ approach to Christianity, which stresses the overlap between human and divine being in terms of access to the logos: God creates by the Word and we can understand through the Word.

(EL): If you tried to prove God’s existence (or to make a claim against its existence), what definition of the notion of “God” would you use? Do you think that the classic definition of “God” as “the all-good, omniscient and omnipotent creator of the world” is still the suitable one?

(SF): I would go with the idea of God that I find in Duns Scotus and Leibniz, namely, that God is the transcendental optimizer of all the virtues. In other words, God is not merely all good, all powerful, etc. After all, any one of those qualities taken to the extreme may be incompatible with the others—and may turn out to result in more bad than good. (Think of what might happen to humans if God were a ruthlessly efficient superintelligent computer.) It follows that God contains all the virtues in a way that enables them to cohere together in his person to maximum overall positive effect—a convergence to a ‘divine singularity’, if you will, or what Pierre Teilhard de Chardin called the ‘Omega Point’. However, it may not be obvious what such a transcendental optimizer would look like, since such a God would be constituted in a way which appears—at least from a human perspective—to involve trade-offs between the virtues.

(EL): Which of the various arguments for God’s existence (or claims against its, or His, existence) do you regard as the most valid and/or the most interesting one?

(SF): My answer to [your second question] (see above) is a version of the ontological argument, which I believe is the most intellectually interesting and challenging argument for God’s existence—because it basically makes our own existence (at least as thinking beings) co-dependent with God’s existence. Philosophers tend to focus on whether the ontological argument is valid, when in fact they should pay attention to the consequences if it turns out to be invalid. More than simply the existence of God is at stake. The constitution of our own minds is also on trial here. Partly influenced by Kant, Darwin believed that humans were unique as a species—due to our overdeveloped cerebral cortex—in taking our own ideas seriously even if they lack any direct relation to empirical reality. He believed that this liability (at least from an evolutionary standpoint) resulted in brutal intra-species wars and might ultimately lead to the extinction of the human species altogether. For Darwin, ‘God’ was clearly one such idea, especially when defended by the ontological argument.

(EL): What are your thoughts regarding the significance of demonstrations of God’s existence (or claims against its, or His, existence) in the history of philosophy, science, religion and culture in general?

(SF): The best way to answer the question is to consider what happens when arguments for the existence of God are not taken seriously. The first thing that happens is that belief in God goes underground. In other words, God becomes something whose existence is implicitly affirmed or denied but does not make a material difference to other propositions that one might believe or defend. The second thing that happens is that the ‘hole’ in public discourse formerly filled by God talk becomes colonized by, on the one hand, humans-as-gods and, on the other hand, an outright denial of the order and goodness of reality that a rational belief in God was supposed to underwrite. So there are serious value implications for denying the seriousness of arguments for God’s existence.

(EL): There is a widely held opinion that Kant’s critique of the arguments he was aware of was so devastating, that the very question of making arguments for the existence of God ceased to be philosophically relevant. Do you agree? Why?

(SF): As a matter of historical fact, Kant dealt a serious blow to formal arguments for the existence of God, since he basically diagnosed all of them as pathologies of reason of one sort or another. As I said in answer to [your third question] (see above), this opened the door to Darwin’s diminished view of human cognitive aspirations. However, it is worth pointing out that much of 19th century philosophy of science—I think here especially of William Whewell and Charles Sanders Peirce—stressed the ‘pragmatic’ side of Kant’s position, which accepted the motivational role that God’s existence played in driving science towards a unified worldview and conferring on humans a sense of purpose more generally.

I would also observe that Kant seems to have thought that any attempt to prove the existence of God must start by imagining ourselves to be radically different from God, and so the point of the ‘proof’ would be to gain epistemic access to this ‘other’ being called ‘God’. However, the Cartesian tradition (including Malebranche and Leibniz) does not presume that sense of radical difference. In other words, these rationalists took rather literally the idea that we are already equipped to access the ‘Mind of God’. This view effectively modernizes Augustine, and later philosophers further secularized it as the ‘a priori’ and ‘innate ideas’. However, the challenge—already recognized by Augustine—is how to translate God’s infinite and transcendental status into our necessarily finite and temporal understanding of things.

Perhaps the most concrete expression of this challenge occurs over the ‘problem of evil’, the subject matter of theodicy, which queries God’s apparent tolerance or indifference to the world’s massive harms and imperfections. It was in this context that arguments for God’s existence based on ‘intelligent design’ (i.e. a deeper design than would appear at first glance) were developed in the 18th century, culminating in the work of William Paley, whose natural theology famously drove Darwin away from a belief in God.

(EL): What text (or texts) do you consider the most important for understanding the issues in question?

(SF): Interestingly, I don’t think there is a single book that really discusses classic arguments for the existence of God in all their historical, philosophical and sociological richness. However, I recommend the works of Peter Harrison, a contemporary historian and philosopher of science who shows repeatedly how key doctrines relating to a belief in the existence of God—such as the need for a personal encounter with the Bible and the doctrine of Original Sin—operated as what Imre Lakatos would have called a ‘positive heuristic’ in facilitating the inquiring mind during the 17th century Scientific Revolution.

Author Information: Steve Fuller, University of Warwick,




Image credit: Lorraine Murphy, via flickr

Three facts are striking about the US presidential election:

1. Hillary Clinton won the popular vote, though she lost the Electoral College, which decides the presidency.

2. Voter turnout was much lower than initially expected, and Black voters in particular—who overwhelmingly backed Clinton—came out in smaller numbers.

3. The pollsters got it wrong, and they especially got it wrong in places whose residents are most unlike the pollsters themselves. Those people overwhelmingly voted for Trump. They’ve been called ‘silent voters’ in this election. Richard Nixon, following a similarly surprising victory in 1968, famously called them the ‘silent majority’.

This doesn’t look to me like populism but a loss of faith in democracy. And here perhaps the most brilliant move of the Trump campaign was to declare that the vote was rigged before most of the votes had even been cast. This effectively discouraged the people who had most relied on the ballot box as their means to salvation from casting their vote. It also added to the cynical ‘politics as usual’ attitude that Trump had sown by portraying Clinton as standing for everything that’s wrong with the federal government. However, the people who supported Trump weren’t necessarily great believers in democracy, given their high tolerance for Trump’s anti-democratic statements (even if eventually modified or reversed). What Trump’s supporters liked about their man was his resolve—and his seeming ability—to get things done, by whatever means necessary.

The moral of the 2016 election then is that democracy itself—especially in the complex representational form that it takes in the United States—is the big loser. Like Brexit, the Trump phenomenon was made possible by a rage that doesn’t add up to a positive plan of action. But much more explicitly than Brexit, which actually was brought about by an opening up of democratic processes (through the use of referendum), the 2016 US presidential election was a vote against the democratic system itself—both in terms of who voted and who didn’t vote.

The pollsters got all this wrong perhaps because they mistakenly presumed that the voters shared their own and the political class’s belief that their problems can in principle be solved at the ballot box. It will be interesting to see just how much Trump is tempted to fiddle with the US Constitution. Watch out especially for his Supreme Court nominees, who are capable of doing the most long-term damage to the system. In any case, it should give those of us who still believe in democratic processes pause about whether much is to be gained by staging mass protests saying ‘Trump is not my president!’ and promising endless resistance to whatever Trump does. It seems to me that this will only reinforce the view of Trump supporters that democracy is a broken system and requires still more radical remedies. But it is not at all clear where true believers in democracy go from there.