
Author Information: Ben Ross, University of North Texas, benjamin.ross@my.unt.edu

Ross, Ben. “Between Poison and Remedy: Transhumanism as Pharmakon.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 23-26.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3zU

Image credit: Jennifer Boyer, via flickr

As a Millennial, I have the luxury of being able to ask in all seriousness, “Will I be the first generation safe from death by old age?” While the prospects of answering in the affirmative may be dim, they are not preposterous. The idea that such a question can even be asked with sincerity, however, testifies to transhumanism’s reach into the cultural imagination.

But what is transhumanism? Until now, we have failed to answer in the appropriate way, remaining content to describe its possible technological manifestations or trace its historical development. Therefore, I would like to propose an ontology of transhumanism. When philosophers speak of ontologies, they are asking a basic question about the being of a thing—what is its essence? I suggest that transhumanism is best understood as a pharmakon.

Transhumanism as a Pharmakon

Derrida points out in his essay “Plato’s Pharmacy” that while pharmakon can be translated as “drug,” it means both “remedy” and “poison.” It is an ambiguous in-between, containing opposite definitions that can both be true depending on the context. As Michael Rinella notes, hemlock, most famous for being the poison that killed Socrates, when taken in smaller doses induces “delirium and excitement on the one hand,” yet it can be “a powerful sedative on the other” (160). Rinella goes on to say that the term has more than two meanings. While the word was used to denote a drug, Plato “used pharmakon to mean a host of other things, such as pictorial color, painter’s pigment, cosmetic application, perfume, magical talisman, and recreational intoxicant.” Nevertheless, Rinella makes the crucial remark that “One pharmakon might be prescribed as a remedy for another pharmakon, in an attempt to restore to its previous state an identity effaced when intoxicant turned toxic” (237-238). It is precisely this “two-in-one” aspect of the application of a pharmakon that reveals the essence of transhumanism: it can be both poison and remedy.

To further this analysis, consider “super longevity,” the subset of transhumanism concerned with avoiding death. As Harari writes in Homo Deus, “Modern science and modern culture…don’t think of death as a metaphysical mystery…for modern people death is a technical problem that we can and should solve.” After all, he declares, “Humans always die due to some technical glitch” (22). These technical glitches, e.g., when one’s heart ceases to pump blood, are the bane of researchers like Aubrey de Grey, and fixing them forms the focus of his “Strategies for Engineered Negligible Senescence.” There is nothing in de Grey’s approach to suggest that there is any human technical problem that does not potentially have a human technical solution. De Grey’s techno-optimism represents the “remedy-aspect” of transhumanism as a view in which any problems—even those caused by technology—can be solved by technology.

As a “remedy,” transhumanism is based on a faith in technological progress, despite such progress being uneven, with beneficial effects that are not immediately apparent. For example, even if de Grey’s research does not result in the “cure” for death, his insight into anti-aging techniques and the resulting applications still have the potential to improve a person’s quality of life. This reflects Max More’s definition of transhumanism as “The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities” (3).

Importantly, More’s definition emphasizes transcendent enhancement, and it is this desire to be “upgraded” which distinguishes transhumanism. An illustration of the emergence of the upgrade mentality can be seen in the history of plastic surgery. Harari writes that while modern plastic surgery was born during the First World War as a treatment to repair facial injuries, upon the war’s end, surgeons found that the same techniques could be applied not to damaged noses, but to “ugly” ones, and “though plastic surgery continued to help the sick and wounded…it devoted increasing attention to upgrading the healthy” (52). Through its secondary use as an elective surgery of enhancement rather than exclusively as a technique for healing, one can see an example of the evolution of transhumanist philosophy out of medical philosophy—if the technology exists to change one’s face (and one has the money for it), a person should be morphologically free to take advantage of the enhancing capabilities of such a procedure.

However, to take a view of a person only as “waiting to be upgraded” marks the genesis of the “poison-aspect” of transhumanism as a pharmakon. One need not look further than Martin Heidegger to find an account of this danger. In his 1954 essay, “The Question Concerning Technology,” Heidegger suggests that the threat of technology is Ge-stell, or “enframing,” the way in which technology reveals the world to us primarily as a stock of resources to be manipulated. For him, the “threat” is not a technical problem for which there is a technical solution, but rather an ontological condition from which we need saving—a condition which prevents us from seeing the world in any other way. Transhumanism in its “poison mode,” then, is the technological understanding of being—a singular way of viewing the world as a resource waiting to be enhanced. And what is problematic is that this way of revealing the world comes to dominate all others. In other words, the technological understanding of being comes to be the understanding of being.

However, a careful reading of Heidegger’s essay suggests that it is not a techno-pessimist’s manifesto. Technology has pearls concealed within its perils. Heidegger suggests as much when he quotes Hölderlin, “But where danger is, grows the saving power also” (333). Heidegger is asking the reader to avoid either/or dichotomous thinking about the essence of technology as something that is either dangerous or helpful, and instead to see it as a two-in-one. He goes to great lengths to point out that the “saving power” of technology, which is to say, of transhumanism, is that its essence is ambiguous—it is a pharmakon. Thus, the self-same instrumentalization that threatens to narrow our understanding of being also has the power to save us and force a consideration of new ways of being, and most importantly for Heidegger, new meanings of being.

Curing Death?

A transhumanist, and therefore pharmacological, take on Heidegger’s admonishment might go as follows: In the future it is possible that a “cure” for death will threaten death’s role as a source of meaning in society—especially as it relates to a Christian heaven in which one yearns to spend an eternity, sans mortal coil. While the arrival of a death-cure will prove to be “poison” for a traditional understanding of Christianity, that same techno-humanistic artifact will simultaneously function as a “remedy,” spurring a Nietzschean transvaluation of values—that is, such a “cure” will arrive as a technological Zarathustra, forcing a confrontation with meaning, bringing news that “the human being is something that must be overcome” and urging us to ask anew, “what have you done to overcome him?” At the very least, as Steve Fuller recently pointed out in an interview, “transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection.” For those sympathetic to Leon Kass’ brand of repugnance, such suggestions are poison, and yet for a transhumanist they are a remedy to the glitch called death and the ways in which we relate to our finitude.

A more mundane example of the simultaneous danger and saving power of technology might be the much-hyped Google Glass—or in more transhuman terms, having Google Glass implanted into one’s eye sockets. While this procedure may conceal ways of understanding the spaces and people surrounding the wearer other than through the medium of the lenses, the lenses simultaneously have the power to reveal entirely new layers of information about the world and connect the wearer to the environment and to others in new ways.

With these examples it is perhaps becoming clear that by re-casting the essence of transhumanism as a pharmakon instead of an either/or dichotomy of purely techno-optimistic panacea or purely techno-pessimistic miasma, a more inclusive picture of transhumanist ontology emerges. Transhumanism can be both—cause and cure, danger and savior, threat and opportunity. Max More’s analysis, too, has a pharmacological flavor in that transhumanism, though committed to improving the human condition, has no illusions that, “The same powerful technologies that can transform human nature for the better could also be used in ways that, intentionally or unintentionally, cause direct damage or more subtly undermine our lives” (4).

Perhaps, then, More might agree that as a pharmakon, transhumanism is a Schrödinger’s cat always in a state of superposition—both alive and dead in the box. In the Copenhagen interpretation, a system stops being in a superposition of states and becomes either one or the other when an observation takes place. Transhumanism, too, is observer-dependent. For Ray Kurzweil, looking in the box, the cat is always alive with the techno-optimistic possibility of download into silicon and the singularity is near. For Ted Kaczynski, the cat is always dead, and it is worth killing in order to prevent its resurrection. Therefore, what the foregoing analysis suggests is that transhumanism is a drug—it is both remedy and poison—with the power to cure or the power to kill depending on who takes it. If the essence of transhumanism is elusive, it is precisely because it is a pharmakon cutting across categories ordinarily seen as mutually exclusive, forcing an ontological quest to conceptualize the in-between.

References

Derrida, Jacques. “Plato’s Pharmacy.” In Dissemination, translated by Barbara Johnson, 63-171. Chicago: University of Chicago Press, 1981.

Fuller, Steve. “Twelve Questions on Transhumanism’s Place in the Western Philosophical Tradition.” Social Epistemology Review and Reply Collective, 19 April 2017. http://wp.me/p1Bfg0-3yl.

Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow. New York: HarperCollins, 2017.

Heidegger, Martin. “The Question Concerning Technology.” In Basic Writings, edited by David Farrell Krell. New York: Harper & Row, 1977.

More, Max. “The Philosophy of Transhumanism.” In The Transhumanist Reader, edited by Max More and Natasha Vita-More, 3-17. Malden, MA: Wiley-Blackwell, 2013.

Rinella, Michael A. Pharmakon: Plato, Drug Culture, and Identity in Ancient Athens. Lanham, MD: Lexington Books, 2010.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “Counterfactuals in the White House: A Glimpse into Our Post-Truth Times.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 1-3.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3z1

Image credit: OZinOH, via flickr

May Day 2017 was filled with reporting and debating over a set of comments that US President Trump made while visiting Andrew Jackson’s mansion, the ‘Hermitage’, now a tourist attraction in Nashville, Tennessee. Trump said that had Jackson been around at the time, he could have averted the US Civil War. Since Jackson had died about fifteen years before the war started, Trump was clearly making a counterfactual claim. However, it is an interesting claim—not least for its responses, which were fast and furious. They speak to the nature of our times. Let me start with the academic response and then move to how I think about the matter.

Jim Grossman of the American Historical Association spoke for all by claiming that Trump ‘is starting from the wrong premise’. Presumably, Grossman means that slavery was so bad that a war over it was inevitable. However well he meant this comment, it feeds into the anti-expert attitude of our post-truth era. Grossman seems to disallow Trump from imagining that preserving the American union might have been more important than ending slavery—even though that was exactly how the issue was framed to most Americans 150 years ago. Scholarship is of course mainly about explaining why things happened the way they did. However, there is a temptation to conclude that things necessarily had to happen that way. Today’s post-truth culture attempts to curb this tendency. In any case, once the counterfactual door is open to other possible futures, historical expertise becomes more contestable, perhaps even democratised. The result may be that even when non-experts reach the same conclusion as the experts, they may do so for importantly different reasons.

Who was Andrew Jackson?

Andrew Jackson is normally regarded as one of the greatest US presidents, whose face is regularly seen on the twenty-dollar banknote. He was the seventh president and the first one who was truly ‘self-made’ in the sense that he was not well educated, let alone oriented towards Europe in his tastes, as had been his six predecessors. It would not be unfair to say that he was the first President who saw a clear difference between being American and being European. In this respect, his self-understanding was rather like that of the heroes of Latin American independence. He was also given to an impulsive manner of public speech, not so different from the current occupant of the Oval Office.

Jackson volunteered at age thirteen to fight in the War of Independence from Britain, the first of many times when he was ready to fight for his emerging nation. Over the past fifty years much attention has been paid to his decimation of Native American populations at various points in his career, both military and presidential, as well as his support for slavery. (Howard Zinn was largely responsible, at least at a popular level, for this recent shift in focus.) To make a long and complicated story short, Jackson was rather consistent in acting in ways that served to consolidate American national identity, even if that meant sacrificing the interests of various groups at various times—groups that arguably never recovered from the losses inflicted on them.

Perhaps Jackson’s most lasting positive legacy has been the current two-party—Democratic/Republican—political structure. Each party cuts across class lines and geographical regions. This achievement is now easy to underestimate—as the Democratic Party is now ruing. The US founding fathers were polarized about the direction that the fledgling nation should take, precisely along these divides. The struggles began in Washington’s first administration between his Treasury Secretary, Alexander Hamilton, and his Secretary of State, Thomas Jefferson—and they persisted. Both Hamilton and Jefferson oriented themselves to Europe, Hamilton more in terms of what to imitate and Jefferson in terms of what to avoid. Jackson effectively performed a Gestalt switch, in which Europe was no longer the frame of reference for defining American domestic and foreign policy.

Enter Trump

Now enter Donald Trump, who says Jackson could have averted the Civil War, which by all counts was one of the bloodiest in US history, with more than 600,000 lives lost. Jackson was clearly a unionist but also clearly a slaveholder. So one imagines that Jackson would have preserved the union by allowing slaveholding, perhaps in terms of some version of the ‘states’ rights’ or ‘popular sovereignty’ doctrine, which gives states discretion over how they deal with economic matters. It’s not unreasonable to think that Jackson could have pulled it off, especially because the economic arguments for allowing slavery were stronger back then than is now normally remembered.

The Nobel Prize-winning economic historian Robert Fogel explored this point quite thoroughly more than forty years ago in his controversial Time on the Cross. It is not a perfect work, and its academic criticism is quite instructive about how one might better explore a counterfactual world in which slavery would have persisted in the US until it was no longer economically viable. Unfortunately, the politically sensitive nature of the book’s content has discouraged any follow-up. When I first read Fogel, I concluded that over time the price of slaves would come to approximate that of free labour considered over a worker’s lifetime. In other words, a slave economy would evolve into a capitalist economy without violence in the interim. Slaveholders would simply respond to changing market conditions. So, the moral question is whether it would have made sense to extend slavery over a few years before it would end up merging with what the capitalist world took to be an acceptable way of being, namely, wage labour. Fogel added ballast to his argument by observing that slaves tended to live longer and healthier lives than freed Blacks.

Moreover, Fogel’s counterfactual was not fanciful. Some version of the states’ rights doctrine was the dominant sentiment in the US prior to the Civil War. However, there were many different versions of the doctrine which could not rally around a common spokesperson. This allowed the clear unitary voice for abolition emanating from the Christian dissenter community in the Northern states to exert enormous force, not least on the sympathetic and ambitious country lawyer, Abraham Lincoln, who became their somewhat unlikely champion. Thus, 1860 saw a Republican Party united around Lincoln fend off three opponents (two rival Democrats and a Constitutional Unionist) in the general election.

None of this is to deny that Lincoln was right in what he did. I would have acted similarly. Moreover, he probably did not anticipate just how bloody the Civil War would turn out to be—and the lasting scars it would leave on the American psyche. But the question on the table is not whether the Civil War was a fair price to pay to end slavery. Rather, the question is whether the Civil War could have been avoided—and, more to the point of Trump’s claim, whether Jackson would have been the man to do it. The answer is perhaps yes. The price would have been that slavery would have been extended for a certain period before it became economically unviable for the slaveholders.

It is worth observing that Fogel’s main target seemed to be Marxists who argued that slavery made no economic sense and that it persisted in the US only because of racist ideology. Fogel’s response was that slaveholders probably were racist, but such a de facto racist economic regime would not have persisted as long as it did, had both sides not benefitted from the arrangement. In other words, the success of the anti-slavery campaign was largely about the triumph of aspirational ideas over actual economic conditions. If anything, its success testifies to the level of risk that abolitionists were willing to assume on behalf of American society for the emancipation of slaves. Alexis de Tocqueville was only the most famous of the foreign commentators on the US to notice this at the time. Abolitionists were the proactionaries of their day with regard to risk. And this is how we should honour them now.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Steve Fuller is Auguste Comte Professor of social epistemology at the University of Warwick. His latest book is The Academic Caesar: University Leadership is Hard (Sage).

Shortlink: http://wp.me/p1Bfg0-3yV

Note: The following piece appeared under the title of ‘Free speech is not just for academics’ in the 27 April 2017 issue of Times Higher Education and is reprinted here with permission from the publisher.

Image credit: barnyz, via flickr

Is free speech an academic value? We might think that the self-evident answer is yes. Isn’t that why “no-platforming” controversial figures usually leaves the campus involved with egg on its face, amid scathing headlines about political correctness gone mad?

However, a completely different argument, one bearing no taint of political correctness, can be made against universities’ need to defend free speech. It is what I call the “Little Academia” argument. It plays on the academic impulse to retreat to a parochial sense of self-interest in the face of external pressures.

The master of this argument for the last 30 years has been Stanley Fish, the American postmodern literary critic. Fish became notorious in the 1980s for arguing that a text means whatever its community of readers thinks it means. This seemed wildly radical, but it quickly became clear – at least to more discerning readers – that Fish’s communities were gated.

This seems to be Fish’s view of the university more generally. In a recent article in the US Chronicle of Higher Education, “Free Speech Is Not an Academic Value”, written in response to the student protests at Middlebury College against the presence of Charles Murray, a political economist who takes race seriously as a variable in assessing public policies, Fish criticised the college’s administrators for thinking of themselves as “free-speech champions”. This, he said, represented a failure to observe the distinction between students’ curricular and extracurricular activities. Regarding the latter, he said, administrators’ correct role was merely as “managers of crowd control”.

In other words, a university is a gated community designed to protect the freedom only of those who wish to pursue discipline-based inquiries: namely, professional academics. Students only benefit when they behave as apprentice professional academics. They are generously permitted to organise extracurricular activities, but the university’s official attitude towards these is neutral, as long as they do not disrupt the core business of the institution.

The basic problem with this picture is that it supposes that academic freedom is a more restricted case of generalised free expression. The undertow of Fish’s argument is that students are potentially freer to express themselves outside of campus.

To be sure, this may be how things look to Fish, who hails from a country that already had a Bill of Rights protecting free speech roughly a century before the concept of academic freedom was imported to unionise academics in the face of aggressive university governing boards. However, when Wilhelm von Humboldt invented the concept of academic freedom in early 19th century Germany, it was in a country that lacked generalised free expression. For him, the university was the crucible in which free expression might be forged as a general right in society. Successive generations engaged in the “freedom to teach” and the “freedom to learn”, the two becoming of equal and reciprocal importance.

On this view, freedom is the ultimate transferable skill embodied by the education process. The ideal received its definitive modern formulation in the sociologist Max Weber’s famous 1917 lecture to new graduate students, “Science as a Vocation”.

What is most striking about it to modern ears is his stress on the need for teachers to make space for learners in their classroom practice. This means resisting the temptation to impose their authority, which may only serve to disarm the student of any choice in what to believe. Teachers can declare and justify their own choice, but must also identify the scope for reasonable divergence.

After all, if academic research is doing its job, even the most seemingly settled fact may well be overturned in the fullness of time. Students need to be provided with some sense of how that might happen as part of their education to be free.

Being open about the pressure points in the orthodoxy is complicated because, in today’s academia, certain heterodoxies can turn into their own micro-orthodoxies through dedicated degree programmes and journals. These have become the lightning rods for debates about political correctness.

Nevertheless, the bottom line is clear. Fish is wrong. Academic freedom is not just for professional academics but for students as well. The honourable tradition of independent student reading groups and speaker programmes already testifies to this. And in some contexts these can count towards satisfying formal degree requirements. Contra Little Academia, the “extra” in extracurricular should be read as intended to enhance a curriculum that academics themselves admit is neither complete nor perfect.

Of course, students may not handle extracurricular events well. But that is not about some non-academic thing called ‘crowd control’. It is simply an expression of the growth pains of students learning to be free.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Steve Fuller holds the Auguste Comte Chair in Social Epistemology at the University of Warwick. He is the author of more than twenty books, the next of which is Post-Truth: Knowledge as a Power Game (Anthem).

Shortlink: http://wp.me/p1Bfg0-3yI

Note: This article originally appeared in the EASST Review 36(1) April 2017 and is republished below with the permission of the editors.

Image credit: Hans Luthart, via flickr

STS talks the talk without ever quite walking the walk. Case in point: post-truth, the offspring that the field has always been trying to disown, not least in the latest editorial of Social Studies of Science (Sismondo 2017). Yet STS can be fairly credited with having both routinized in its own research practice and set loose on the general public—if not outright invented—at least four common post-truth tropes:

1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.

2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.

3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.

4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties.

What is perhaps most puzzling from a strictly epistemological standpoint is that STS recoils from these tropes whenever such politically undesirable elements as climate change deniers or creationists appropriate them effectively for their own purposes. Normally, that would be considered ‘independent corroboration’ of the tropes’ validity, as these undesirables demonstrate that one need not be a politically correct STS practitioner to wield the tropes effectively. It is almost as if STS practitioners have forgotten the difference between the contexts of discovery and justification in the philosophy of science. The undesirables are actually helping STS by showing the robustness of its core insights as people who otherwise overlap little with the normative orientation of most STS practitioners turn them to what they regard as good effect (Fuller 2016).

Of course, STSers are free to contest any individual or group that they find politically undesirable—but on political, not methodological grounds. We should not be quick to fault undesirables for ‘misusing’ our insights, let alone apologize for, self-censor or otherwise restrict our own application of these insights, which lay at the heart of Latour’s (2004) notorious mea culpa. On the contrary, we should defer to Oscar Wilde and admit that imitation is the sincerest form of flattery. STS has enabled the undesirables to raise their game, and if STSers are too timid to function as partisans in their own right, they could try to help the desirables raise their game in response.

Take the ongoing debates surrounding the teaching of evolution in the US. The fact that intelligent design theorists are not as easily defeated on scientific grounds as young earth creationists means that when their Darwinist opponents leverage their epistemic authority on the former as if they were the latter, the politics of the situation becomes naked. Unlike previous creationist cases, the judgement in Kitzmiller v. Dover Area School District (in which I served as an expert witness for the defence) dispensed with the niceties of the philosophy of science and resorted to the brute sociological fact that most evolutionists do not consider intelligent design theory science. That was enough for the Darwinists to win the battle, but will it win them the war? Those who have followed the ‘evolution’ of creationism into intelligent design might conclude that Darwinists act in bad faith by not taking seriously that intelligent design theorists are trying to play by the Darwinists’ rules. Indeed, more than ten years after Kitzmiller, there is little evidence that Americans are any friendlier to Darwin than they were before the trial. And with Trump in the White House…?

Thus, I find it strange that in his editorial on post-truth, Sismondo extols the virtues of someone who seems completely at odds with the STS sensibility, namely, Naomi Oreskes, the Harvard science historian turned scientific establishment publicist. A signature trope of her work is the pronounced asymmetry between the natural emergence of a scientific consensus and the artificial attempts to create scientific controversy (e.g. Oreskes and Conway 2011). It is precisely this ‘no science before its time’ sensibility that STS has been spending the last half-century trying to oppose. Even if Oreskes’ political preferences tick all the right boxes from the standpoint of most STSers, she has methodologically cheated by presuming that the ‘truth’ of some matter of public concern most likely lies with what most scientific experts think at a given time. Indeed, Sismondo’s passive-aggressive agonizing comes from his having to reconcile his intuitive agreement with Oreskes and the contrary thrust of most STS research.

This example speaks to the larger issue addressed by post-truth, namely, distrust in expertise, to which STS has undoubtedly contributed by circumscribing the prerogatives of expertise. Sismondo fails to see that even politically mild-mannered STSers like Harry Collins and Sheila Jasanoff do this in their work. Collins is mainly interested in expertise as a form of knowledge that other experts recognize as that form of knowledge, while Jasanoff is clear that the price that experts pay for providing trusted input to policy is that they do not engage in imperial overreach. Neither position approximates the much more authoritative role that Oreskes would like to see scientific expertise play in policy making. From an STS standpoint, those who share Oreskes’ normative orientation to expertise should consider how to improve science’s public relations, including proposals for how scientists might be socially and materially bound to the outcomes of policy decisions taken on the basis of their advice.

When I say that STS has forced both established and less than established scientists to ‘raise their game’, I am alluding to what may turn out to be STS’s most lasting contribution to the general intellectual landscape, namely, to think about science as literally a game—perhaps the biggest game in town. Consider football, where matches typically take place between teams with divergent resources and track records. Of course, the team with the better resources and track record is favoured to win, but sometimes it loses and that lone event can destabilise the team’s confidence, resulting in further losses and even defections. Each match is considered a free space where for ninety minutes the two teams are presumed to be equal, notwithstanding their vastly different histories. Francis Bacon’s ideal of the ‘crucial experiment’, so eagerly adopted by Karl Popper, relates to this sensibility as definitive of the scientific attitude. And STS’s ‘social constructivism’ simply generalizes this attitude from the lab to the world. Were STS to embrace its own sensibility much more wholeheartedly, it would finally walk the walk.

References

Fuller, Steve. ‘Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.’ Social Epistemology Review and Reply Collective, December 2016: http://wp.me/p1Bfg0-3nx.

Latour, Bruno. ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.’ Critical Inquiry 30, no. 2 (2004): 225–248.

Oreskes, Naomi and Erik M. Conway. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury, 2011.

Sismondo, Sergio. ‘Post-Truth?’ Social Studies of Science 47, no. 1 (2017): 3-6.

The following is a set of questions, concerning the place of transhumanism in the Western philosophical tradition, that Robert Frodeman’s Philosophy 5250 class at the University of North Texas posed to Steve Fuller, who met with the class via Skype on 11 April 2017.

Shortlink: http://wp.me/p1Bfg0-3yl

Image credit: Joan Sorolla, via flickr

1. First, a point of clarification: should we understand you not as a health-span increaser, but rather as interested in infinity, or in some sense in man becoming a god? That is, H+ is a theological rather than a practical question for you?

Yes, that’s right. I differ from most transhumanists in stressing that short-term sacrifice—namely, in the form of risky experimentation and self-experimentation—is a price that will probably need to be paid if the long-term aims of transhumanism are to be realized. Moreover, once we finally make the breakthrough to extend human life indefinitely, there may be a moral obligation to make room for future generations, which may take the form of sending the old into space or simply encouraging suicide.

2. How do you understand the relationship between AI and transhumanism?

When Julian Huxley coined ‘transhumanism’ in the 1950s, it was mainly about eugenics, the sort of thing that his brother Aldous satirized in Brave New World. The idea was that the transhuman would be a ‘new and improved’ human, not so different from a new model car. (Recall that Henry Ford is the founding figure of Brave New World.) However, with the advent of cybernetics, also happening around the same time, the idea that distinctly ‘human’ traits might be instantiated in both carbon and silicon began to be taken seriously, with AI being the major long-term beneficiary of this line of thought. Some transhumanists, notably Ray Kurzweil, find the AI version especially attractive, perhaps because it caters to their ‘gnostic’ impulse to have the human escape all material constraints. In the transhumanist jargon, this is called ‘morphological freedom’, a sort of secular equivalent of pure spirituality. However, this is to take AI in a somewhat different direction from its founders in the era of cybernetics, which was about creating intelligent machines from silicon, not about transferring carbon-based intelligence into silicon form.

3. How seriously do you take talk (by Bill Gates and others) that AI is an existential risk?

Not very seriously—at least on its own terms. By the time some superintelligent machine might pose a genuine threat to what we now regard as the human condition, the difference between human and non-human will have been blurred, mainly via cyborg identities of the sort for which Stephen Hawking might end up being seen as a trailblazer. Whatever political questions would arise concerning AI at that point would likely divide humanity itself profoundly and not be a simple ‘them versus us’ scenario. It would be closer to the Cold War choice of Communism vs Capitalism. But honestly, I think all this ‘existential risk’ stuff gets its legs from genuine concerns about cyberwarfare. Yet taken on its face, cyberwarfare is nothing more than human-on-human warfare conducted by high tech means. The problem is still mainly with the people fighting the war rather than the algorithms that they program to create these latest weapons of mass destruction. I wonder sometimes whether this fixation on superintelligent machines is simply an indirect way to get humans to become responsible for their own actions—the sort of thing that psychoanalysts used to call ‘displacement behavior’ but the rest of us call ‘ventriloquism’.

4. If, as Socrates claims, to philosophize is to learn how to die, does H+ represent the end of philosophy?

Of course not! The question of death is just posed differently, because even from a transhumanist standpoint it may be in the best interest of humanity as a whole for individuals to choose death, so as to give future generations a chance to make their mark. Alternatively, and especially if transhumanists are correct that our extended longevity will be accompanied by rude health, then the older and wiser among us—and there is no denying that ‘wisdom’ is an age-related virtue—might spend their later years taking greater risks, precisely because they would be better able to handle the various contingencies. I am thinking that such healthy elderly folk might be best suited to interstellar exploration because of the ultra-high risks involved. Indeed, I could see a future social justice agenda that would require people to demonstrate their entitlement to longevity by documenting the increasing amount of risk that they are willing to absorb.

5. What of Heidegger’s claim that to be an authentic human being we must project our lives onto the horizon of our death?

I couldn’t agree more! Transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection. I think Heidegger and other philosophers have invested death with such great import simply because of its apparent irreversibility. However, if you want to recreate Heidegger’s sense of ‘ultimate concern’ in a post-death world, all you would need to do is to find some irreversible processes and unrecoverable opportunities that even transhumanists acknowledge. A hint is that when transhumanism was itself resurrected in its current form, it was known as ‘extropianism’, suggesting an active resistance to entropy. For transhumanists—very much in the spirit of the original cybernetician, Norbert Wiener—entropy is the ultimate irreversible process and hence the ultimate challenge for the movement to overcome.

6. What is your response to Heidegger’s claim that it is in the confrontation with nothingness, in the uncanny, that we are brought back to ourselves?

Well, that certainly explains the phenomenon that roboticists call the ‘uncanny valley’, whereby people are happy to deal with androids until they resemble humans ‘a bit too much’, at which point people are put off. There are two sides to this response—not only that the machines seem too human but also that they are still recognized as machines. So the machines haven’t quite yet fooled us into thinking that they’re one of us. One hypothesis to explain the revulsion is that such androids appear to be like artificially animated dead humans, a bit like Frankenstein. Heideggerians can of course use all this to their advantage to demonstrate that death is the ultimate ‘Other’ to the human condition.

7. Generally, who do you think are the most important thinkers within the philosophic tradition for thinking about the implications of transhumanism?

Most generally, I would say the Platonic tradition, which has been most profound in considering how the same form might be communicated through different media. So when we take seriously the prospect that the ‘human’ may exist in carbon and/or silicon and yet remain human, we are following in Plato’s footsteps. Christianity holds a special place in this line of thought because of the person of Jesus Christ, who is somehow at once human and divine in equal and all respects. The branch of theology called ‘Christology’ is actually dedicated to puzzling over these matters, various solutions to which have become the stuff of science fiction characters and plots. St Augustine originally made the problem of Christ’s identity a problem for all of humanity when he leveraged the Genesis claim that we are created ‘in the image and likeness of God’ to invent the concept of ‘will’ to name the faculty of free choice that is common to God and humans. We just exercise our wills much worse than God exercises his, as demonstrated by Adam’s misjudgment, which started Original Sin (an Augustinian coinage). When subsequent Christian thinkers have said that ‘the flesh is weak’, they are talking about how humanity’s default biological condition holds us back from fully realizing our divine potential. Kant acknowledged as much in secular terms when he explicitly defined the autonomy necessary for truly moral action in terms of resisting the various paths of least resistance put before us. These are what Christians originally called ‘temptations’, Kant himself called ‘heteronomy’ and Herbert Marcuse, in a truly secular vein, would later call ‘desublimation’.

8. One worry that arises from the transhumanist project (especially about gene editing, growing human organs in animals, etc.) regards the treatment of human enhancements as “commercial products”. In other words, the worry concerns the (further) commodification of life. Does this concern you? More generally, doesn’t H+ imply a perverse instrumentalization of our being?

My worries about commodification are less to do with the process itself than with the fairness of the exchange relations in which the commodities are traded. Influenced by Locke and Nozick, I would draw a strong distinction between alienation and exploitation, which tends to be blurred in the Marxist literature. Transhumanism arguably calls for an alienation of the body from human identity, in the sense that your biological body might be something that you trade for a silicon upgrade, yet your humanity remains intact on both sides of the transaction, at least in terms of formal legal recognition. Historic liberal objections to slavery rested on a perceived inability to do this coherently. Marxism upped the ante by arguing that the same objections applied to wage labor under the sort of capitalism promoted by the classical political economists of Marx’s day, who saw themselves as scientific underwriters of the new liberal order emerging in post-feudal Europe. However, the force of the Marxist objections rests on alienation being linked to exploitation. In other words, not only am I free to sell my body or labor, but you are also free to offer whatever price serves to close the sale. However, the sorts of power imbalances which lay behind exploitation can be—and have been—addressed in various ways. Admittedly more work needs to be done, but a time will come when alienation is simply regarded as a radical exercise of freedom—specifically, the freedom to, say, project myself as an avatar in cyberspace or, conversely, convert part of my being to property that can be traded for something that may in turn enhance my being.

9. Robert Nozick paints a possible scenario in Anarchy, State, and Utopia where he describes a “genetic supermarket” where we can choose our genes just as one selects a frozen pizza. Nozick’s scenario implies a world where human characteristics are treated in the way we treat other commercial products. In the Transhuman worldview, is the principle or ultimate value of life commercial?

There is something to that, in the sense that anything that permits discretionary choice will lend itself to commercialization unless the state intervenes—but I believe that the state should intervene and regulate the process. Unfortunately, from a PR standpoint, a hundred years ago that was called ‘eugenics’. Nevertheless, people in the future may need to acquire a license to procreate, constraints may even be put on the sorts of offspring that are and are not permissible, and people may even be legally required to undergo periodic forms of medical surveillance—at least as a condition of employment or welfare benefits. (Think of Gattaca as a first pass at this world.) It is difficult to see how an advanced democracy that acknowledges already existing persistent inequalities in life-chances could agree to ‘designer babies’ without also imposing the sort of regime that I am suggesting. Would this unduly restrict people’s liberty? Perhaps not, if people will have acquired the more relaxed attitude to alienation, as per my answer to the previous question. However, the elephant in the room—and which I argued in The Proactionary Imperative is more important—is liability. In other words, who is responsible when things go wrong in a regime which encourages people to experiment with risky treatments? This is something that should focus the minds of lawyers and insurers, especially in a world where people are presumed to be freer per se because they have freer access to information.

10. Is human enhancement consistent with other ways in which people modify their lifestyles—that is, is it analogous in principle to buying a new cell phone, learning a language or working out? Is it a process of acquiring ideas, goods, assets, and experiences that distinguish one person from another, either as an individual or as a member of a community? If not, how is human enhancement different?

‘Human enhancement’, at least as transhumanists understand the phrase, is about ‘morphological freedom’, which I interpret as a form of ultra-alienation. In other words, it’s not simply about people acquiring things, including prosthetic extensions, but also converting themselves to a different form, say, by uploading the contents of one’s brain into a computer. You might say that transhumanism’s sense of ‘human enhancement’ raises the question of whether one can be at once trader and traded in a way that enables the two roles to be maintained indefinitely. Classical political economy seemed to imply this, but Marx denied its ontological possibility.

11. The thrust of 20th-century Western philosophy could be articulated in terms of the striving for possible futures, whether Marxist, Fascist, or otherwise ideologically utopian, and the philosophical fallout of coming to terms with their successes and failures. In our contemporary moment, it appears as if widespread enthusiasm for such futures has disappeared, as the future itself seems as fragmented as our society. H+ is a new, similar effort; but it seems to be a specific evolution of this futurism, focused not on a society but on the human person (even on specific human persons). Comments?

In terms of how you’ve phrased your question, transhumanism is a recognizably utopian scheme in nearly all respects—including the assumption that everyone would find its proposed future intrinsically attractive, even if people disagree on how or whether it might be achieved. I don’t see transhumanism as so different from capitalism or socialism as pure ideologies in this sense. They all presume their own desirability. This helps to explain why people who don’t agree with the ideology are quickly diagnosed as somehow mentally or morally deficient.

12. A common critique of Heidegger’s thought comes from an ethical turn in Continental philosophy. While Heidegger understands death to be the harbinger of meaning, he means specifically and explicitly one’s own death. Levinas, however, maintains that the primary experience of death that does this work is the death of the Other. One’s experience with death comes to one through the death of a loved one, a friend, a known person, or even through the distant reality of a war or famine across the world. In terms of this critique, the question of transhumanism then leads to a socio-ethical concern: if one, using H+ methods, technologies, and enhancements, can significantly inoculate oneself against the threat of death, how ethically (in the Levinasian sense) can one then legitimately live in relation to others in a society, if the threat of the death of the Other no longer provides one the primal experience of the threat of death?

Here I’m closer to Heidegger than Levinas in terms of grounding intuition, but my basic point would be that an understanding of the existence and significance of death is something that can be acquired without undergoing a special sort of experience. Phenomenologically inclined philosophers sometimes seem to assume that a significant experience must happen significantly. But this is not true at all. My main understanding of death as a child came not from people I know dying, but simply from watching the morning news on television and learning about the daily body count from the Vietnam War. That was enough for me to appreciate the gravity of death—even before I started reading the Existentialists.

Editor’s Note:

    The following are elements of syllabi for a graduate, and an undergraduate, course taught by Robert Frodeman in spring 2017 at the University of North Texas. These courses offer an interesting juxtaposition of texts aimed at reimagining how to perform academic philosophy as “field philosophy”. Field philosophy seeks to address meaningfully, and demonstrably, contemporary public debates, regarding transhumanism for example, while attending to the shifting ideas and frameworks of both the Humboldtian university and the “new American” university.

Shortlink: http://wp.me/p1Bfg0-3xB

Philosophy 5250: Topics in Philosophy

Overall Theme

This course continues my project of reframing academic philosophy within the approach and problematics of field philosophy.

In terms of philosophic categories, we will be reading classics in 19th and 20th century continental philosophy: Hegel, Nietzsche, and Heidegger. But we will be approaching these texts with an agenda: to look for insights into a contemporary philosophical controversy, the transhumanist debate. This gives us two sets of readings – our three authors, and material from the contemporary debate surrounding transhumanism.

Now, this does not mean that we will restrict our interest in our three authors to what is applicable to the transhumanist debate; our thinking will go wherever our interests take us. But the topic of transhumanism will be primus inter pares.

Readings

  • Hegel, Phenomenology of Spirit, Preface
  • Hegel, The Science of Logic, selections
  • Heidegger, Being and Time, Division 1, Macquarrie translation
  • Heidegger, ‘The Question Concerning Technology’
  • Nietzsche, selections from Thus Spoke Zarathustra and Beyond Good and Evil

Related Readings

Grading

You will have two assignments, both due at the end of the semester. I strongly encourage you to turn in drafts of your papers.

  • A 2500 word paper on a major theme from one of our three authors.
  • A 2500 word paper using our three authors to illuminate your view of the transhumanist challenge.

Philosophy 4750: Philosophy and Public Policy

Overview

This is a course in meta-philosophy. It seeks to develop a philosophy adequate for the 21st century.

Academic philosophy has been captured by a set of categories (ancient, modern, contemporary; ethics, logic, metaphysics, epistemology) that are increasingly dysfunctional for contemporary life. Therefore, this is not merely a course on a specific subject matter (i.e., ‘public policy’) to be added to the rest. Rather, it seeks to question, and philosophize about, the entire knowledge enterprise as it exists today – and to philosophize about the role of philosophy in understanding and perhaps (re)directing the knowledge enterprise.

The course will cover the following themes:

  • The past, present, and future of the university in the Age of Google
  • The end of disciplinarity and the rise of accountability culture
  • The New Republic of Letters and the role of the humanist today
  • The failure of applied philosophy and the development of alternative models

Course Structure

This course is ‘live’: it reflects 20 years of my research on the place of philosophy in contemporary society. As such, the course embodies a Humboldtian connection between teaching and research: I am not simply a teacher and a researcher; I’m a teacher-researcher who shares the insights I’m developing with students, testing my thinking in the classroom, and sharing my freshest thoughts. This breaks with the corporate model of education, where the professor is an interchangeable cog teaching the same materials that could be gotten at any university worldwide – while also opening me up to charges of self-indulgence.

Readings

  • Michael M. Crow and William B. Dabars, Designing the New American University
  • Crow chapter in HOI
  • Clark, Academic Charisma
  • Fuller, The Academic Caesar
  • Rudy, The Universities of Europe, 1100-1914
  • Fuller, Sociology of Intellectual Life
  • Smith, Philosophers 6 Types
  • Socrates Tenured: The Institutions of 21st Century Philosophy
  • Plato, The Republic, Book 1

Author Information: Lyudmila A. Markova, Russian Academy of Science, markova.lyudmila2013@yandex.ru

Shortlink: http://wp.me/p1Bfg0-3vE

It is difficult to find a place for the concept of truth in social epistemology. Current philosophers disagree on the status of “truth” and “objectivity” as the basis of thinking about science. Meanwhile, the very name ‘social epistemology’ speaks to a serious and inevitable turn in our attitude toward scientific knowledge. Once epistemology becomes social, scientific knowledge is oriented not to nature, but to human beings. Epistemology, then, addresses not the laws of nature, but the process of their production by a scientist. In classical epistemology we have, as a result of scientific research, laws of nature on whose basis we create an artificial material world. Experimental results, obtained in classical science, must be objective and true, or they become useless.

In social epistemology, scientific results represent social communication among scientists (and not just among scientists), their ability to produce new knowledge, and their professionalism. In this case, knowledge helps us to create not a material artificial world, but a virtual world which is able to think. For such knowledge, notions like “truth” and “objectivity” do not play a serious role. Other concepts such as “dialog”, “communication”, “interaction”, “difference” and “diversity” come to the fore. In these concepts, we can see a turn in the development of epistemological thinking.

However, social epistemology does not destroy its predecessor. Let us recall the definition of social epistemology that Steve Fuller gave in 1988:

How should the pursuit of knowledge be organized, given that under normal circumstances knowledge is pursued by many human beings, each working on a more or less well-defined body of knowledge and each equipped with roughly the same imperfect cognitive capacities, albeit with varying degree of access to one another’s activities?

It is not difficult to see that Fuller does not consider the aim of social epistemology to be obtaining objective knowledge about the external world. He remains concerned about the diversity of social conditions in which scientists work. Changes in these conditions, and features of an individual scientist such as professional competence, among others, should be taken into consideration. It is exactly these characteristics of thinking, now coming to the fore, that allow us to speak of a turn in the development of thinking. Now, the problems that exist in science and society require, for their solution, a new type of thinking. Still, we can find in empirical reality the foundation for both classical (modern) logic and non-classical logic (based on social epistemology).

Let us take an example. You bathe every day in the river Volga. You bathe today, and you come to bathe tomorrow in the same river Volga. You cannot deny that the river is still the Volga. Yet, at the same time, you see numerous changes from one day to the next—ripples appearing in, and new leaves appearing on, the water’s surface; the water temperature turning slightly colder; and so on. It is possible to conclude that the river, after all, is not as it was yesterday. As Heraclitus famously observed: “You cannot step into the same river twice.”

Both conclusions are right. However, notions such as truth and objectivity have not lost their logical and historical significance; rather, they have become marginal. Proponents of social epistemology should establish communication with classical logic, not try to destroy it.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Shortlink: http://wp.me/p1Bfg0-3uu

Editor’s Note: Steve Fuller’s “A Man for All Seasons, Including Ours: Thomas More as the Patron Saint of Social Media” originally appeared in ABC Religion and Ethics on 23 February 2017.

Please refer to:

Image credit: Carolien Coenen, via flickr

November 2016 marked the five hundredth anniversary of the publication of Utopia by Thomas More in Leuven through the efforts of his friend and fellow Humanist, Desiderius Erasmus.

More is primarily remembered today for this work, which sought to show how a better society might be built by learning from the experience of other societies.

It was published shortly before More entered the service of King Henry VIII, who liked Utopia. And as the monarch notoriously struggled to assert England’s sovereignty over the Pope, More proved to be a critical supporter, eventually rising to the rank of “Lord Chancellor,” the King’s chief legal advisor.

Nevertheless, within a few years More was condemned to death for refusing to acknowledge the King’s absolute authority over the Pope. According to the Oxford English Dictionary, More introduced “integrity”—in the sense of “moral integrity” or “personal integrity”—into English while awaiting execution. Specifically, he explained his refusal to sign the “Oath of Supremacy” of the King over the Pope by his desire to preserve the integrity of his reputation.

To today’s ears this justification sounds somewhat self-serving, as if More were mainly concerned with what others would think of him. However, More lived at least two centuries before the strong modern distinction between the public and the private person was in general use.

He was getting at something else, which is likely to be of increasing relevance in our “postmodern” world, one that has thrown into doubt the very idea that we should think of personal identity as a matter of self-possession in the exclusionary sense which has animated the private-public distinction. It turns out that the pre-modern More is on the side of the postmodernists.

We tend to think of “modernization” as an irreversible process, and in some important respects it seems to be. Certainly our lives have come to be organized around technology and its attendant virtues: power, efficiency, speed. However, some features of modernity—partly as an unintended consequence of its technological trajectory—appear to be reversible. One such feature is any strong sense of what is private and public—something to which any avid user of social media can intuitively testify.

More proves to be an interesting witness here because while he had much to say about conscience, he did not presume the privacy of conscience. On the contrary, he judged someone to be a person of “good conscience” if he or she listened to the advice of trusted friends, as he had taken Henry VIII to have been prior to his issuing the Oath of Supremacy. This is quite different from the existentially isolated conception of conscience that comes into play during the Protestant Reformation, on which subsequent secular appeals to conscience in the modern era have been based.

For More, conscience is a publicly accessible decision-making site, the goodness of which is to be judged in terms of whether the right principles have been applied in the right way in a particular case. The platform for this activity is an individual human being who—perhaps by dint of fate—happens to be hosting the decision. However, it is presumed that the same decision would have been reached, regardless of the hosting individual. Thus, it makes sense for the host to consult trusted friends, who could easily imagine themselves as the host.

What is lacking from More’s analysis of conscience is a sense of its creative and self-authorizing character, a vulgarized version of which features in the old Frank Sinatra standard, “My Way.” This is the sense of self-legislation which Kant defined as central to the autonomous person in the modern era. It is a legacy of Protestantism, which took much more seriously than Catholicism the idea that humans are created “in the image and likeness of God.” In effect, we are created to be creators, which is just another way of saying that we are unique among the creatures in possessing “free will.”

To be sure, whether our deeds make us worthy of this freedom is for God alone to decide. Our fellows may well approve of our actions but we—and they—may be judged otherwise in light of God’s moral bookkeeping. The modern secular mind has inherited from this Protestant sensibility an anxiety—a “fear and trembling,” to recall Kierkegaard’s echo of St. Paul—about our fate once we are dead. This sense of anxiety is entirely lacking in More, who accepts his death serenely even though he has no greater insight into what lies in store for him than the Protestant Reformers or secular moderns.

Understanding the nature of More’s serenity provides a guide for coming to terms with the emerging postmodern sense of integrity in our data-intensive, computer-mediated world. More’s personal identity was strongly if not exclusively tied to his public persona—the totality of decisions and actions that he took in the presence of others, often in consultation with them. In effect, he engaged throughout his life in what we might call a “critical crowdsourcing” of his identity. The track record of this activity amounts to his reputation, which remains in open view even after his death.

The ancient Greeks and Romans would have grasped part of More’s modus operandi, which they would understand in terms of “fame” and “honour.” However, the ancients were concerned with how others would speak about them in the future, ideally to magnify their fame and honour to mythic proportions. They were not scrupulous about documenting their acts in the sense that More and we are. On the contrary, the ancients hoped that a sufficient number of word-of-mouth iterations over time might serve to launder their acts of whatever unsavoury character they may originally have had.

In contrast, More was interested in people knowing exactly what he decided on various occasions. On that basis they could pass judgement on his life, thereby—so he believed—vindicating his reputation. His “integrity” thus lay in his life being an open book that could be read by anyone as displaying some common narrative threads that add up to a conscientious person. This orientation accounts for the frequency with which More and his friends, especially Erasmus, testified to More’s standing as a man of good conscience in whatever he happened to say or do. They contributed to his desire to live “on the record.”

More’s sense of integrity survives on Facebook pages or Twitter feeds, whenever the account holders are sufficiently dedicated to constructing a coherent image of themselves, notwithstanding the intensity of their interaction with others. In this context, “privacy” is something quite different from how it has been understood in modernity. Moderns cherish privacy as an absolute right to refrain from declaration in order to protect their sphere of personal freedom, access to which no one—other than God, should he exist—is entitled. For their part, postmoderns interpret privacy more modestly as friendly counsel aimed at discouraging potentially self-harming declarations. This was also More’s world.

More believed that however God settled his fate, it would be based on his public track record. Unlike the Protestant Reformers, he also believed that this track record could be judged equally by humans and by God. Indeed, this is what made More a Humanist, notwithstanding his loyalty to the Pope unto death.

Yet More’s stance proved to be theologically controversial for four centuries, until the Catholic Church finally canonized him in 1935; in 2000 he was declared the patron saint of politicians. Perhaps More’s spiritual patronage should be extended to cover social media users.

Justin Cruickshank at the University of Birmingham was kind enough to alert me to Steve Fuller’s talk “Transhumanism and the Future of Capitalism”—held by The Philosophy of Technology Research Group—on 11 January 2017.

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Shortlink: http://wp.me/p1Bfg0-3nx

Editor’s Note: As we near the end of an eventful 2016, the SERRC will publish reflections considering broadly the immediate future of social epistemology as an intellectual and political endeavor.

Please refer to:

Image credit: Der Robert, via flickr

Oxford Dictionaries made ‘post-truth’ its word of the year for 2016. Here is the definition, including examples of usage:

Relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief:

‘in this era of post-truth politics, it’s easy to cherry-pick data and come to whatever conclusion you desire’

‘some commentators have observed that we are living in a post-truth age’

In STS terms, this definition is clearly ‘asymmetrical’ because it is pejorative, not neutral. It is a post-truth definition of ‘post-truth’. It is how those dominant in the epistemic power game want their opponents to be seen. In my recent symmetrical exposition of ‘post-truth’ for the Guardian, I suggested that the Oxford Dictionaries’ definition speaks the lion’s truth, which tries to create as much moral and epistemic distance as possible from whatever facsimile of the truth the fox might be peddling. Thus, the fox—but not the lion—is portrayed as distorting the facts and appealing to emotion. Yet, the lion’s truth appears to the fox as simplistically straightforward and heavy-handed, often delivered in a fit of righteous indignation. Indeed, this classic portrayal of the lion/fox divide may better apply to the history of science than the history of politics.

For better or worse, STS recoiled from the post-truth worldview in 2004, when Bruno Latour famously waved the white flag in the Science Wars, which had been raging for nearly fifteen years—starting with the post-Cold War reassessment of public funding for science. Latour’s terms of surrender were telling. After all, he was the one who extended the symmetry principle from the Edinburgh School’s treatment of all human factors—regardless of whether we now deem them to have been ‘good’ or ‘bad’—to include all non-human factors as well. However, Latour hadn’t anticipated that symmetry applied not only to the range of objects studied but also to the range of agents studying them.

Somewhat naively, Latour seemed to think that a universalization of the symmetry principle would make STS the central node in a universal network of those studying ‘technoscience’. Instead, everyone started to apply the symmetry principle for themselves, which led to rather cross-cutting networks and unexpected effects, especially once the principle started to be wielded by creationists, climate sceptics and other candidates for an epistemic ‘basket of deplorables’. And by turning symmetry to their advantage, the deplorables got results, at least insofar as the balance of power has gradually tilted more in their favour—again, for better or worse.

My own view has always been that a post-truth world is the inevitable outcome of greater epistemic democracy. In other words, once the instruments of knowledge production are made generally available—and they have been shown to work—they will end up working for anyone with access to them. This in turn will remove the relatively esoteric and hierarchical basis on which knowledge has traditionally acted as a force for stability and often domination. The locus classicus is the Republic, in which Plato promotes what in the Middle Ages was called a ‘double truth’ doctrine – one for the elites (which allows them to rule) and one for the masses (which allows them to be ruled).

Of course, the cost of making the post-truth character of knowledge so visible is that it also exposes power dynamics that may become more intense and ultimately destructive of the social order. This was certainly Plato’s take on democracy’s endgame. In the early modern period, this first became apparent with the Wars of Religion that broke out in Europe almost immediately once the Bible was made readily available. (Francis Bacon and others saw in the scientific method a means to contain any such future conflict by establishing a new epistemic mode of domination.) While it is possible to defer democracy by trying to deflect attention from the naked power dynamics, as Latour does, with fancy metaphysical diversions and occasional outbursts in high dudgeon, those are leonine tactics that only serve to repress STS’s foxy roots. In 2017, we should finally embrace our responsibility for the post-truth world and call forth our vulpine spirit to do something unexpectedly creative with it.

The hidden truth of Aude sapere (Kant’s ‘Dare to know’) is Audet adipiscitur (Thucydides’ ‘Whoever dares, wins’).