
Author Information: Robert Frodeman, University of North Texas.

Frodeman, Robert. “Socratics and Anti-Socratics: The Status of Expertise.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 42-44.

The PDF of the article gives specific page numbers. Shortlink:


Image credit: J.D. Falk, via flickr

Do we, academically trained and credentialed philosophers, understand what philosophy is? It’s a disquieting question, or would be, if it could be taken seriously. But who can take it seriously? Academic philosophers are the inheritors of more than 100 years of painstaking, peer-reviewed work—to say nothing of centuries of thinking before that. Through these efforts, philosophy has become an area of expertise on a par with other disciplines. The question, then, is silly—or insulting: of course philosophers know their stuff!

But shouldn’t we feel a bit uneasy at this juxtaposition of ‘philosophers’ and ‘know’? We tell our introductory classes that ‘philosopher’ literally means to be a friend or lover of wisdom, rather than to be the actual possessor of it. And that Socrates, the patron saint of philosophy, only claimed to possess ‘Socratic wisdom’—he only knew that he knew nothing. Have we then abandoned our allegiance to Socrates? Or did we never take him seriously? Would philosophers be more candid quoting Hegel, when he noted in the Preface to the Phenomenology of Spirit that his goal was to “lay aside the title ‘love of knowing’ and be actual knowing”? But wouldn’t that mean that philosophers were not really philosophers, but rather sophists?

Two Types of Sophists

The Greeks knew two types of sophists. There were the philosophical sophists, who had skeptical beliefs about the possibilities of knowledge. Protagoras, the most famous of these, claimed that experience is inescapably subjective: the same wind blows both hot and cold depending on the person’s experience. But also, and more simply, sophists were people in the know, or as we say today, experts: people able to instruct young men in skills such as horsemanship, warfare, or public speaking. There are some philosophers today who would place themselves into the first category—for instance, standpoint epistemologists, who sometimes make similar claims in terms of race, class, and gender—but it seems that nearly all philosophers place themselves in the latter category. Philosophers today are experts. Not in philosophy overall, of course, that’s too large a domain; but in one or another subfield, ethics or logic or the philosophy of language.

It is the subdividing of philosophy that allows philosophers to make claims of expertise. This point was brought home recently in the dustup surrounding Rebecca Tuvel’s Hypatia article “In Defense of Transracialism.” Tuvel’s piece prompted the creation of an Open Letter, which collected more than 800 signatories by the time it was closed. The Letter called on Hypatia to retract publication of her essay. These critics did not merely disagree with her argument; they denied her right to speak on the topic at all. The Letter notes that Tuvel “fails to seek out and sufficiently engage with scholarly work by those who are most vulnerable to the intersection of racial and gender oppressions….”

Tuvel’s article and the subsequent publishing of the Open Letter have elicited an extended series of commentaries (including no fewer than two op-eds in the New York Times). The exact criteria invoked by those who wished to censure Tuvel have varied. Some thought her transgression consisted in the insufficient citing of the literature in the field, while others claimed that her identity was not sufficiently grounded in personal experience of racial and/or gender oppression. In both cases, however, criticism turned on assumptions of expertise. Notably, Tuvel also makes claims of expertise, on her departmental website, as being a specialist in both feminism and the philosophy of race, although she has mostly stayed out of the subsequent back and forth.

My concern, then, is not with the pros and cons of Tuvel’s essay. It is rather with the background assumption of expertise that all parties seem to share. I admit that I am not an expert in these areas; but my claim is more fundamental than that. I do not view myself as an expert in any area of philosophy, at least as the term is now used. I have been introduced on occasion as an expert in the philosophy of interdisciplinarity, but this usually prompts me to note that I am only an expert in the impossibility of expertise. Widespread claims to the contrary, interdisciplinarity is perhaps the last thing that someone can be an expert in. At least, the claim cannot be that someone knows the literature of the subject, since the point of interdisciplinarity, if it is something more than another route to academic success, is more political than epistemic in nature.

A Change in Philosophy?

The attitudes revealed by L’Affaire Tuvel (and examples can be multiplied at will[1]) suggest that we are looking at something more than simply another shift in the philosophical tides. There has always been a Hegelian or Cartesian element within philosophy, where philosophers have made claims of possessing apodictic knowledge. There has also always been a Socratic (or to pick a more recent example, Heideggerian) cohort who have emphasized the interrogative nature of philosophy. Heidegger constantly stresses the need to live within the question, whether the question concerns being or technology. He notes as well that his answers, such as they are, are true only approximately and for the most part—zunächst und zumeist. In this he follows Aristotle, who in the Ethics 1.3 pointed out that some areas of inquiry are simply not susceptible to either precision or certainty of knowledge. To my mind, this is the condition of philosophy.

Grant, then, that there have always been two camps on the possibility of expertise in philosophy. But I suggest that the balance between these two positions has shifted, as philosophy has become a creature of the university. The modern research university has its own institutional philosophy: it treats all knowledge democratically, as consisting of regional domains on a common plane. There is no hierarchy of the disciplines, no higher or lower knowledge, no more general or specific knowledge. Researchers in philosophy and the humanities see themselves as fellow specialists, rather than as intellectuals of a markedly different type than those in the natural and social sciences.

Today these assumptions are so deeply embedded that no one bothers to note them at all. Few seriously propose that philosophers might have a role to play other than being an expert, or that our job might be to provoke rather than to answer. I, however, want to raise that very possibility. And operating under the assumption that naming the two positions might help rally troops to their respective standards, let the two camps be designated as the Socratics and the Anti-Socratics.

Part of the attraction that Science and Technology Studies (STS) has held for me has been its undisciplined nature, and the faint hope that it could take over the Socratic role that philosophy has largely abandoned. Of course, the debate between the Socratics and Anti-Socratics rages in STS as well, framed in terms of Low and High Church STS, those who resist STS becoming a discipline and those who see it as part of the necessary maturation of the field. I admit to feeling the attractions of High Church STS, and philosophy: expertise has its prerogatives, chief among them the security of speaking to other ‘experts’ rather than taking on the dangerous task of working in the wider world. But I think I will throw my lot in with the Socratics for a while longer.


Aristotle. The Nicomachean Ethics. Oxford University Press, 2009.

Brubaker, Rogers. “The Uproar Over ‘Transracialism’.” New York Times. May 18, 2017.

Frodeman, Robert and Adam Briggle. Socrates Tenured: The Institutions of 21st-Century Philosophy. London: Rowman & Littlefield, 2016.

Fuller, Steve and James H. Collier. Philosophy, Rhetoric, and the End of Knowledge: A New Beginning for Science and Technology Studies. Mahwah, NJ: Lawrence Erlbaum, 2004.

Hegel, Georg Wilhelm Friedrich. Hegel’s Preface to the “Phenomenology of Spirit”. Translated by Yirmiyahu Yovel. Princeton University Press, 2005.

Schuessler, Jennifer. “A Defense of ‘Transracial’ Identity Roils Philosophy World.” New York Times. May 19, 2017.

Tuvel, Rebecca. “In Defense of Transracialism.” Hypatia, 29 March 2017. doi: 10.1111/hypa.12327.

[1] See, for instance,

Author Information: Gregory Sandstrom, Independent Researcher, Ottawa Blockchain Group.

Sandstrom, Gregory. “Who Would Live in a Blockchain Society? The Rise of Cryptographically-Enabled Ledger Communities.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 27-41.

The PDF of the article gives specific page numbers. Shortlink:

Image credit: Shutterstock

It has math. It has its computer science. It has its cryptography. It has its economics. It has its political and social philosophy. It was this community that I was immediately drawn into.—Vitalik Buterin, Founder of Ethereum (Zug, Toronto, Moscow)

That was a memorable day for me, for it made great changes in me. But it is the same with any life. Imagine one selected day struck out of it, and think how different its course would have been. Pause you who read this, and think for a moment of the long chain of iron or gold, of thorns or flowers, that would never have bound you, but for the formation of the first link on one memorable day. —Charles Dickens, Great Expectations, 1861

Section 1: Introduction to Blockchain

In Blockchain Sociology, you are the sociologist. Even if you never took a ‘degree’ in sociology, or perhaps even a course in it, you are a sociologist at least in so far as you are part of society, and indeed, of several or many societies. Being part of different societies, you observe them, hold views and opinions about them, collect data about them, shed your own information and actions (recorded by empirical data-collection devices) about them and to them, and generally participate in them. In so far as you are that kind of sociologist, you are qualified to be reading this paper, which focuses pop-sociologically on blockchain technology[1] (BC tech) and its potentially major coming impact on societies near and around you, societies in which you are already a member without needing any sociologist to validate that fact.

This is an exploratory paper on BC that will need to be updated as BC tech develops. BC is still an unknown phenomenon for many people; partly fantasy and partly a soon-to-be new reality. In short, a ‘blockchain’ is simply a chronologically arranged online (internet-based) digital chain of ‘blocks’.[2] A ‘blockchain society’ is thus a kind of semi-fictional representation that looks potentially fewer than 15 years into the future, when BC tech will be widely integrated, globally and locally, into human-social life. In this near future, I imagine a scenario where ‘machines’ or cybernetic organisms (read: killer robots) haven’t taken over people or led to ‘post-humanity’ (Terminator, Bostrom, Kurzweil, et al.) or ‘trans-humanity’ (Fuller). Rather, ‘social machines’ (e.g. BC tech) are used to aid specifically human (Homo sapiens sapiens) development.
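To make the core idea concrete, the ‘chronologically arranged chain of blocks’ can be sketched in a few lines of Python. This is a minimal illustration under my own naming (`make_block` and its fields are not drawn from any real BC implementation), showing only how each block is time-stamped and linked to its predecessor by a cryptographic hash:

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Assemble a block: a timestamped bundle of transactions linked
    to its predecessor by that predecessor's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    # The block's own hash covers everything above, including prev_hash,
    # which is what chains the blocks together chronologically.
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A toy two-block chain: a 'genesis' block followed by one more block.
genesis = make_block(["genesis"], prev_hash="0" * 64)
second = make_block(["alice pays bob 5"], prev_hash=genesis["hash"])
```

The point of the sketch is the linking alone: because each block’s hash covers the previous block’s hash, the blocks form a single tamper-evident chronological chain.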

It is already expected that people will make more and more frequent use of ‘social machines’ (Berners-Lee 1999) where the administration (automation and calculation, etc.) is done by computers, while the creative work and relationships are driven by (still-human) people.[3] This was the vision of one of the main persons who ‘invented’ the internet (protocols) and helped organise and encourage people globally to build it and use it. We thus look in this paper to the impact that this new BC tech can have on societies and their economies through development based on the rise of social machines. Part I focuses on BC basics, as a general introduction to the topic. Part II will focus more on the implications of living in BC societies and how our economies can prepare for the massive restructuring that the technology promises to deliver across a range of education, business, media, governmental and non-governmental sectors.

Figure 1. Humanising the Digital (source unknown)

Section 2: Blockchain Basics

Here we take an early look at what BC societies will become and focus in particular on ‘ledger communities’ (LCs). These LCs establish the basis on which transaction infrastructures may be built using algorithm-guided transaction protocols. Technical details are left out of this paper. It is speculative and probing in character; a fertile ground for ‘social epistemologists’ to explore beyond mere academic philosophy.

The paper underemphasises the cryptography involved, does not even touch on the important asymmetric cryptographic features of BC, and simply assumes[4] for the time being that the cryptography enables various levels of anonymity that will allow new LCs to form with at least a basic level of identity safety. Between the lines I hint at a sociologist’s concerns about what is almost inevitably going to happen to societies due to BC tech, while reflecting on current global power structures, repeated failed social experiments of the past, and the complex human space-time scales that may eventually be involved in developing BC tech. So this is a half-academic, half-public-understanding approach to BC, prompted simply by fascination with BC tech and what it is going to do to the human world as we now know it.

BC tech brings with it an entirely new set of preconditions and possibilities for participation and membership in societies and communities. The technology has immediate social impact when it is implemented because it is pragmatically based on ‘transactions’ that deal with the buying, selling, trading or sharing of assets and values, and that can be chronologically organised, structured and recorded.[5] The value placed on any and all of these digital transactions serves quite ‘naturally’ to develop into a ‘virtual economy’ (Castronova 2006) on local and/or global scales. BC tech thus offers a new kind of societal pattern in which schools, universities, companies, associations, organisations, government ministries, non-governmental and other community groups may participate willingly, for the betterment of the broader niches within societies that they represent.

Will your future BCs and mine be made “of iron or gold, of thorns or flowers,” in all of the new varieties of human relationships and ‘societies’ that will develop along with them? Let us not be fearful of the unknown, but rather prepare to meet it with both our minds and hearts working ahead of time. In short, this paper, with its McLuhanite focus, is inspired by the question of what the ‘effects’ of BC tech will be when applied across a wide range of social and economic phenomena.[6]

Figure 2. Humanity in the Blockchain (source unknown)

Section 3: Whose Ledgers Do You Share In Today? Whose Will You Share In Tomorrow?

Let us assume that for many people this is an early reading on the topic of BC. Those already familiar with BC tech can skip to section 5. The following definition of BC by Professor Jeremy Clark offers a good start: BC is “a place [ledger] for storing data that is maintained by a network of nodes without anyone in charge.”[7] As can be seen immediately, the significant social implications[8] involve who will be in charge of what and who must ‘take orders’ from those in charge in any given LC. Is order-giving all digitally automated and if so, what might this mean for training in leadership and followership?

On the business side and in the science-technology world of ‘visioneering’ (Cabrera, Davis and Orozco, 2016) ideas, we can easily find a wide variety of definitions of BC,[9] some of which call it a “second generation of the internet,”[10] question whether it is “the most important IT invention of our age,” hype it into a ‘single source of shared truth,’ or suggest: “We can always trust the blockchain.”[11] Such statements are bound to sound an alarm for sociologists. In this case, economic sociologists are urgently needed for analysis and policy considerations as we prepare to live in BC societies and economies.[12]

It was a few weeks ago (April 2017) during a call with a sociologist friend, when I shared with her my view on the importance of BC tech for sociology. I said that BC is likely to bring about the single biggest revolution in the history of the field. It was as astonishing to have those words come out of my mouth as it was for her to hear them.[13] Nevertheless, I stand by this assertion even without daring many predictions about what this incoming ‘neo-sociology’ will be like. Simply put, BCs are going to significantly impact the way people almost everywhere on Earth live, act and behave in so-called ‘smart communities,’ and thus also how scholars and scientists are able to study the societies and economies in which we live.

Let us turn away from any type of promotional hype to the qualified reflexivity of the academic tongue, which according to Clark reminds us that “[b]lockchains themselves aren’t a game changer.” Likewise, as HyperLedger executive director Brian Behlendorf cautions, “[t]here are over-inflating expectations [regarding BC] right now,” though along with others[14] he does view it as potentially ‘game-changing’.[15] So what is the promise of BC and BC tech, and how can it be applied to people and societies and used globally?

Figure 3. Varieties of Ledgers (source unknown)

Section 4: Distributed Ledger for Consensus

First, let’s start with what we know fairly clearly about BC tech.[16] BC tech makes use of a digital ‘distributed ledger’ (DL) system. This is a collective (communal) bookkeeping or accounting system recording and copying a history of transaction events in each BC. The specific kinds of transaction that may be suitable for such distributed ledgers in on-line and mobile communities are still open to much discussion and debate.[17] The data from BC transactions is codified for application in the various algorithms run by the social machine and cryptographised to enable various levels of user anonymity and thus greater freedom of participation. Nevertheless, issues of transparency and access to any given LC’s recorded history and membership are still unresolved and will inevitably continue to challenge the conversations arising over BC tech. In short, BC tech offers a ‘cryptographically secured DL’[18] that provides people in voluntary LCs a new way of engaging in value-oriented relationships with the aim of making more efficient and equitable exchanges of value.

The DL serves as a kind of organised but decentralised electronic data storage, arranged as an informational bulletin board accessible to all members of the LC. Users of a BC tech service volunteer to join a LC, wherein all participants share ‘one book’ with distributed access across the system. This is intended to create a kind of ‘immutable’ social history that cannot be corrupted by after-the-fact editing of events, because any edit leaves a noticeable trail that can be tracked to the source of the corruption (see image below for why one can’t cheat BC tech). The DL system thus aims to provide a so-called ‘golden record’ of ‘end-to-end’ (E2E) verifiability, wherein validation of transactions is accepted as completed and irreversible by all members of a LC.
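That ‘noticeable trail’ can be illustrated with a toy verifier in Python, assuming only hash-linked blocks (the function names are mine, for illustration; real ledgers add consensus and cryptographic signatures on top of this):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (excluding its stored hash field)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def verify_chain(chain):
    """Return True only if every block's stored hash matches its contents
    and every block points at its predecessor's hash. Any after-the-fact
    edit breaks one of these checks."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # contents were edited
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to history broken
    return True

# Build a tiny two-block ledger, then tamper with its history.
b0 = {"prev_hash": "0", "tx": ["genesis"]}
b0["hash"] = block_hash(b0)
b1 = {"prev_hash": b0["hash"], "tx": ["alice -> bob: 5"]}
b1["hash"] = block_hash(b1)
ledger = [b0, b1]

ok_before = verify_chain(ledger)
b0["tx"] = ["genesis", "forged payment"]   # after-the-fact edit
ok_after = verify_chain(ledger)            # the edit leaves a detectable trail
```

In a distributed ledger every member holds a copy of this chain and can run the same check, which is why corruption of the shared history cannot pass unnoticed.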

In common parlance: you may not want everyone to know everything about you and your possessions, but you do want the people you choose to know enough, in confidence, and to know whether they are willing to share, trade or sell ‘in-kind’ knowledge, experience, ideas, votes or ratings with you. BC tech will be made to deliver this. The technology is meant to lead you to, and to facilitate, transactions based on shared values and aims in a LC. Nevertheless, you must have the creative personal interest and volunteer to join that LC in the first place.

The LC’s transactions (e.g. purchase, sale, trade, bid, negotiation, auction, vote, authorisation, recommendation, special pass, etc.) are completed and verified by the participants themselves and by any other participants in the transaction. A transaction is completed via a ‘digital signature’ that each person receives with their membership registration and uses to verify their participation in any and all transactions. Participants in a LC receive ‘signing keys’ for transactions, which include both public and private verification options. The history of these total transactions in a LC comprises a transactional database that confirms the values shared and distributed among participants.
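The sign-and-verify flow can be sketched as follows. Note the simplification: real blockchains use asymmetric signature schemes (such as ECDSA) with separate public and private keys, whereas this illustration uses an HMAC over a single secret key purely to show how a signature binds a participant to one exact transaction:

```python
import hmac
import hashlib

def sign(signing_key: bytes, transaction: str) -> str:
    """Produce a signature binding this key holder to this transaction.
    (Stand-in for a real asymmetric signature scheme like ECDSA.)"""
    return hmac.new(signing_key, transaction.encode(), hashlib.sha256).hexdigest()

def verify(signing_key: bytes, transaction: str, signature: str) -> bool:
    """Check that the signature matches this exact transaction text."""
    return hmac.compare_digest(sign(signing_key, transaction), signature)

alice_key = b"alice-signing-key"   # hypothetical key issued at membership registration
tx = "alice transfers asset #42 to bob"
sig = sign(alice_key, tx)

genuine = verify(alice_key, tx, sig)                  # matches: True
tampered = verify(alice_key, tx + " and #43", sig)    # altered transaction: False
```

Any change to the transaction text invalidates the signature, which is what lets other LC members verify participation without trusting a central record-keeper.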

BC tech uses time-stamp validation that records the time, date and participants of all transactions in the LC. This enables participants in a LC to move various types of digital value (synthetic assets) using a peer-to-peer (P2P) network. The architecture of the P2P network is built around ‘consensus algorithms’ that facilitate the transactions in a LC linked to whatever ‘real life’ institutions, assets and values are involved, including all of the associated services already used by participants in a de-centralised network. The ‘service centre’ in such a network becomes an oxymoron, while service agents will still populate any LC with good service to its members in mind.

The notion that a ‘consensus’ can be reached in a community based on algorithms (i.e. mechanistically) is one of the most contentious issues in a BC society, and also one of the most fascinating, as it could lead to many new social and economic (niche) communities and configurations. If one doesn’t agree to the rules and regulations of a particular LC, then one simply will not (and even cannot!) join it, and thus cannot be forced into accepting any consensus from that community, at least in principle. Instead, competing BCs may temporarily arise based on different sets of rules and regulations, which individuals and groups of people will be able to join and want to join, indeed, will feel compelled to join because many people they know will join for mutual benefit. Thus, the spectre of globally widespread BCs must eventually gain the attention of any serious BC sociologist.
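What ‘consensus based on algorithms’ means mechanistically can be shown with a deliberately crude sketch. Real consensus algorithms (proof-of-work, Byzantine-fault-tolerant protocols, etc.) are far more involved; this toy majority rule, with names of my own choosing, shows only the bare idea of nodes agreeing on one shared history without anyone in charge:

```python
from collections import Counter

def reach_consensus(proposals, quorum=0.5):
    """Toy consensus: each node proposes the ledger history it believes
    is valid; a history backed by strictly more than `quorum` of the
    nodes wins, otherwise no consensus is reached."""
    tally = Counter(proposals)
    candidate, votes = tally.most_common(1)[0]
    if votes / len(proposals) > quorum:
        return candidate
    return None  # the network remains split

# Three of four nodes agree on history A, so A becomes the consensus.
agreed = reach_consensus(["history-A", "history-A", "history-A", "history-B"])

# An even split is not a majority, so no consensus emerges.
split = reach_consensus(["history-A", "history-B"])
```

Even this crude version makes the sociological point visible: ‘agreement’ here is an output of a rule applied to inputs, not a deliberative act, which is exactly what makes algorithmic consensus contentious.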

Section 5: Sustainable Development, Markets, Morals & Distributions

The largest area of application for BCs is what has become known as ‘sustainable development.’ BCs serve sustainable development goals by providing an opportunity to automate the proportional distribution of value contributions (i.e. human, natural, financial, cultural and other ‘resources’) to ‘projects’ built upon transactions that are guided, and thus in a sense ‘morally’ conditioned, based on the well-being of the LC. Anything with ‘value’ that can be bought, sold, traded or shared publicly counts as part of the domain that BC tech can tackle, with higher efficiency, justice and equitability in transactions than pre-BC systems. BC tech thus redefines what a ‘market economy’ means, with higher sustainability and improved proportionality at its core, because of its new system for measuring, owning and allotting value in social economics, such that value can be redistributed quickly within a system. Since one of the primary ideologies governing public and private policy in Canada (where the author is writing from) and also globally is that of ‘sustainable development’ (contrast: ‘millennial evolution’), it has become ingrained in the notion of BC sociology and economics from the start.

Thus BC tech in one sense offers a turn of focus away from N. Machiavelli’s view of the ‘state,’ or what we now call the ‘nation-state,’ toward a new kind of community attitude or ‘social epistemology,’ specifically voiced within a LC, which differs radically from the outdated notions of ‘communism’ from the 20th century that dwindle yet continue to the current day. On the level of political economy, BC provides an alternative to Machiavellian (western autocratic) individualism by definition, in enabling a more ‘trusting’ attitude in communal or group situations that highlight value transactions in those communities. Thus the new level of BC ‘community morality’ will become a kind of boundary wall for inclusion in or exclusion from a LC, wherein, by the will of each BC’s voluntary rules, the simple policy arises that ‘dictators are not allowed.’ This move pushes actively into a sociological void, against strong-arm ‘hard power’ tactics in negotiations and politics, without necessarily diminishing the creative freedom and moral credibility of active individuals in a LC.

When a BC is built, established and fuelled with members, it then must guarantee at least a minimal level of anonymity (as volunteered when joining the LC and agreeing to the rules and regulations of the Genesis Block; see Part II) to enable continued participation. The flip-side is that BC requires maximum transparency, according to the preferences and automations that people choose in their respective LCs. Thus, the architecture of any BC and the rules and regulations hard-coded into the Genesis Block are of crucial significance to the success or failure of any LC. Likewise, if the moderation and service provision for a LC is not maintained appropriately, then it will not generate ‘stickiness’ or lead to growth, but rather to lack of commitment and decline in usage. Thus in part we can answer the questions stated above in Section 3.

Section 6: Blockchain Applications

I hesitate to open this topic much because it is currently filled with both speculation and also practical experimentation. We are witnessing the emergence of BC tech on a global scale and the applications grow on an almost daily basis, so I see little point in categorising them now. The Blockchain Research Institute in Toronto is currently gathering the largest index of BC ‘use cases’ scheduled for completion by autumn 2018. The Delft University of Technology has a BC laboratory,[19] as do many other major universities. Governments have been experimenting with and discussing BC tech as well as a variety of new tech identifiers in business and finance (see references).

FinTech has largely focussed on developing cybercurrencies (Bitcoin, Ripple, etc.) for a variety of practical reasons, e.g. to limit ‘double-spending’ or to monitor debt repayment cases more closely. Another aim of some proponents of BC in FinTech comes from the effort, on behalf of citizen-consumers, to eliminate overseers in financial transactions and thus to create a new kind of money ‘distribution.’ With companies like Revolut and PayPal already working in the transfer of funds, BC tech looks to expand the reach of automated financial transactions throughout society as social machines start to replace unneeded human interventions in the system. Example areas where work is already being done to integrate BCs include private stock trading, letters of credit, crowdfunding, interbank loans, grants and many more.

The extremely high social value of ‘smart contracts’ (Szabo, Ethereum, more below) will combine with increased exposure to public voting in governance, social organisation and sustainable development. This may lead to more transparent government accountability and a fundamentally different way of determining public service election results and sustainable development policy implementation. Some optimism has been expressed that BC tech will lead to reduced voter fraud, yet the solutions posed also come with increased mass-hacking concerns. The BC feature of multi-party live or delayed computation enables real-time voting updates on bulletin boards[20] and also tentative voting (taking the social temperature first), which allows a person to change their vote before an election (e.g. based on an expected or adjusted chance-to-win). Little more needs to be said to provoke interest and controversy; once BC tech and democracy are mentioned in the same sentence, fireworks often ensue.
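The tentative-voting idea, that is, a running public tally where each voter’s latest choice supersedes earlier ones until the election closes, can be sketched as follows (a plain illustration of the mechanism, not any deployed voting system; real BC voting would add the signatures, anonymity and time-stamping discussed above):

```python
def tally(votes: dict) -> dict:
    """Count the current (latest) vote per voter, per candidate."""
    counts = {}
    for choice in votes.values():
        counts[choice] = counts.get(choice, 0) + 1
    return counts

# Each voter's newest entry overwrites earlier ones, so votes stay
# tentative until the election closes, while the running tally is
# visible on the bulletin board the whole time.
votes = {}
votes["voter-1"] = "A"
votes["voter-2"] = "B"
votes["voter-3"] = "A"
interim = tally(votes)      # the 'social temperature' before closing

votes["voter-3"] = "B"      # voter-3 revises their tentative vote
final = tally(votes)        # the tally after the change
```

The controversy writes itself: the same transparency that makes the tally auditable also lets voters adjust strategically to the running result.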

BC tech has so many potential impacts in areas such as bargaining, auctions and estates, vehicle safety records, legal cost-sharing, real estate, mortgages, securities, lotteries, etc., that it is somewhat daunting to conceive how all of this is going to come about even slowly, let alone within the next decade as some proponents are suggesting. BC tech can thus be seen, in its early formulations, as mathematical Nash equilibrium on humanistic socio-economic steroids.

Figure 4. Traditional and Blockchain Networks (source unknown)

Section 7: Why? So What? Who Cares?

The reorganisation of society that this technology has the potential to enable, and indeed in some ways to require, is simply beyond significant; BC is the massive shift that people have been waiting for since the internet came, and even before it. The wide-ranging implications[21] of BC tech should not be discussed, however, without considerable caution about how it will influence the notion of ‘neo-liberal democracy’ as the reigning dominant ideology in the ‘western world.’ Even talk of ‘innovating capitalism’ (Bheemaiah 2017) is bound to raise some people’s ire. Leftist-leaning political thinkers in particular ought to have started drooling as soon as they heard the word ‘distributed,’ because there is a quick path to ‘redistribution’ wherever it substitutes for ‘competition’ in the social economics literature. Yet rightist-leaning political thinkers have just as much to drool about, as BC tech empowers people to ‘become their own currency’ and in that sense to ‘live entrepreneurially’ through voluntary participation in LCs.

When one ramps up one’s concern with the political economy of BC societies, one eventually realises that, by using cryptography, the new cryptocurrencies undermine the future possibility for governments to control the money supply of their citizens. By creating an alternative to fiat money, cryptocurrencies may change the world by themselves, even without BC tech. Yet it is BC tech that facilitates the ‘virtual economies’ growing in a sustainable way. BC tech thus makes it possible for cryptocurrencies to arrive and flourish as actual, widely used currencies, which is what LCs are seeking in the use of their own community-based cryptocurrencies. This feature alone suggests the possibility of changing power structures in societies, away from central banks and financiers and towards decentralised voluntary communities of value. The sociological implications of this are, not surprisingly, incredibly difficult to predict, and indeed require some kind of ‘new sociological imagination’ (Fuller 2006) beyond what is currently available.

There still seems to be something significant missing in the BC ecosystem, however, rather than something crucial broken that can be fixed. Optimism in the BC ecosystem now rides on a big wave, and people want to know more about how to get involved in BC tech as builders, facilitators, investors, coders, surveyors, etc. What has been missing so far in most BC journalism, and even in most of the academic/scholarly contributions, is broader exploration of the application and relevance of BC tech to societies, their laws, economies and cultures. We may nevertheless start to investigate the growth of BC societies by tracing the rise of codified transaction infrastructures that thus create a new ‘web’ of LCs.

The proliferation of LCs will create the first example of societies based widely on ‘smart contracts,’ which have become a symbol of justice-seeking, anti-exploitative, democratic economic transactions that are automatically enforced by LC rules and regulations. Already in the mid-1990s, Nick Szabo defined a ‘smart contract’ as “a computerized transaction protocol that executes the terms of a contract” (1994; see Tapscott and Tapscott, 2016). He calls a smart contract “a set of promises, specified in digital form, including protocols within which the parties perform on these promises” (Szabo 1996). One of the applications of such contracts is to establish an ownership history of an asset, so that potential new buyers of that asset can see its production genesis and value history. Another is to assist in supply chain management, in order to improve efficiency and traceability and thus to reduce delays and errors. By distributing data from voluntary transactions across the system, the social machine improves the efficiency of the human-driven system in a non-centralised way.
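Szabo’s definition, a transaction protocol that executes the terms of a contract, can be illustrated with a toy ownership-history contract. The class and its names are my own invention for this sketch (real smart-contract platforms such as Ethereum run such logic on-chain, enforced by the whole network rather than by a single Python object):

```python
class OwnershipContract:
    """Toy 'smart contract': a transfer protocol that executes only
    when its terms are met, and records the full ownership history
    of an asset from its production genesis onward."""

    def __init__(self, asset: str, creator: str, price: int):
        self.asset = asset
        self.price = price
        self.history = [creator]      # provenance starts at the creator

    def transfer(self, buyer: str, payment: int) -> bool:
        # The contract enforces its own terms: insufficient payment,
        # no transfer, no appeal to a third-party enforcer.
        if payment < self.price:
            return False
        self.history.append(buyer)
        return True

contract = OwnershipContract("painting-17", creator="studio", price=100)
contract.transfer("alice", payment=100)    # terms met: ownership passes
contract.transfer("bob", payment=50)       # terms not met: refused
provenance = contract.history              # the asset's value history
```

A prospective buyer reading `provenance` sees exactly the ownership chain the paragraph describes, with enforcement built into the protocol rather than delegated to courts or clerks.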

The over-inflated view that constant decentralisation is needed is often linked to promises of lower costs for ‘consumers’ through the elimination of many artificial and unnecessary third-party (middleman) fees. This includes the siphoning of company funds into private hands, sometimes illegally, at the cost of the industry or community, a practice LCs will aim to eliminate. Instead of ‘middlemen,’ BC tech requires many more ‘middle people,’ in the sense of ‘mediators’ and ‘introducers’ who are not necessarily salespersons. The position of a professional ‘BC introducer’ seems to be a functional economic role in the era of BC tech and LCs, while a percentage of the transaction costs facilitated by the LC will determine the salaries and risks of its new financial facilitators.

BC tech will also soon be brought to the forefront of global news when Palantir’s work on ‘Investigative Case Management’ (aka surveillance) for the USA’s Department of Homeland Security reaches the end of its first stage in autumn 2017. When that happens, a lot of people are going to start caring about BC as it is applied to a hot-button political issue in the USA. Technology entrepreneur Peter Thiel has been integrating BC into current surveillance methods and practices, which promises to fundamentally reshape the USA’s immigration and deportation programs. (More on this to come in Blockchain Sociology: Part II.)

Section 8: Preliminary Conclusion and Invitation

Why would a sociologist or social epistemologist take an interest in BC tech? The short answer is that those fields need experiments to validate their often highly theoretical and abstract ideas, i.e. to put them to the test. Digitally recorded transactions in a ‘blockchain’ can provide a broader domain for research and experimentation than anything previously offered in the history of those fields, thanks to the current power of computing. The basic question is which sociologists and social epistemologists will be early adopters of the technology and which will be laggards; there is no longer any question of ‘if’ with regard to eventual adoption, only of ‘when.’

We see such a shift already with the massive growth of the video game industry, where the habits and choices of players can be studied throughout their time ‘plugged in.’ The same will be the case with LCs, because people will be attracted to participate in them as they feed participants’ existing interests and thus provide an incentive to engage with like-minded or similarly inclined people. We thus have a potentially prolific new technological resource, the early phase of a ‘social machine’ that we can use for research and development of the BC idea. Our challenge in SSH is how to humanise this technology, so that we do not lose more from humanity than we gain by it (Postman’s dictum).

A major feature of BC tech is trust: if you feel you can trust a community (on-line or off-line) to hold your best interests and personhood in mind on any given topic or activity, then it tends to be easier to ‘deal’ with it, including engaging with others in transactions of value. BC tech enables this in a new, quasi-manufactured way through pre-agreed smart contracts that provide greater accountability, transparency and anonymity. Based on voluntary value transactions that are automated using agreement algorithms, BC tech has been suggested as a way to produce ‘upfront compliancy,’ which can protect parties from various risks involving other members of the LC. These include agreement violations, where BC tech can pinpoint the source of a broken contract or shady deal within a chain of otherwise difficult-to-trace transactions. In short, in agreeing to join an LC, one agrees to abide by its rules, which must come with in-house punishment for (attempted) violators or offenders; this is the cost of automated convenience that serves our decision-making capacities.
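The claim that BC tech can ‘pinpoint the source of a broken contract’ rests on hash chaining: each recorded transaction commits cryptographically to everything before it, so a retroactive edit breaks the chain exactly at the altered record. A minimal sketch in Python (illustrative only; the function names are mine, and real blockchains add consensus, signatures and block structure on top):

```python
import hashlib

def link(prev_hash: str, record: str) -> str:
    """Each entry's hash commits to the previous one, chaining the ledger."""
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

def build_chain(records):
    """Publish one running hash per transaction record."""
    hashes, prev = [], "genesis"
    for record in records:
        prev = link(prev, record)
        hashes.append(prev)
    return hashes

def find_tampering(records, hashes):
    """Recompute the chain; return the index of the first broken link, if any."""
    prev = "genesis"
    for i, record in enumerate(records):
        prev = link(prev, record)
        if prev != hashes[i]:
            return i  # this record no longer matches what was agreed
    return None

records = ["alice->bob:10", "bob->carol:5", "carol->dan:2"]
hashes = build_chain(records)
assert find_tampering(records, hashes) is None  # chain intact

records[1] = "bob->carol:500"  # a past deal is quietly rewritten
print(find_tampering(records, hashes))  # prints: 1, the broken link is pinpointed
```

Because every participant can recompute the hashes, no central actor is needed to spot the violation; this is the mechanical basis of the ‘upfront’ trust described above.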

Figure 5. Why You Can’t Cheat at Bitcoin (source unknown)

Already under the new US government regime we are seeing a debate over the ‘unmasking’ of citizens by politicians and national or international ‘intelligence’ agencies. And soon we will witness a major overhaul in migration and policing matters related to immigrants with the help of BC tech. We cannot therefore endorse BC as simply roses without thorns, a consideration that figures largely in the writing of this short paper on BC tech and sociology. While anonymising citizens for their own protection could benefit some who feel isolated, marginalised or disempowered in the current social systems they live in, it could also potentially become a weapon of control over members within LCs, or over entire LCs themselves, if the wrong Genesis Block framework is patterned into the coding. It could thus lead to dehumanising people who, for whatever variety of reasons, are unable to choose their own preferred pattern of anonymity due to internal or external LC pressures, and who thus become ‘orphans’ in the new BC society.

Is BC, as tech entrepreneur and BC proponent Alex Tapscott suggests, “something that could change basically every industry in the world”?[22] The Government of Canada’s Department of Innovation, Science and Economic Development funded the research for the Tapscotts’ recent 2017 BC Corridor report. They are currently raising the flag for major short-term transformation based on BC tech and are moving to back up and stabilise this claim with research and development projects, calling for research into use cases and the drawing up of white papers. This leads me to believe BC is on the cutting edge of public-private partnerships, flexibly scalable networks and sustainable developments as we learn about the impact of BC tech on society, economics and culture as the 21st century moves forward.


Berners-Lee, Tim with Mark Fischetti. Weaving the Web. New York: HarperCollins, 1999.

Bheemaiah, Kariappa. The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory. Paris: Apress, 2017.

Blockchain Research Institute. “Blockchain Transformations.” The Tapscott Group, 2017.

Buterin, Vitalik. “Ethereum. White Paper: A Next-Generation Smart Contract and Decentralized Application Platform.” Ethereum, 2014.

Cabrera, Laura, William Davis, Melissa Orozco. “Visioneering Our Future.” In The Future of Social Epistemology: A Collective Vision, edited by James Collier, 199-218. London: Rowman & Littlefield, 2016.

Castronova, Edward. “Synthetic Economies and the Social Question.” First Monday, Special Issue no. 7, 2006.

Castronova, Edward. “Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier.” In The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen Tekinbaş and Eric Zimmerman, 814–863. Cambridge, MA: MIT Press, 2006a.

Castronova, Edward. “On Virtual Economies.” Game Studies 3, no. 2 (2003).

Clark, Jeremy. “Blockchain Based Voting: Potential and Limitations.” MIT talk, 2016.

De Filippi, Primavera. “What Blockchain Means for the Sharing Economy.” 2017.

del Castillo, Michael. “The IMF Just Finished its First ‘High Level’ Meeting on Blockchain.” CoinDesk, April 19, 2017.

Fuller, Steve. The New Sociological Imagination. London: Sage, 2006.

Gates Foundation. “Level One Project: Designing a New System for Financial Inclusion.” 2017.

Iansiti, Marco and Karim R. Lakhani. “The Truth about Blockchain.” Harvard Business Review, January–February (2017): 118–127.

Knight, Will. “The Technology Behind Bitcoin Is Shaking Up Much More Than Money.” MIT Technology Review, 2017.

Koven, Jackie Burns. “Block the Vote: Could Blockchain Technology Cybersecure Elections?” 2016.

Lehdonvirta, Vili and Edward Castronova. Virtual Economies: Design and Analysis. Cambridge, MA: MIT Press, 2014.

Lamport, Leslie, Robert Shostak and Marshall Pease. “The Byzantine Generals Problem.” ACM Transactions on Programming Languages and Systems 4, no. 3 (1982): 382–401.

Nagpal, Rohas. “2020 AD – Planet Earth on a Blockchain.” 2017.

Nakamoto, Satoshi. “Bitcoin: A Peer-to-Peer Electronic Cash System.” ABC, 2009.

Naughton, John. “Is Blockchain the Most Important IT Invention of our Age?” 2016.

O’Byrne, W. Ian. “What is Blockchain?” 2016.

Orcutt, Mike. “Why Bitcoin Could Be Much More Than a Currency.” 2015.

Pilkington, Marc. “Blockchain Technology: Principles and Applications.” In Research Handbook on Digital Transformations, edited by F. Xavier Olleros and Majlinda Zhegu, 225-253. Cheltenham, UK: Edward Elgar, 2016.

Reijers, Wessel and Mark Coeckelbergh. “The Blockchain as a Narrative Technology: Investigating the Social Ontology and Normative Configurations of Cryptocurrencies.” Philosophy & Technology (2016): 1-28.

Simonite, Tom. “What Bitcoin Is, and Why It Matters.” 2011.

Smart, Paul R. and Nigel R. Shadbolt. “Social Machines.” ePrints Soton: University of Southampton, 2014.

Stagars, Manuel. “Blockchain and Us.” 2017.

Swan, Melanie. Blockchain: Blueprint for a New Economy. Sebastopol, CA: O’Reilly, 2015.

Szabo, Nick. “Smart Contracts: Building Blocks for Digital Markets.” 1996.

Tapscott, Alex. “Blockchain is a Disruption We Simply Have to Embrace.” The Globe and Mail, May 9, 2016.

Tapscott, Don and Alex Tapscott. Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business and the World. New York: Penguin, 2016.

Tapscott, Don and Alex Tapscott. “The Blockchain Corridor: Building an Innovation Economy in the 2nd Era of the Internet.” The Tapscott Group, 2017.

UK Government Chief Scientific Adviser. “Distributed Ledger Technology: Beyond Block Chain.” Government Office for Science, 2016.

Willms, Jessie. “Don Tapscott Announces International Blockchain Research Institute.” 2017.

[1] First launched under the brand ‘Bitcoin’ by the unknown person ‘Satoshi Nakamoto’ on 3 January 2009.


[3] Berners-Lee writes of “interconnected groups of people acting as if they shared a larger intuitive brain,” defining social machines on the internet as “processes in which the people do the creative work and the machine does the administration.” (1999) Smart et al. provide an updated version: “Social Machines are Web-based socio-technical systems in which the human and technological elements play the role of participant machinery with respect to the mechanistic realisation of system level processes” (2013).

[4] A rather big assumption that deals with security, identity, justice, and other controversial issues.

[5] “If the Internet was the first native digital format for information, then blockchain is the first native digital format for value—a new medium for money. It acts as ledger of accounts, database, notary, sentry and clearing house, all by consensus. And it holds the potential to make financial markets radically more efficient, secure, inclusive and transparent.”—Alex Tapscott

[6] “Blockchain is a foundational technology: It has the potential to create new foundations for our economic and social systems” (Iansiti and Lakhani, 2017).

[7] Jeremy Clark, Concordia University, MIT talk, 2016.

[8] “[Blockchain] is a very important, new technology that could have implications for the way in which transactions are handled throughout the financial system.”—Janet Yellen (USA Federal Reserve Chairwoman).

[9] “A blockchain is a write-only database dispersed over a network of interconnected computers that uses cryptography (the computerized encoding and decoding of information) to create a tamperproof public record of transactions. Blockchain technology is transparent, secure and decentralised, meaning no central actor can alter the public record. In addition, financial transactions carried out on blockchains are cheaper and faster than those performed by traditional financial institutions.”—Government of Canada (“Blockchain Technology Brief,” 2016, page 1). / “A blockchain is a peer-to-peer distributed ledger forged by consensus, combined with a system for ‘smart contracts’ and other assistive technologies. Together these can be used to build a new generation of transactional applications that establishes trust, accountability and transparency at their core, while streamlining business processes and legal constraints.” / “A blockchain is a decentralised, online record-keeping system, or ledger, maintained by a network of computers that verify and record transactions using established cryptographic techniques.”—Mike Orcutt (“Congress Takes Blockchain 101”). / “A blockchain is a type of distributed ledger, comprised of unchangeable, digitally recorded data in packages called blocks (rather like collating them on to a single sheet of paper). Each block is then ‘chained’ to the next block, using a cryptographic signature. This allows block chains to be used like a ledger, which can be shared and accessed by anyone with the appropriate permissions.” / Blockchain is “a magic computer that anyone can upload programs to and leave the programs to self-execute, where the current and all previous states of every program are always publicly visible, and which carries a very strong cryptoeconomically secured guarantee that programs running on the chain will continue to execute in exactly the way that the blockchain protocol specifies.”—Vitalik Buterin (“Visions, Part 1: The Value of Blockchain Technology,” Ethereum Blog).

[10] Don & Alex Tapscott 2017, page 4.


[12] Most of the literature so far focuses on BC economies and I am not aware of any papers so far written about BC societies.

[13] As soon as I mentioned that Peter Thiel’s Palantir is building a BC for the USA’s Department of Homeland Security (more on this in Part II), she realised the seriousness of the endeavour.

[14] “Blockchain [technology] has the potential to address some of the world’s most pressing challenges.”— Ross Mauri, IBM, quoted in Jessie Willms, “Don Tapscott Announces International Blockchain Research Institute”, Bitcoin Magazine, March 17, 2017.

[15] “There are plenty of reasons to be skeptical, and there’s way too much hype,” [Brian Behlendorf] said. “But it’s a real opportunity to change the rules of the game.” Brian Behlendorf quoted in Will Knight, “The Technology Behind Bitcoin Is Shaking Up Much More Than Money.” MIT Technology Review, 2017.

[16] “The term ‘blockchain technology’ means distributed ledger technology that uses a consensus of replicated, shared, and synchronized digital data that is geographically spread across multiple digital systems.”


[18] HT: Vytautas Kaseta (Private conversation, Vilnius, March 2017).


[20] “All cryptographic voting systems use a ‘bulletin board:’ an append-only broadcast channel (sometimes anonymous) … Blockchains are the best bulletin boards we have ever seen, better than purpose-built ones (esp. on equivocation).”—Jeremy Clark (“Blockchain based Voting: Potential and Limitations,” MIT talk, 2016).

[21] “Anything that would benefit from having information stored in an unchangeable database that is not owned or controlled by any single entity and yet is accessible from anywhere at any time.”—Mike Orcutt.


Author Information: Ben Ross, University of North Texas,

Ross, Ben. “Between Poison and Remedy: Transhumanism as Pharmakon.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 23-26.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:

Image credit: Jennifer Boyer, via flickr

As a Millennial, I have the luxury of being able to ask in all seriousness, “Will I be the first generation safe from death by old age?” While the prospects of answering in the affirmative may be dim, they are not preposterous. The idea that such a question can even be asked with sincerity, however, testifies to transhumanism’s reach into the cultural imagination.

But what is transhumanism? Until now, we have failed to answer in the appropriate way, remaining content to describe its possible technological manifestations or trace its historical development. Therefore, I would like to propose an ontology of transhumanism. When philosophers speak of ontologies, they are asking a basic question about the being of a thing—what is its essence? I suggest that transhumanism is best understood as a pharmakon.

Transhumanism as a Pharmakon

Derrida points out in his essay “Plato’s Pharmacy” that while pharmakon can be translated as “drug,” it means both “remedy” and “poison.” It is an ambiguous in-between, containing opposite definitions that can both be true depending on the context. As Michael Rinella notes, hemlock, most famous for being the poison that killed Socrates, when taken in smaller doses induces “delirium and excitement on the one hand,” yet it can be “a powerful sedative on the other” (160). Rinella also goes on to say that there are more than two meanings to the term. While the word was used to denote a drug, Plato “used pharmakon to mean a host of other things, such as pictorial color, painter’s pigment, cosmetic application, perfume, magical talisman, and recreational intoxicant.” Nevertheless, Rinella makes the crucial remark that “One pharmakon might be prescribed as a remedy for another pharmakon, in an attempt to restore to its previous state an identity effaced when intoxicant turned toxic” (237-238). It is precisely this “two-in-one” aspect of the application of a pharmakon that reveals it to be the essence of transhumanism; it can be both poison and remedy.

To further this analysis, consider “super longevity,” which is the subset of transhumanism concerned with avoiding death. As Harari writes in Homo Deus, “Modern science and modern culture…don’t think of death as a metaphysical mystery…for modern people death is a technical problem that we can and should solve.” After all, he declares, “Humans always die due to some technical glitch” (22). These technical glitches, e.g. when one’s heart ceases to pump blood, are the bane of researchers like Aubrey de Grey, and fixing them forms the focus of his “Strategies for Engineered Negligible Senescence.” There is nothing in de Grey’s approach to suggest that there is any human technical problem that does not potentially have a human technical solution. De Grey’s techno-optimism represents the “remedy-aspect” of transhumanism as a view in which any problems—even those caused by technology—can be solved by technology.

As a “remedy,” transhumanism is based on a faith in technological progress, despite such progress being uneven, with beneficial effects that are not immediately apparent. For example, even if de Grey’s research does not result in the “cure” for death, his insight into anti-aging techniques and the resulting applications still have the potential to improve a person’s quality of life. This reflects Max More’s definition of transhumanism as “The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities” (3).

Importantly, More’s definition emphasizes transcendent enhancement, and it is this desire to be “upgraded” which distinguishes transhumanism. An illustration of the emergence of the upgrade mentality can be seen in the history of plastic surgery. Harari writes that while modern plastic surgery was born during the First World War as a treatment to repair facial injuries, upon the war’s end, surgeons found that the same techniques could be applied not to damaged noses, but to “ugly” ones, and “though plastic surgery continued to help the sick and wounded…it devoted increasing attention to upgrading the healthy” (52). Through its secondary use as an elective surgery of enhancement rather than exclusively as a technique for healing, one can see an example of the evolution of transhumanist philosophy out of medical philosophy—if the technology exists to change one’s face (and one has the money for it), a person should be morphologically free to take advantage of the enhancing capabilities of such a procedure.

However, to take a view of a person only as “waiting to be upgraded” marks the genesis of the “poison-aspect” of transhumanism as a pharmakon. One need not look farther than Martin Heidegger to find an account of this danger. In his 1954 essay, “The Question Concerning Technology,” Heidegger suggests that the threat of technology is ge-stell, or “enframing,” the way in which technology reveals the world to us primarily as a stock of resources to be manipulated. For him, the “threat” is not a technical problem for which there is a technical solution, but rather it is an ontological condition from which we can be saved—a condition which prevents us from seeing the world in any other way. Transhumanism in its “poison mode,” then, is the technological understanding of being—a singular way of viewing the world as a resource waiting to be enhanced. And what is problematic is that this way of revealing the world comes to dominate all others. In other words, the technological understanding of being comes to be the understanding of being.

However, a careful reading of Heidegger’s essay suggests that it is not a techno-pessimist’s manifesto. Technology has pearls concealed within its perils. Heidegger suggests as much when he quotes Hölderlin, “But where danger is, grows the saving power also” (333). Heidegger is asking the reader to avoid either/or dichotomous thinking about the essence of technology as something that is either dangerous or helpful, and instead to see it as a two-in-one. He goes to great lengths to point out that the “saving power” of technology, which is to say, of transhumanism, is that its essence is ambiguous—it is a pharmakon. Thus, the self-same instrumentalization that threatens to narrow our understanding of being also has the power to save us and force a consideration of new ways of being, and most importantly for Heidegger, new meanings of being.

Curing Death?

A transhumanist, and therefore pharmacological, take on Heidegger’s admonishment might be something as follows: In the future it is possible that a “cure” for death will threaten what we now know as death as a source of meaning in society—especially as it relates to a Christian heaven in which one yearns to spend an eternity, sans mortal coil. While the arrival of a death-cure will prove to be “poison” for a traditional understanding of Christianity, that same techno-humanistic artifact will simultaneously function as a “remedy,” spurring a Nietzschean transvaluation of values—that is, such a “cure” will arrive as a technological Zarathustra, forcing a confrontation with meaning, bringing news that “the human being is something that must be overcome” and urging us to ask anew, “what have you done to overcome him?” At the very least, as Steve Fuller recently pointed out in an interview, “transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection.” For those sympathetic to Leon Kass’ brand of repugnance, such suggestions are poison, and yet for a transhumanist such suggestions are a remedy to the glitch called death and the ways in which we relate to our finitude.

A more mundane example of the simultaneous danger and saving power of technology might be the much-hyped Google Glass—or in more transhuman terms, having Google Glass implanted into one’s eye sockets. While this procedure may conceal ways of understanding the spaces and people surrounding the wearer other than through the medium of the lenses, the lenses simultaneously have the power to reveal entirely new layers of information about the world and connect the wearer to the environment and to others in new ways.

With these examples it is perhaps becoming clear that by re-casting the essence of transhumanism as a pharmakon instead of an either/or dichotomy of purely techno-optimistic panacea or purely techno-pessimistic miasma, a more inclusive picture of transhumanist ontology emerges. Transhumanism can be both—cause and cure, danger and savior, threat and opportunity. Max More’s analysis, too, has a pharmacological flavor in that transhumanism, though committed to improving the human condition, has no illusions that, “The same powerful technologies that can transform human nature for the better could also be used in ways that, intentionally or unintentionally, cause direct damage or more subtly undermine our lives” (4).

Perhaps, then, More might agree that as a pharmakon, transhumanism is a Schrödinger’s cat always in a state of superposition—both alive and dead in the box. In the Copenhagen interpretation, a system stops being in a superposition of states and becomes either one or the other when an observation takes place. Transhumanism, too, is observer-dependent. For Ray Kurzweil, looking in the box, the cat is always alive with the techno-optimistic possibility of download into silicon and the singularity is near. For Ted Kaczynski, the cat is always dead, and it is worth killing in order to prevent its resurrection. Therefore, what the foregoing analysis suggests is that transhumanism is a drug—it is both remedy and poison—with the power to cure or the power to kill depending on who takes it. If the essence of transhumanism is elusive, it is precisely because it is a pharmakon cutting across categories ordinarily seen as mutually exclusive, forcing an ontological quest to conceptualize the in-between.


Derrida, Jacques. “Plato’s Pharmacy.” In Dissemination, translated by Barbara Johnson, 63-171. Chicago: University of Chicago Press, 1981.

Fuller, Steve. “Twelve Questions on Transhumanism’s Place in the Western Philosophical Tradition.” Social Epistemology Review and Reply Collective, 19 April 2017.

Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow. HarperCollins, 2017.

Heidegger, Martin. “The Question Concerning Technology.” In Basic Writings, edited by David Farrell Krell. Harper & Row, 1977.

More, Max. “The Philosophy of Transhumanism,” In The Transhumanist Reader, edited by Max More and Natasha Vita-More, 3-17. Malden, MA: Wiley-Blackwell, 2013.

Rinella, Michael A. Pharmakon: Plato, Drug Culture, and Identity in Ancient Athens. Lanham, MD: Lexington Books, 2010.

Author Information: Ilya Kasavin, Russian Academy of Science,

Kasavin, Ilya. “Why so Romantic and A Priori? A Reply to Bakhurst and Sismondo.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 20-22.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:

Image credit: Yakub Annanurov

It is my pleasure and privilege to respond to the critical comments on my paper provided by David Bakhurst and Sergio Sismondo (2017). These comments represent a clever combination of significant knowledge of both STS and Russian philosophy—a rare occurrence. Bakhurst and Sismondo help me realize that my style of discourse relies, perhaps, too much on tacit knowledge and shared opinions that should be articulated in order to serve if not as an additional argument then, at least, as an apology.

Toward a New Agenda

I am aware that the idea of searching for a new agenda in the philosophy of science and STS, one which appeals to the Russian tradition (even putting aside Russian religious philosophy, as I do), is an ambitious task and might look too bold. Yet, mainstream philosophy does pursue such ambitious agendas—consider John Stuart Mill, the Vienna Circle, Karl Popper, Willard Quine, or Thomas Kuhn—whose well-elaborated concepts are interpreted, reinterpreted, and developed by contemporary scholars. French historical epistemology and German Neo-Kantianism are much less popular. Surprisingly, the same is true of William Whewell, who launched the program of historically-oriented philosophy of science over one hundred years before Kuhn. Still, Whewell remains largely forgotten in the shadow of Mill, his liberal rival.

A similar lack of attention to the Russian tradition in the philosophy of science also makes it difficult to provide clear guidelines for extracting a kind of unified picture of science, or knowledge, from the works of Russian thinkers. Hence, my effort to compose a more or less unified pool of Russian scholars for my purpose might look implausible. And this moves Bakhurst and Sismondo to assert that “Russian cosmism, for example, is a million miles from Ilyenkov’s Marxism” (21). My counter-argument is as follows. Pantheism forms the common historical root of Russian cosmism as well as of Hegel, who inspired the version of Marxism elaborated by Ilyenkov. This is a crucial point for the “objective ideal forms” (Ilyenkov) and the “noosphere” (Vernadsky), which seem to be very close to one another. Also, cosmism and Marxism might both be reproached by someone like Popper, from the perspective of his gradual social engineering, for their faith in long-term social forecasting, which serves as a basis of every global project. It would be naïve to try to justify a theoretical unity of the Russian philosophical tradition through a thorough historical/philosophical analysis. Still, the Russian thinkers I mention share a holistic view of human knowledge that might well be dubbed “integral knowledge”.

Bakhurst and Sismondo are quite right in pointing out the origin of the “integral knowledge” concept in Ivan Kireevsky’s works. Nevertheless, I appeal to this concept in its later interpretation by Shpet, where it is released from any religious meaning. Following this interpretation, I propose an expanded concept of knowledge and a corresponding expansion of epistemological subject matter. According to the latter, every conscious phenomenon (perceptions, notions, beliefs, values, norms, ideals etc.) and, moreover, every cultural and social artifact has epistemic content. This notion leads beyond the limits of classical epistemology, which continues to define knowledge as justified true belief (in spite of the Gettier problems). I am sure that one needs an expanded concept of knowledge to deal with global projects (large technosocial units) within STS. Thus, appealing to “integral knowledge” is a normative rather than a descriptive stance; it is primarily a requirement of the current development of the “social philosophy of science” rather than an extraction from the history of (Russian or any other) thought.

On Case Studies

I share the critical evaluation of what Bakhurst and Sismondo call “whiggish accounts” (21) of science (the “armchair image” of science also applies), which are typical of some strands of analytical epistemology. The best representatives of the Russian philosophical tradition were proponents of a historical/sociological vision of science and also dealt with case studies (Boris Hessen). So, I have no doubt about case studies as a significant means for a philosophy of science seeking an empirical foundation. Moreover, there should be no barrier between philosophy, on the one side, and the history and sociology of science, on the other; such a boundary looks obsolete. Nevertheless, many case studies (perhaps it is better to call them “empirical studies”) have very little theoretical/philosophical outcome, or their outcome is trivial. (I won’t mention any names here, in order to avoid an unnecessary quarrel.) Even so, I am sure such cases can stimulate vivid interdisciplinary interaction, especially if philosophers get involved in their interpretation. Still, there are brilliant examples of a different kind: case studies that provide real theoretical progress and serve as the gold standard for STS research (works by Harry Collins, Steven Shapin, Karin Knorr-Cetina and Peter Galison, among others), which justify the constructivist and anti-cumulativist view of science. Perhaps the expanding community of STS empirical researchers should be more alive in practice to case studies that follow such standards.

As to my Karakum Canal research, which I conducted precisely as a standard case study, there was no room in the general article in Social Epistemology for the detailed historical and sociological evidence. I can refer here only to my paper,[1] where one finds more empirical evidence based on rare Russian sources on the history of the Karakum Canal and on in-depth interviews with specialists in hydrogeology and hydraulic engineering. Actually, such a huge artifact as the Karakum Canal can hardly, as a whole, be the subject matter of a case study, and accordingly most of my empirical evidence deals only with the first four years of its history. Moreover, Bakhurst and Sismondo might be correct in pointing out certain "romantic" and "a priori" elements in my attitude. These elements become more understandable in terms of the current discussions among Russian economists, who contrast a social-engineering approach (Alexej Kudrin, who supports financiers following Georgii Schedrovitsky's ideas) with a global project approach (Ruslan Grinberg, who acts in favor of "industrialists") in the search for a state strategy for economic growth. In this framework, the history of the Karakum Canal acquires a more normative than descriptive meaning in the Russian context, going beyond STS towards the social philosophy of science and technology. But this is the other side of the coin.


Bakhurst, David and Sergio Sismondo. “Commentary on Ilya Kasavin’s ‘Towards a Social Philosophy of Science: Russian Prospects’.” Social Epistemology Review and Reply Collective 6, no. 4 (2017): 20-23.

Kasavin, Ilya. “Towards a Social Philosophy of Science: Russian Prospects.” Social Epistemology 31, no. 1 (2017): 1-15.

Kasavin, Ilya. “Mega-Projects and Global Projects: Science Between Utopia and Technocracy.” Voprosy filosofii 9, (2015): 40-56 (in Russian).

[1] “Mega-Projects and Global Projects: Science Between Utopia and Technocracy.” Voprosy filosofii 9, (2015): 40-56 (in Russian).

Author Information: Justin Cruickshank, University of Birmingham,

Cruickshank, Justin. “Meritocracy and Reification.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 4-19.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:

Image credit: russell davies, via flickr


My article ‘Anti-Authority: Comparing Popper and Rorty on the Dialogic Development of Beliefs and Practice’ rejected the notion that Popper was a dogmatic liberal technocrat who fetishized the epistemic authority of science and the epistemic and ethical authority of free markets. Instead, it stressed how Popper sought to develop the recognition of fallibilism into a philosophy of dialogue where criticism replaced appeals to any form of dialogue-stopping source of epistemic and ethical authority. The debate that followed in the SERRC, and the book based on this (Democratic Problem-Solving: Dialogues in Social Epistemology), ranged over a number of issues. These included: the way prevailing traditions or paradigms heavily mediate the reception of ideas; whether public intellectuals were needed to improve public dialogue; the neoliberal turn in higher education; and the way neoconservatism is used to construct public imaginaries that present certain groups as ‘enemies within’.

Throughout these discussions Ioana and I argued against elitist and hierarchical positions that sought to delimit what was discussed, or who had the right to impose the terms of reference for discussion, based on an appeal to some form of epistemic-institutional source of knowledge. Horizontalist dialogue between different groups tackling problems caused by neoliberalism and prejudice was advocated in place of vertical instruction, where an expert or political elite set the terms of reference or monologically dispensed ideas for others, presumed to be passive and ignorant, to accept. Our main interlocutor, Raphael Sassower, put more emphasis on appeals to epistemic-institutional sources of authority, arguing, for instance, that public intellectuals were of use in shaping how lay agents engaged with the state.

One issue implicitly raised by this was that of whether a meritocracy, if realised, would make liberal capitalism legitimate, by removing the prejudice and structural disadvantage that many groups face. I argue that attempts to use this concept to legitimise liberal capitalism end up reifying all agents, no matter what their place in the status-class hierarchy. This reification undermines the development of a more dialogic democracy where people seek to work with others to gain control over the state and elites. In place of both well-meaning and more cynical—neoliberal—appeals to meritocracy rewarding educational-intellectual performance with a well-paid job, it is argued that the focus needs to be on critical pedagogy which can develop a more dialogic education and democracy. Such an approach would avoid reification and the legitimisation of existing hierarchies by rejecting their claims to epistemic and ethical authority.

Education, Economics and Punishment

Protestors responding to Edward Snowden’s 2013 revelations about the extent of surveillance carried out by the NSA (National Security Agency) held placards saying that Orwell’s ‘1984’ was a warning and not an instruction manual. Decades earlier the socialist, social reformer and Labour MP Michael Young witnessed a change in language akin to seeing the phrase ‘double plus good’ come into popular use to appraise government policies.

Young supported many successful educational reforms. These included the reduction in ‘grammar schools’, which are state schools that select pupils, and the removal in most counties of the ‘11-plus’, which was the intelligence test used to select a small number of pupils for grammar schools. While children at grammar schools studied A levels and were expected to go to university, the rest were expected to leave school at 16 for unskilled jobs or, if they were lucky, apprenticeships. For critics of the 11-plus and grammar schools, the test’s objectivity was at best moot and its consequence was to reinforce not just an economic hierarchy but an affective-symbolic status hierarchy too. The majority who did not go to grammar school and university were constructed as ‘failures’ who deserved to be in subordinate positions to those constructed as ‘naturally’ superior.

Given the very small number of working class pupils who passed the 11-plus, the consequence was to legitimise the existing class hierarchy by presenting the middle class and upper class as naturally superior. Interestingly, following the drive to create a mass higher education system in the UK, it has become evident that pupils from 'public schools' (that is, fee-paying schools) often fare worse at university than working class pupils with the same or similar A level grades, because the latter got the grades with fewer resources spent on them. Such working class pupils would have attended 'comprehensive' (non-selective) state schools (Preston 2014).

Young also helped establish the Open University (OU), which allowed mature students to study for a degree. Many OU students who successfully graduated had failed their 11-plus, and the OU attracted academics regarded as famous intellectuals, such as Stuart Hall, to design courses and deliver some of the lectures, which were broadcast on BBC2 (a state-owned TV channel dedicated to 'high culture' and educational programming). Unfortunately, though, recent studies have confirmed that class bias exists in job selection.

Graduate job applicants are judged in terms of their accent and mannerisms, with 'posh' behaviours being taken as a proxy for intelligence, or at least as a signal that the applicant will be easier to work with than someone lacking the same 'cultural capital' (Weaver 2015). Furthermore, rich parents are able to fund their student-offspring's unpaid internships in corporations and government in expensive cities, creating networking opportunities and a way to build experience for the CV, with this 'social capital' being unattainable for students who do not come from rich backgrounds (Britton, Dearden, Shephard and Vignoles 2016). People may respond to this by calling for an increased 'meritocracy', but Young, who coined the term 'meritocracy', was opposed to the idea of legitimising a new hierarchy.

As a warning about a society that wished to eschew egalitarianism, he wrote ‘The Rise of the Meritocracy: 1870-2033’ (1994 [1958]). This was a satirical vision of a dystopian future where class hierarchy based on a small number of extremely rich individuals having power because of inherited wealth and privilege had been replaced by a hierarchy based on ‘merit’ defined as intellectual ability plus effort. Young coined the term ‘meritocracy’ to define the latter type of society. While this is now considered an honorific term, Young used his neologism in a pejorative way. Obviously Young saw the existing class hierarchy based on privilege as illegitimate, but for him replacing it with a meritocracy was unacceptable because such a society would end up with a rigid and absolute hierarchy between those seen as the deserving powerful and rich and those seen as the undeserving mass with no power and money. A meritocracy would be ruthless and inhuman for segregating people and defining many as biologically inadequate and of lower worth in every sense than others.

In his book the narrator describes how, with no intelligent leaders, the lower classes will remain at worst sullen or rebellious in a directionless way, which the police 'with their new weapons' will be able to repress efficiently. Those not in the elite are spoken of in a dehumanised way as chattel, which reflects the way the old class-privileged elite saw the working and middle classes, having been socialised at private school and Oxbridge into the view that the upper class were entitled to rule. At the end of the book the publisher inserts a footnote to say the narrator has been killed at Peterloo. Young then saw the defining concept of his dystopian vision come to be used by all mainstream politicians and commentators to assess policies and the normative aspirations that were meant to inform them. He was particularly incensed by the way New Labour under Tony Blair endorsed the principle of meritocracy.

In 2001 Young wrote an article for the Guardian entitled ‘Down with Meritocracy’ where he lambasted those who had been selected through meritocratic education, as he saw it, arguing that they were ‘insufferably smug’ and so self-assured that ‘there is almost no block on the rewards they arrogate to themselves’ (Young 2001). He hoped for New Labour to use progressive taxation to tackle the greed and power of the new meritocratic elite but realised that would mark a big change away from New Labour’s pro-capitalist values.

Against Young, I hold that while progressive policies narrowed the income gap in the post-war years, class privilege and widening income inequality have defined UK capitalism since the rise of neoliberalism in the late 1970s. In graduate recruitment the top jobs are allocated not on grades—or 'merit'—alone but on class background and internship experience that can only be attained from a rich background, and the amount of wealth accrued by those in the top 1% is increasing, while others have a shrinking share of national wealth (Harvey 2005). Indeed, middle class jobs are now becoming precarious, with many people in both the middle and working classes being forced to become self-employed and pay for their own training, go without sick pay and holiday pay, and be entirely responsible for their pension, etc. This is presented as liberating employees but is a sign of the current weakness of deunionised labour to resist the imposition of insecurity following a recession (Friedman 2014).

After Young’s death, the university fee was tripled to £9000 (in 2012) and the government’s accountants estimate that around half of these loans will not be repaid in full (McGettigan 2013). The changes to higher education did not end there and the current Conservative government hoped to ‘liberalise’ the ‘market’ in higher education, in England, by encouraging the extensive start-up of for-profit providers, through deregulation, despite the problems this created in the US, which the Harkin Report catalogued. Resistance from the House of Lords and a desire to push the legislation through Parliament before it is prorogued for the hastily called General Election in May—widely seen as a vote on the Conservative vision of Brexit—led to compromise, with new providers still needing to be validated by existing providers.

The Conservatives also set up a new audit regime to measure teaching 'excellence', called the 'Teaching Excellence Framework' (TEF). This will measure teaching quality in part by using employment data and data from the National Student Survey (NSS), completed by all third-year undergraduates, despite the NSS specifically not being designed for comparative use, and despite NSS data not furnishing meaningful deviations from the mean (Cheng and Marsh 2010; HEFCE 2001). A high TEF score would then be used to permit universities to raise the tuition fee in line with inflation (for discussion of these proposed changes see: Cruickshank 2016, 2017; Holmwood 2015a, 2015b).

The Lords argued that the fee increases had to be decoupled from TEF scoring. In a compromise, the fee can rise with inflation every year, for institutions who will take part in the TEF, with no link to a TEF score, until 2020, at which point full-inflation rises will be connected to TEF scoring. One possible consequence of linking scores to fee increases is that grade inflation will continue and that recruitment will be biased towards students from more privileged backgrounds, in Russell Group universities and middle ranking universities, with such students being seen as more likely to be employment ‘successes’, thanks to class privilege.

Such moves mark an intensified attempt definitively to redefine students as customers. For the Conservatives, for those in Labour who supported the Browne Review that led to the tripling of the fee, and for the Liberal Democrats who were in coalition with the Conservatives, education, and especially higher education, is to be defined in terms of customers making 'investments' in their 'human capital' to gain market advantage over other students competing for jobs. Education was not to be seen as good in itself and good for fostering the development of a critical and informed public.

For many politicians, students were not to see education as a ‘public good’ (with a well-informed critical public benefitting democracy) but a ‘positional good’ in market competition (Holmwood 2011). Brown (2015) argues that under neoliberalism a market rationality becomes ubiquitous with domains outside market competition being defined as analogous to competitive market relations. Here though education is redefined as being an actual commodity for instrumental use in competitive market relations between customers of human capital seeking advantage over each other. All of this is quite overt in the government’s Green Paper and White Paper on changing higher education. In these documents, it is made clear that customers are expected—and ‘enabled’, by the changes proposed—to make the correct investment in their human capital.

Customers will have TEF data and government controlled price signals to go on when it comes to judging the usefulness of a human capital investment and they will be further enabled by having a greater range of providers to choose from, with for-profits offering lower priced vocational training degrees, assumed to be more attractive to potential customers in ‘hard to reach’ disadvantaged communities. The government documents also make it clear that the customer is to be of use to the economy and to not be a burden by being underemployed and failing to pay back all their student loan (for discussions of these issues see: Collini 2012, 2017; Cruickshank 2016; the debates between Cruickshank and Chis, and Sassower, in Cruickshank and Sassower 2017; Holmwood 2011; Holmwood, Hickey, Cohen and Wallis 2016).

Obviously, there is a tension within such arguments. On the one hand, the market is seen as a way to realise a meritocracy, with customers investing in the right human capital to succeed in a zero-sum competition with fellow customers. On the other hand, the market is not so much a means to realise meritocracy for the benefit of competitive individuals as an end in itself that individuals need to support with correct investment choices. The consequence of this is that if individuals are unemployed or underemployed, it is due to a personal failure to make the right investment choice. Moreover, if the individual is unemployed or underemployed it is not just deemed a matter of personal failure but a matter of their supposed fecklessness harming all by undermining economic productivity. Failure to make the right human capital investment is deemed a moral failure by the customer, who eschewed the information provided by the audit regime to pursue a whim, with this costing the economy as a whole.

As part of this narrative, the Conservatives clearly state that the economy needs more 'STEM' (science, technology, engineering and maths) graduates, and thus fewer humanities or social science graduates. To be an unemployed or underemployed sociology or philosophy graduate is thus, on the Conservatives' view, to be a feckless consumer. Despite the emphasis on objectivity in STEM subjects, it is ironic to note that the Conservatives' case about a lack of STEM graduates undermining economic performance rests on a problematic use of a tiny literature and ignores the fact that the subject with the highest unemployment rate is computer science (Cruickshank 2016). It is also worth noting that many MPs and leading figures in journalism and broadcasting studied PPE (philosophy, politics and economics) at Oxford, having attended expensive elite public schools.

An increasingly punitive approach, which from an economic point of view is dysfunctional, is now being pursued against individuals deemed to have failed in their moral duty to serve the economy. Contrary to the liberal fear of dogmatism stemming from normative commitments to ends, the Conservatives (and many in Labour too) hold that the end justifies the means, and so the end of protecting the economy—which is an end in itself—is taken to justify means that undermine the economy. If the Party want 2 + 2 = 5, it will obtain.

William Davies, in a recent article in the New Left Review, explored how the latest phase of neoliberalism engages in increasingly severe punishments for being unemployed. For Davies (2016), what he terms 'neoliberalism 3.0' (following earlier phases of establishing and then normalising neoliberalism) is defined by its vengeance against those deemed to have failed. Policies such as 'sanctioning' welfare claimants (that is, removing benefit payments for a period of weeks or months) for trivial problems, such as arriving 5 minutes late to an interview with a welfare bureaucrat, even if the lateness was not their fault, do nothing to increase economic productivity but are relentlessly pursued. Such policies prevent people from entering the labour market, because the removal of benefits creates severe stress and requires time to be spent accessing foodbanks and appeals processes, in place of job hunting and being well enough to attend job interviews. Nonetheless, sanctions are continually imposed as extremely punitive measures that make life far worse for those already experiencing hardship. As Davies puts it:

In contrast to the offence against socialism [in the 1980s], the ‘enemies’ targeted now are largely disempowered and internal to the neoliberal system itself. In some instances, such as those crippled by poverty, debt and collapsing social-safety nets, they have already been largely destroyed as an autonomous political force. Yet somehow this increases the urge to punish them further (2016, 132).

Conservative rhetoric sought to demonise those receiving benefits, defining them as the 'shirkers' who get housing and money given to them by the welfare state as a reward for fecklessness, which punished the 'hard working strivers' who 'got up early to see the curtains still closed in the house of the shirker claimants' whom they supported with their taxes. Working people were encouraged by the Conservatives to feel nothing but resentment and hate towards the unemployed.

Let’s explore two tensions in contemporary neoliberalism. First, there is the tension between technocracy and affect. On the one hand, neoliberals seek to reduce normative political questions about reforms to ‘value-neutral’ / technocratic questions about regulating objective market forces. Critics are quick to point out that neoliberalism is itself a normative position, with a value driven commitment to corporate capitalism being facilitated by state policies and spending, contrary to the anti-interventionist / free market rhetoric (see for instance: Davies 2014; Van Horn and Mirowski 2009). One example of this is the rise of private prisons in the US. Here in the UK, the state has been tendering out NHS services to corporations and the Conservatives hoped to reconstruct the market in higher education to facilitate for-profits. All of which means that neoliberalism is another form of interventionism (Cruickshank 2016). Furthermore, any notion of market forces ever being objective sui generis forces is erroneous given that they always already presume a legal and political framework, and certain sets of social expectations about contractual relations and the importance of work to define selfhood in modernity, etc. On the other hand, the claim about the need for politics to be reduced to the technocratic administration of objective market forces sits alongside the state constructing imaginaries that are meant to generate emotional and even visceral appeal.

Individuals are encouraged not only to resent and hate those classed as moral failures who failed to serve the economy, but to recognise their moral responsibility to serve the economy and be happy. Individuals need to be happy so as to be ever more efficient at work. Happiness is meant to increase despite the increase in job insecurity with the rise in temporary contracts and the use of self-employed contractors replacing salaried staff (for discussion see for instance: the debates in Cruickshank and Sassower 2017; Davies 2014, 2015, 2016). An affective hierarchy is sought whereby ‘winners’ for the moment despise ‘losers’ and feel happy to fulfil their moral duty as winners to serve the economy, while also feeling ever more insecurity which cannot be allowed to turn into anxiety and depression, for that may result in a winner becoming a despised loser. People are to be broken up into discrete bits, with insecurity boxed off from happiness.

Second, the political statements and policies from the Conservatives are contradictory, with the economy being a meritocratic means to serve individuals, an end in itself, and an end that is to be protected by attacking ‘failures’ in a way that undermines the economy. From a technocratic point of view, the punitive policies are problematic and contradict the notion of objectively managing ‘market forces’. However, neoliberalism is not just normative, rather than value-neutral, but affective too. This means that markets are not expressions of ‘human nature’ but are engineered with the view that they ought to serve corporate interests and that people need to be affectively engineered to fit such markets. Such affective engineering means gearing up their emotions to make them want to be happy-efficient means for corporate profit making (as more productive employees), and making them reduce worth to financial worth, with losers seen as less-human / non-human / ‘worthless’ objects of hate.

Orwell was right to hold that controlling language helps control thought. More than this though, demonising language and punitive policies can combine to control thought. Control here can be manufactured by not only seeking to preclude criticism of the state’s treatment of people, by setting the terms of reference in an argot of morally correct winners and people who choose moral-economic failure, but also by removing any affective motivation to see those demonised as people in need of ethical-political defence from punitive policies. Preventing some people from entering the labour market may undermine the espoused focus on pure market efficiency but it will not damage corporate profit making, which always has a ready supply of labour, and does allow for more effective corporate plutocracy through affective divide and rule.

The normative end of serving the corporate economy is served by creating affective hierarchies to preclude unity and make people fearfully seek happiness. One way of thinking about this is to see it in terms of a cost-benefit analysis, where the small cost of undermining the employment potential of those seeking work is outweighed by the benefit of undermining protest by presenting the losers in the game rigged for corporate victory as objects of hate beyond dialogue and recognition as fellow democratic subjects.

People would seem to have to live in a state of severe double-think, embracing the moral injunction to be happy to serve the economy whilst hating those moral and economic failures who failed to serve the economy, in a condition of increasing precarity in the labour market. Such a condition has to entail quite a high degree of cognitive dissonance. All of this is rendered feasible by a process of reification. People defined by politicians and the right-wing press as voiceless moral failures who failed to serve the economy become demonised objects, with their existence as subjects that have complex histories in difficult times being occluded, while government policies reduce welfare and serve corporate profit-making. The process of reification does not stop there.

Other workers come to be perceived as threatening objects, in a ceaseless competition. And, thanks to social media, now complemented by the drive to require happiness to be efficient at work, other employees / colleagues and even friends are seen as threatening objects reduced to their expressions of happiness: lives are reduced to discrete representations of happiness via uploaded photos on ‘Facebook’ and their ‘likes’, and the presentation of the happy-efficient self, using technology to self-quantify exercise to maximise happiness and efficiency, etc. Ultimately those deemed to be winners become reified too for they are not ends in themselves but defined as of worth solely as a means for the economy to prosper.

As cogs in a machine they have value, and just as a broken cog is worthless junk because it has no value in itself, so too people who cease to be deemed of use to the economy are deemed worthless. This reification can enable the fragmented self to continue in a less than secure environment and to accept Conservative policy as an affective whole even if it has practical contradictions, for the reified self would take the statements and policies of the Conservatives at face value, rather than seek out contradictions. The only way to avoid being demonised is to be validated as a happy and efficient employee, defining oneself purely as a means, and not questioning the source of validation. The state becomes an ethical authority, rolling back the self and making the private self contingent on public politics and the corporate interests behind this. In all of this the economy too becomes reified, with the human relations and exploitation involved being occluded, as the economy becomes an ethical object, which the state serves, gaining its ethical authority from serving this object of veneration.

While liberals reject the charge from Marxists and anarchists that the state serves the economy, neoliberalism, especially in its punitive 3.0 form, does come to present the state as deriving its legitimacy from ethically serving the economy, although in this form the actual economic relations between corporations and politics are obscured. Accepting the ethical authority of the state means the reified winner can self-perceive not as a subject with affective bonds and socio-economic similarities to other subjects, but as an object cut off from other threatening objects or objects to despise, with happiness and worth gained from its conformity to the demands of the economy.

Naturalising Hierarchy

Recently (10th April 2017) BBC Radio 4 broadcast a documentary called ‘The Rise and the Fall of the Meritocracy’ which sought to assess the contemporary relevance of Michael Young’s book. The programme was written and hosted by Toby Young—Michael Young’s son—who writes for the ‘Spectator’ (which he also co-edits), the ‘Telegraph’ and the ‘Daily Mail’, all of which are right-wing publications. Before exploring Toby Young’s argument, it is useful to situate his approach to meritocracy by sketching out how liberalism has sought to justify the existence of chronic poverty alongside capitalism creating enormous wealth for a few.

For liberals, liberalism is legitimate because it affords equality of opportunity. The existence of chronic poverty and unemployment throughout the history of liberal capitalism therefore raises a difficult issue for liberals. For if prejudice precludes people getting jobs, or if the economy systematically fails to produce sufficient job opportunities, then the legitimacy of liberalism is heavily compromised or negated. Nineteenth-century liberals dealt with this by holding that there was a biologically defective underclass given to sloth, crime, addiction and sexual irresponsibility, and unable to face the discipline of work. While philanthropy was to be extended to the 'respectable working class' who would work hard, the 'unrespectable working class' (the underclass) were to be denied charity and even subjected to sterilisation (indeed, sterilisation policies continued well into the twentieth century). Why a legitimate system for distributing wealth meant that the 'respectable working class' needed charity in addition to work was an issue left unaddressed.

For later neoliberal politicians and commentators, from the 1970s and 1980s onwards, the answer to the question of why chronic poverty existed was that a deviant underclass subculture had been created by the welfare state, which socialised children into welfare dependency. Here a supposed lack of discipline in the home, caused by the lack of a working father and by children being raised by a mother on welfare, was taken to lead to educational and then employment failure, the pursuit of immediate gratification with drugs, alcohol and sex, and crime to pay for the drugs and alcohol. Concepts of the underclass are used to reinforce patriarchy as well as the class structure. In the UK, the Conservatives argued that the state, and especially the welfare state, had to be 'rolled back' to undermine the development of a 'something for nothing culture' whereby people wanted welfare in place of working.

Charles Murray is the main ideologue for the neoliberal policy of removing the welfare state. Murray argued in Losing Ground (1984) that people in the US used to work their way up from very low paid jobs to jobs with better incomes, until well-meaning policies made it more economically worthwhile to claim benefits in place of work. Murray used a thought-experiment to discuss this, and his claims about the real value of welfare increasing are disputed (Wilson 1990). For Murray, those choosing welfare over work were just as rational as other individuals, because they were simply responding to external economic stimuli. Although Murray does not explicitly discuss rational choice theory (RCT), his position is a form of RCT and, as critics of RCT hold, it is determinist, which would remove any normative component from the theory, contrary to his intentions. People are seen, in effect, as automata that react to positive and negative reinforcement stimuli, with the former being those that enable the most efficient way to realise 'utility', or material self-interest.

Murray’s characterisation of the initial choice of welfare over work as being just as rational as other individuals’ behaviour was, though, disingenuous, because Murray wanted to put the blame on reformers for not understanding how humans were motivated, so as then to argue that reformers had created an underclass that was welfare-dependent, criminal and intrinsically less able than other people. Do-gooder reformers failed to understand human nature, for Murray. One group he despised was called normal so as to strengthen his critique of another group he despised. In his highly controversial book ‘The Bell Curve: Intelligence and Class Structure in American Life’ (1994), co-authored with Herrnstein, the argument was made that innate intelligence, rather than parental social class or environmental factors, was the best predictor of economic success or failure. Murray and Herrnstein also argued that ‘racial’ differences were to be accounted for in terms of differences in intelligence, while trying to avoid controversy by adding the caveat that environmental factors may play a role as well. Murray also came to the UK and, in ‘The Emerging British Underclass’ (1990), wrote of a deviant sub-culture in the UK socialising children into welfare dependency and crime, with no reference, even tacit, to RCT.

Unsurprisingly, sociologists have rejected the concept of the underclass in general and Murray’s arguments in particular. For most sociologists, the concept of an underclass is an ideological and not a social scientific concept, because there is no empirical basis for holding that the cause of poverty is a deviant sub-culture created by welfare dependency. The concept homogenises people into a category that has been developed specifically to demonise them. Against the view that chronic unemployment and poverty are a result of welfare dependency, it is often argued that the deindustrialisation that started in the 1980s, which created major structural unemployment, is the main cause of contemporary chronic poverty (see MacDonald and Marsh 2005 for a good discussion of these issues).

Toby Young began his programme by presenting the votes for Trump and Brexit as a populist revolt against elites. Michael Sandel was then interviewed, and he spoke of ‘meritocratic hubris’, whereby the rich and powerful smugly present themselves as deserving winners and the rest, implicitly at least, as losers. For Young, the recent populist revolt was to be seen as similar to that envisioned by his father, with the masses reacting to the meritocratic hubris of the economic elite. Presenting the vote for Brexit in this way is problematic, though, because seeing these events as a populist reaction to elites by those resenting their implicit or explicit classification as losers misses the point that many who voted for Brexit were not struggling financially but were older, more affluent voters in the south of England: traditional ‘Tory voters’ in Conservative safe seats (Hennig and Dorling 2016).

Such voters were responding to hierarchies, but they were seeking not to challenge the existing hierarchy but to reinforce it, by ‘taking back control’ of UK borders and keeping immigrants and migrants out. Prior to the referendum on Brexit, the Conservatives had done much to inscribe a neoconservative imaginary that presented Muslims as an internal threat and immigrants, migrants and refugees as an external threat. The Conservatives had been aided in this by the right-wing tabloids, especially the Daily Mail, which Toby Young writes for. It is odd that someone discussing rule by a cognitive elite should define a populist reaction against elites in terms of wealthier voters, influenced by a paper he writes for, supporting a policy that many in the Conservative Party championed.

Again, we can speak of reification, for while those politicians supporting the vote for Brexit, and the tabloid press, presented people from outside the UK as threatening objects, the ‘left-liberal’ media reacted by presenting immigrants and migrants as objects of use for the economy, or as refugee objects of pity. The terms of reference on all sides of the debate set up a dualism between a national subject within the national borders and external objects beyond those borders, with the main debates then being whether those objects were a threat or not, or of use or not. And of course, as a narrative device in the programme, those deemed losers in a competitive, meritocratic society were reduced to being threatening objects. Economic losers became demonised as a potentially threatening enemy within, following Conservative rhetoric in the 1980s about the victims of deindustrialisation in the north of England being enemies within (for discussion of this see Bruff 2014; Cruickshank and Sassower 2017; Hall 1983). The question posed by the programme became: how can we justify the unequal outcomes of a meritocracy and deal with the threat of violence from the losers, defined as a homogeneous mass of people lacking ability and intelligence?

All of this was an unacknowledged return to Thatcher’s ‘authoritarian populism’, which demonised the northern industrial unionised working class, who needed to be defeated in order to move to a post-industrial, deunionised, low-pay, service-sector neoliberalism (Hall 1983). Against them, Thatcher sought to mobilise support from those, including those in the southern working class, who identified as ‘middle class’, aided in this by the selling off of social housing to tenants and the selling off of the nationalised industries, with people encouraged to buy shares. Divide and rule, with the winners despising the losers, was the name of the game as Thatcher began the process of creating a welfare state for the rich, through tax policy and the undermining of benefits.

After using Sandel to pose the problem of populist revolt, Young then interviewed Peter Saunders (a controversial right-wing sociologist who had argued that private property ownership was ‘natural’), Charles Murray, Rebecca Allen (an economics academic currently running an education think-tank called ‘Education Datalab’), Oliver James (a psychologist) and Robert Plomin (a geneticist and expert on intelligence). With the token exception of James, all of these people supported the idea that socio-economic success was about half due to nature, meaning intelligence inherited from intelligent and rich parents, and half due to nurture, with the latter being connected to the former, because successful parents create the environment most conducive to their children’s success. After interviewing Allen, Saunders and Murray, none of whom are scientists, Young interviewed James, who criticised the lack of scientific evidence to show that intelligence is inherited, before returning to Allen and then moving on to Plomin, in an attempt to use scientific authority to epistemically underwrite and guarantee the claims of Saunders, Murray and Allen.

The case was made that a meritocracy had allowed for class mobility, but that most of the intelligent people were now in the higher positions in society, so the more recent lack of social mobility was due not to failures in equality of opportunity but to a lack of intrinsic ability in those remaining in the working class and lower middle class. Young then considered whether technology could be used by parents to enhance their offspring’s intelligence to make them more successful than their parents, with ordinary people (the threatening objects) demanding that the state provide this for them for free. Whether the future would be stable or not was left open, thus inviting the conclusion, from those who supported his ideas, that the state needed to be a strong law-and-order state, to tackle threats from a potentially genetically inferior ‘enemy within’.

While Toby Young sees himself as a laissez-faire liberal, his position on meritocracy and class mobility is really post-liberal, in the sense that the core liberal principle of equality of opportunity becomes redundant, given that the hierarchy of wealth ends up reflecting a hierarchy in nature (the most innately intelligent at the top) and a hierarchy in nurture based on nature (with the cognitive elite creating the best environment for the child to develop).

One question raised by this is: do the more intelligent deserve more money? For Sayer (2005), the answer is no, because they already benefit by realising their ability in meaningful work. For Allen, the answer was yes. She was clear, contrary to Conservative rhetoric about ‘strivers’, that successful people had not ‘worked harder’ than others (a claim hard to support anyway, given the pressures on many working people), but held that their natural ability entitled them to rewards. A meritocracy could thus exist without meaningful class mobility, where those at the ‘top’ deserved economic fortunes and power not for being ‘strivers’ but for being naturally superior, with the question then becoming how to deal with the losers, defined by their lack. Yet all of this rested on a non sequitur, which is surprising given Allen’s claim to be cognitively superior to the majority of people. Even if it were the case that some were significantly more intelligent than others, and that this was passed on to their children, there would be no logical warrant to conclude that such people morally and legally deserve large houses, private education, expensive cars and pensions that make retirement more comfortable than most people’s, while others starve after being sanctioned because a bus broke down. To say ‘I have above average intelligence’ does not logically entail the conclusion ‘therefore I deserve more material rewards’.

One could try to argue that the more intelligent need a motivation to apply their intelligence, but this trades on a theory of human nature as acquisitive, which is speculative and contingent on the rise of capitalism. It also overlooks the problem that, given the choice between meaningful work and meaningless routine work for 40-50 years on the same pay, many, I suspect, would opt for the former. We also need to question the use of science here. Popper (1959, 1963) argued that science is fallible and that induction entails logical problems, so seeking to establish a claim to scientific certainty by talking of current scientific studies supporting a view (a position which James contests) is itself erroneous.

Popper rejected all appeals to epistemic authority, holding that there were no institutional sources of authority and no inner sources of authority, such as the ‘authority of the senses’ for empiricism. Theories could be corroborated but never justified or verified, and all theories needed to be open to critical dialogue, so that they could be changed. If a self-defining elite held as certain the theory on which they based their claim to superiority, and defined others as cognitively inferior and thus not worth entering into dialogue with, then science, politics and ethics would all, for Popper, go into major decline. We would have a post-liberal closed society.

A naturalised hierarchy would justify a plutocratic meritocracy with no class mobility and define the majority as useless and threatening objects, in place of an affective hierarchy, where a plutocracy operates with people being told to define themselves as happy winners unless they are unemployed. The former could well prompt discord, but it is probable, I imagine, that another affective hierarchy would then be mobilised, one focused on nationalism rather than class. People could be told to accept their natural superiors and to hate the threatening objects from different countries. One of the tabloids Toby Young writes for put a lot of effort into demonising migrants and continually demanding a xenophobic nationalism from its readers.

Critical Pedagogy contra Meritocracy

Calls for more of a meritocracy to make society ‘fairer’ are popular in politics and the press, but misguided for two reasons. First, the concept of meritocracy is used to legitimise liberalism, when the reality is one of the rich getting richer, with the notion of equality of opportunity being out of kilter with the reality of capitalism (Harvey 2005). Second, appeals to meritocracy are used either to support an affective hierarchy or to support a naturalised hierarchy. With the former, people are reified so as to see themselves as happy successful objects. On the one hand, their worth is to be derived, by them and others, from their ability to serve the economy as an end in itself. On the other hand, the economy is presented as a means to serve individuals, with meritocratic competition rewarding the most able. Those with merit are to see themselves as winners and to despise losers both as losers and as people letting the economy down. The tension is obscured by the state presenting itself as drawing its ethical authority from serving the national economy: individual reward for winners, with national economic gain as a by-product, and collective punishment for losers, with stigmatising language and punitive and sadistic policies. Winners, though, lose their individuality to become objects of use to the national economy, which requires them to obey the injunction to be happy so as to be of greater use to it, and losers lose their individuality to become despised objects of hate, to be punished irrespective of their individual complex histories. With the turn to a naturalised hierarchy, nationalism would be intensified, with the likely construction of a xenophobic nationalism.

This is not to support an early Frankfurt School pessimism, as espoused by Adorno and Horkheimer, for the process of reification is not totalising. People are protesting against hardship and against unfairness to others, in the UK and, against Trump, in the US, for instance. What I will argue here is that critical pedagogy offers a way out of the meritocratic morass. Freire’s (1993) work on critical pedagogy applies to all forms of education, from schools, colleges and universities to political education. Freire saw education as intrinsically political, not just in terms of its content but in terms of its structure too. He famously rejected what he termed the ‘banking approach’ to education, in which an authority figure deposits discrete pieces of information in passive learners. The consequence of such an approach is to reinscribe existing hierarchies based on claims to authority. Freire argues that:

the teacher confuses the authority of knowledge with his or her own professional authority, which she and he sets in opposition to the freedom of the students [… and] the teacher is the Subject of the learning process while the students are the mere objects. […] The capability of banking education to minimize or annul the students’ creative power and to stimulate their credulity serves the interests of the oppressors, who care neither to have the world revealed nor to see it transformed (1993, 54).

The banking approach thus entails reification with learners being defined as—and self-defining as—passive objects. These objects are only of value when accepting and serving subjects, who have agency and authority. This didactic hierarchy then serves to legitimise the existing social and political hierarchies, because it trains people to define themselves as passive objects whose only worth is defined in relation to serving authority. As Freire puts it:

More and more the oppressors are using science and technology as unquestionably powerful instruments for their purpose: the maintenance of the oppressive order through manipulation and repression. The oppressed, as objects, as ‘things’, have no purposes except those their oppressors prescribe for them (1993, 42).

The outcome of the banking approach to education is that people’s minds become ‘colonised’ by the oppressors.

Here we can say that attempts to realise meritocracy, where some people from the working class are allowed to move into middle-class positions, will entail the banking conception, with private and state education being based on pupils accepting the authority of the teacher because of their institutional position. The rise of audit culture as a government-controlled proxy for market signals under neoliberal interventionism, where the state constructs and controls the market (Cruickshank 2016; Mirowski 2011; Van Horn and Mirowski 2009), exacerbated this problem. For it means that teachers have to teach to the test, ensuring that pupils remember and regurgitate factoids that are then forgotten. Education does not encourage a love of learning and a way to develop oneself, but turns pupils into industrial objects processing words to get a number on a piece of paper. Seeking a meritocracy in such circumstances would just entail colonised objects moving on to assume positions in the middle classes, where they remain colonised and act to help colonise those below them, issuing orders to people perceived as objects.

Citing Fromm, Freire argues that the oppressor consciousness can only understand itself through possession, and that it needs to possess other people as objects so as not to ‘lose contact with the world’ (1993, 40). In this way, those colonised and rewarded as conforming objects through ‘meritocracy’, which selects a small number of working-class children for middle-class jobs, can see others as objects that confirm their status as superior, without realising that the whole process dehumanises them. They will feel rewarded, unaware of their own reification, because they feel affirmed through, if not the possession of others, then at least the control of others as objects.

Against this, Freire argues that people cannot liberate themselves on their own, or be liberated by a new leader seeking authority over them, but can be liberated through working with others, gaining subjecthood through a sense of collective agency. Such agency would have to be dialogic, with people learning together and no-one acting as a new coloniser. The banking approach has to be avoided by radicals, for its use makes them oppressors.

While schools are characterised by the banking approach to education, there is still some scope, despite neoliberal audit culture, for critical dialogic engagement in universities, especially when students and academics work with political groups outside the university; and there is, of course, scope for dialogic engagement between groups of lay agents experiencing socio-economic problems. With this approach to learning, all people are treated as subjects and not objects, so the problem of reification is removed. The structure of education, and of pseudo-dialogue, which reduces people to objects by making them passive things acted upon by an elite claiming authority, is rejected for its intrinsically oppressive nature. A horizontal approach to learning, where dialogic subjects learn from and with other dialogic subjects, can begin a move to challenge neoliberalism, the power of corporations, and the state serving them.

Defining education and employment in terms of meritocratic selection serves to hide the way liberal capitalism entails the rich getting richer and having control over institutional politics. Just as importantly, it undermines radically critical dialogue, not only by ‘blaming the victim’ as regards poverty, but by defining all as objects, which can preclude the possibility of recognising others as dialogic subjects. The recent call for a naturalised post-liberal hierarchy suggests that the concentration of resources and opportunities in the hands of a few may make such meritocratic legitimising difficult to sustain, opening up a possible turn to authoritarianism. Appeals to meritocracy, and the reification they entail, need to be replaced by an approach that can foster dialogue between subjects, which requires the rejection of dialogue-stopping appeals to sources of authority that ultimately entail reification.

References

Britton, Jack, Lorraine Dearden, Neil Sheppard, and Anna Vignoles. ‘What and Where You Study Matter for Graduate Earnings—but so does Parental Wealth.’ Institute for Fiscal Studies. 13 April 2016. [Accessed 13/06/ 2016].

Brown, Wendy. Undoing the Demos: Neoliberalism’s Stealth Revolution. Brooklyn (NY.): Zone Books, 2015.

Bruff, Ian. ‘The Rise of Authoritarian Neoliberalism’, Rethinking Marxism 26 no. 1 (2014): 113-129.

Cheng, Jacqueline H. S. and Herbert W. Marsh. ‘National Student Survey: Are Differences between Universities and Courses Reliable and Meaningful?’ Oxford Review of Education 36, no. 6 (2010): 693-712.

Collini, Stefan. What are Universities for? London: Penguin, 2012.

Collini, Stefan. Speaking of Universities. London: Verso, 2017.

Cruickshank, Justin. ‘Anti-Authority: Comparing Popper and Rorty on the Dialogic Development of Beliefs and Practices’. Social Epistemology 29, no. 1 (2015): 73-94.

Cruickshank, Justin. ‘Putting Business at the Heart of Higher Education: On Neoliberal Interventionism and Audit Culture in UK Universities’. Open Library of Humanities (special issue: ‘The Abolition of the University’), edited by L. Dear (Glasgow) and M. Eve (Birkbeck), 2, no. 1 (2016): 1-33. Accessed 21/04/2017.

Cruickshank, Justin and Raphael Sassower. Democratic Problem-Solving: Dialogues in Social Epistemology. London: Rowman and Littlefield International, 2017.

Davies, William. The Limits of Neoliberalism: Authority, Sovereignty and the Logic of Competition. London: Sage, 2014.

Davies, William. The Happiness Industry: How the Government and Big Business Sold Us Well-Being. London: Bloomsbury, 2015.

Davies, William. ‘The New Neoliberalism’, New Left Review 101 series II (2016): 121-134.

Friedman, Gerald. ‘Workers without Employers: Shadow Corporations and the Rise of the Gig Economy.’ Review of Keynesian Economics 2 no. 2 (2014): 171-188.

Freire, Paulo. Pedagogy of the Oppressed. London: Penguin, 1993 [1970].

Hall, Stuart. ‘The Great Moving Right Show’. In The Politics of Thatcherism, edited by Stuart Hall and Martin Jacques, 19-39. London: Lawrence and Wishart, 1983.

Harvey, David. A Brief History of Neoliberalism. Oxford: Oxford University Press, 2005.

Hennig, Benjamin D. and Danny Dorling. ‘The EU Referendum.’ Political Insight 7, no. 2 (2016): 20-21. Accessed 18/09/2016.

Higher Education Funding Council for England (HEFCE). 2001. ‘Report 01/55: Information on Quality and Standards in Teaching and Learning.’ [Accessed 12/12/2015].

Holmwood, John. ‘The Idea of a Public University’, in A Manifesto for the Public University, edited by John Holmwood, 12-26. London: Bloomsbury, 2011.

Holmwood, John. ‘Slouching Toward the Market: The New Green Paper for Higher Education Part 1.’ Campaign for the Public University. 8 Nov. 2015a. [Accessed 01/12/2015].

Holmwood, John. ‘Slouching Toward the Market: The New Green Paper for Higher Education Part 2.’ Campaign for the Public University. 8 Nov. 2015b. [Accessed 01/12/2015].

Holmwood, John, Tom Hickey, Rachel Cohen, and Sean Wallis, eds. ‘The Alternative White Paper for Higher Education. In Defence of Public Higher Education: Knowledge for a Successful Society. A Response to “Success as a Knowledge Economy”, BIS (2016)’. London: Convention for Higher Education, 2016.

MacDonald, Robert and Julie Marsh. Disconnected Youth? Growing up in Britain’s Poor Neighbourhoods. Basingstoke: Palgrave, 2005.

McGettigan, Andrew. The Great University Gamble: Money, Markets and the Future of Higher Education. London: Pluto, 2013.

Mirowski, Philip. Science-Mart: Privatizing American Science. Cambridge and London: Harvard University Press, 2011.

Murray, Charles. Losing Ground: American Social Policy 1950-1980. New York: Basic Books, 1984.

Murray, Charles. The Emerging British Underclass (Studies in Welfare Series no. 2). London: Health and Welfare Unit, Institute of Economic Affairs, 1990.

Murray, Charles, and Richard J. Herrnstein. The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press, 1994.

Popper, Karl R. The Logic of Scientific Discovery. New York: Harper & Row, 1959 [1934].

Popper, Karl R. Conjectures and Refutations: The Growth of Scientific Knowledge. London: Routledge, 1963.

Preston, Barbara. ‘State School Kids do Better at Uni’, The Conversation 16 July 2014. [Accessed 27/04/2017].

Sayer, Andrew. The Moral Significance of Class. Cambridge: Cambridge University Press, 2005.

Van Horn, Robert and Philip Mirowski. ‘The Rise of the Chicago School of Economics and the Birth of Neoliberalism.’ In The Road from Mont Pelerin: The Making of the Neoliberal Thought Collective, edited by Philip Mirowski and Dieter Plehwe, 139-187. London: Harvard University Press, 2009.

Weaver, Matthew. ‘Poshness Tests’ Block Working Class Applicants at Top Companies.’ Guardian 15 June 2015. [Accessed 10/03/2016].

Wilson, William J. The Truly Disadvantaged: The Inner City, the Underclass, and Public Policy. Chicago: Chicago University Press, 1990.

Young, Michael. The Rise of the Meritocracy. Piscataway (NJ.): Transaction Publishers, 1994 [1958].

Young, Michael. ‘Down with Meritocracy’, Guardian 29 June 2001. [Accessed 02/05/17].

Author Information: Steve Fuller, University of Warwick,

Fuller, Steve. “Counterfactuals in the White House:  A Glimpse into Our Post-Truth Times.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 1-3.

The PDF of the article gives specific page numbers. Shortlink:

Image credit: OZinOH, via flickr

May Day 2017 was filled with reporting and debating over a set of comments that US President Trump made while visiting Andrew Jackson’s mansion, the ‘Hermitage’, now a tourist attraction in Nashville, Tennessee. Trump said that had Jackson been deployed, he could have averted the US Civil War. Since Jackson had died about fifteen years before the war started, Trump was clearly making a counterfactual claim. However, it is an interesting claim—not least for its responses, which were fast and furious. They speak to the nature of our times.  Let me start with the academic response and then move to how I think about the matter. A helpful compendium of the responses is here.

Jim Grossman of the American Historical Association spoke for all by claiming that Trump ‘is starting from the wrong premise’. Presumably, Grossman means that the Civil War was inevitable because slavery is so bad that a war over it was inevitable. However well he meant this comment, it feeds into the anti-expert attitude of our post-truth era. Grossman seems to disallow Trump from imagining that preserving the American union was more important than the end of slavery—even though that was exactly how the issue was framed to most Americans 150 years ago. Scholarship is of course mainly about explaining why things happened the way they did. However, there is a temptation to conclude that it necessarily had to happen that way. Today’s post-truth culture attempts to curb this tendency. In any case, once the counterfactual door is open to other possible futures, historical expertise becomes more contestable, perhaps even democratised. The result may be that even when non-experts reach the same conclusion as the experts, it may be for importantly different reasons.

Who was Andrew Jackson?

Andrew Jackson is normally regarded as one of the greatest US presidents, whose face is regularly seen on the twenty-dollar banknote. He was the seventh president and the first one who was truly ‘self-made’ in the sense that he was not well educated, let alone oriented towards Europe in his tastes, as had been his six predecessors. It would not be unfair to say that he was the first President who saw a clear difference between being American and being European. In this respect, his self-understanding was rather like that of the heroes of Latin American independence. He was also given to an impulsive manner of public speech, not so different from the current occupant of the Oval Office.

Jackson volunteered at age thirteen to fight in the War of Independence from Britain, which was the first of many times when he was ready to fight for his emerging nation. Over the past fifty years much attention has been paid to his decimation of native American populations at various points in his career, both military and presidential, as well as his support for slavery. (Howard Zinn was largely responsible, at least at a popular level, for this recent shift in focus.) To make a long and complicated story short, Jackson was rather consistent in acting in ways that served to consolidate American national identity, even if that meant sacrificing the interests of various groups at various times—groups that arguably never recovered from the losses inflicted on them.

Perhaps Jackson’s most lasting positive legacy has been the current two-party—Democratic/Republican—political structure. Each party cuts across class lines and geographical regions. This achievement is now easy to underestimate—as the Democratic Party is now ruing. The US founding fathers were polarized about the direction that the fledgling nation should take, precisely along these divides. The struggles began in Washington’s first administration between his treasury minister Alexander Hamilton and his foreign minister Thomas Jefferson—and they persisted. Both Hamilton and Jefferson oriented themselves to Europe, Hamilton more in terms of what to imitate and Jefferson in terms of what to avoid. Jackson effectively performed a Gestalt switch, in which Europe was no longer the frame of reference for defining American domestic and foreign policy.

Enter Trump

Now enter Donald Trump, who says Jackson could have averted the Civil War, which was by all counts the bloodiest war in US history, with an estimated 750,000 lives lost. Jackson was clearly a unionist but also clearly a slaveholder. So one imagines that Jackson would have preserved the union by allowing slaveholding, perhaps in terms of some version of the ‘states rights’ or ‘popular sovereignty’ doctrine, which gives states discretion over how they deal with economic matters. It’s not unreasonable that Jackson could have pulled that off, especially because the economic arguments for allowing slavery were stronger back then than is now normally remembered.

The Nobel Prize-winning economic historian Robert Fogel explored this point quite thoroughly more than forty years ago in his controversial Time on the Cross. It is not a perfect work, and its academic criticism is quite instructive about how one might better explore a counterfactual world in which slavery would have persisted in the US until it was no longer economically viable. Unfortunately, the politically sensitive nature of the book’s content has discouraged any follow-up. When I first read Fogel, I concluded that over time the price of slaves would come to approximate that of free labour considered over a worker’s lifetime. In other words, a slave economy would evolve into a capitalist economy without violence in the interim. Slaveholders would simply respond to changing market conditions. So the moral question is whether it would have made sense to extend slavery for a few more years before it merged with what the capitalist world took to be an acceptable way of being, namely wage labour. Fogel added ballast to his argument by observing that slaves tended to live longer and healthier lives than freed Blacks.

Moreover, Fogel’s counterfactual was not fanciful. Some version of the states rights doctrine was the dominant sentiment in the US prior to the Civil War. However, there were many different versions of the doctrine, which could not rally around a common spokesperson. This allowed the clear, unitary voice for abolition emanating from the Christian dissenter community in the Northern states to exert enormous force, not least on the sympathetic and ambitious country lawyer Abraham Lincoln, who became their somewhat unlikely champion. Thus, 1860 saw a Republican Party united around Lincoln fend off three opponents in the general election.

None of this is to deny that Lincoln was right in what he did. I would have acted similarly. Moreover, he probably did not anticipate just how bloody the Civil War would turn out to be—and the lasting scars it would leave on the American psyche. But the question on the table is not whether the Civil War was a fair price to pay to end slavery. Rather, the question is whether the Civil War could have been avoided—and, more to the point of Trump’s claim, whether Jackson would have been the man to do it. The answer is perhaps yes. The price would have been that slavery would have been extended for a certain period before it became economically unviable for the slaveholders.

It is worth observing that Fogel’s main target seemed to be Marxists who argued that slavery made no economic sense and that it persisted in the US only because of racist ideology. Fogel’s response was that slaveholders probably were racist, but such a de facto racist economic regime would not have persisted as long as it did had both sides not benefitted from the arrangement. In other words, the success of the anti-slavery campaign was largely about the triumph of aspirational ideas over actual economic conditions. If anything, its success testifies to the level of risk that abolitionists were willing to assume on behalf of American society for the emancipation of slaves. Alexis de Tocqueville was only the most famous of the foreign commentators on the US to notice this at the time. Abolitionists were the proactionaries of their day with regard to risk. And this is how we should honour them now.

Author Information: Steve Fuller, University of Warwick,

Steve Fuller is Auguste Comte Professor of social epistemology at the University of Warwick. His latest book is The Academic Caesar: University Leadership is Hard (Sage).


Note: The following piece appeared under the title of ‘Free speech is not just for academics’ in the 27 April 2017 issue of Times Higher Education and is reprinted here with permission from the publisher.

Image credit: barnyz, via flickr

Is free speech an academic value? We might think that the self-evident answer is yes. Isn’t that why “no platforming” controversial figures usually leaves the campus involved with egg on its face, amid scathing headlines about political correctness gone mad?

However, a completely different argument can be made against universities’ need to defend free speech that bears no taint of political correctness. It is what I call the “Little Academia” argument. It plays on the academic impulse to retreat to a parochial sense of self-interest in the face of external pressures.

The master of this argument for the last 30 years has been Stanley Fish, the American postmodern literary critic. Fish became notorious in the 1980s for arguing that a text means whatever its community of readers thinks it means. This seemed wildly radical, but it quickly became clear – at least to more discerning readers – that Fish’s communities were gated.

This seems to be Fish’s view of the university more generally. In a recent article in the US Chronicle of Higher Education, “Free Speech Is Not an Academic Value”, written in response to the student protests at Middlebury College against the presence of Charles Murray, a political economist who takes race seriously as a variable in assessing public policies, Fish criticised the college’s administrators for thinking of themselves as “free-speech champions”. This, he said, represented a failure to observe the distinction between students’ curricular and extracurricular activities. Regarding the latter, he said, administrators’ correct role was merely as “managers of crowd control”.

In other words, a university is a gated community designed to protect the freedom only of those who wish to pursue discipline-based inquiries: namely, professional academics. Students only benefit when they behave as apprentice professional academics. They are generously permitted to organise extracurricular activities, but the university’s official attitude towards these is neutral, as long as they do not disrupt the core business of the institution.

The basic problem with this picture is that it supposes that academic freedom is a more restricted case of generalised free expression. The undertow of Fish’s argument is that students are potentially freer to express themselves outside of campus.

To be sure, this may be how things look to Fish, who hails from a country that already had a Bill of Rights protecting free speech roughly a century before the concept of academic freedom was imported to unionise academics in the face of aggressive university governing boards. However, when Wilhelm von Humboldt invented the concept of academic freedom in early 19th century Germany, it was in a country that lacked generalised free expression. For him, the university was the crucible in which free expression might be forged as a general right in society. Successive generations engaged in the “freedom to teach” and the “freedom to learn”, the two becoming of equal and reciprocal importance.

On this view, freedom is the ultimate transferable skill embodied by the education process. The ideal received its definitive modern formulation in the sociologist Max Weber’s famous 1917 lecture to new graduate students, “Science as a Vocation”.

What is most striking about it to modern ears is his stress on the need for teachers to make space for learners in their classroom practice. This means resisting the temptation to impose their authority, which may only serve to disarm the student of any choice in what to believe. Teachers can declare and justify their own choice, but must also identify the scope for reasonable divergence.

After all, if academic research is doing its job, even the most seemingly settled fact may well be overturned in the fullness of time. Students need to be provided with some sense of how that might happen as part of their education to be free.

Being open about the pressure points in the orthodoxy is complicated because, in today’s academia, certain heterodoxies can turn into their own micro-orthodoxies through dedicated degree programmes and journals. These have become the lightning rods for debates about political correctness.

Nevertheless, the bottom line is clear. Fish is wrong. Academic freedom is not just for professional academics but for students as well. The honourable tradition of independent student reading groups and speaker programmes already testifies to this. And in some contexts they can count towards satisfying formal degree requirements. Contra Little Academia, the “extra” in extracurricular should be read as intending to enhance a curriculum that academics themselves admit is neither complete nor perfect.

Of course, students may not handle extracurricular events well. But that is not about some non-academic thing called ‘crowd control’. It is simply an expression of the growth pains of students learning to be free.

Author Information: Steve Fuller, University of Warwick,

Steve Fuller holds the Auguste Comte Chair in Social Epistemology at the University of Warwick. He is the author of more than twenty books, the next of which is Post-Truth: Knowledge as a Power Game (Anthem).


Note: This article originally appeared in the EASST Review 36(1) April 2017 and is republished below with the permission of the editors.

Image credit: Hans Luthart, via flickr

STS talks the talk without ever quite walking the walk. Case in point: post-truth, the offspring that the field has always been trying to disown, not least in the latest editorial of Social Studies of Science (Sismondo 2017). Yet STS can be fairly credited with having both routinized in its own research practice and set loose on the general public—if not outright invented—at least four common post-truth tropes:

1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.

2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.

3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.

4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties.

What is perhaps most puzzling from a strictly epistemological standpoint is that STS recoils from these tropes whenever such politically undesirable elements as climate change deniers or creationists appropriate them effectively for their own purposes. Normally, that would be considered ‘independent corroboration’ of the tropes’ validity, as these undesirables demonstrate that one need not be a politically correct STS practitioner to wield the tropes effectively. It is almost as if STS practitioners have forgotten the difference between the contexts of discovery and justification in the philosophy of science. The undesirables are actually helping STS by showing the robustness of its core insights as people who otherwise overlap little with the normative orientation of most STS practitioners turn them to what they regard as good effect (Fuller 2016).

Of course, STSers are free to contest any individual or group that they find politically undesirable—but on political, not methodological grounds. We should not be quick to fault undesirables for ‘misusing’ our insights, let alone apologize for, self-censor or otherwise restrict our own application of these insights, which lay at the heart of Latour’s (2004) notorious mea culpa. On the contrary, we should defer to Oscar Wilde and admit that imitation is the sincerest form of flattery. STS has enabled the undesirables to raise their game, and if STSers are too timid to function as partisans in their own right, they could try to help the desirables raise their game in response.

Take the ongoing debates surrounding the teaching of evolution in the US. The fact that intelligent design theorists are not as easily defeated on scientific grounds as young earth creationists means that when their Darwinist opponents leverage their epistemic authority on the former as if they were the latter, the politics of the situation becomes naked. Unlike previous creationist cases, the judgement in Kitzmiller v. Dover Area School Board (in which I served as an expert witness for the defence) dispensed with the niceties of the philosophy of science and resorted to the brute sociological fact that most evolutionists do not consider intelligent design theory science. That was enough for the Darwinists to win the battle, but will it win them the war? Those who have followed the ‘evolution’ of creationism into intelligent design might conclude that Darwinists act in bad faith by not taking seriously that intelligent design theorists are trying to play by the Darwinists’ rules. Indeed, more than ten years after Kitzmiller, there is little evidence that Americans are any friendlier to Darwin than they were before the trial. And with Trump in the White House…?

Thus, I find it strange that in his editorial on post-truth, Sismondo extols the virtues of someone who seems completely at odds with the STS sensibility, namely, Naomi Oreskes, the Harvard science historian turned scientific establishment publicist. A signature trope of her work is the pronounced asymmetry between the natural emergence of a scientific consensus and the artificial attempts to create scientific controversy (e.g. Oreskes and Conway 2011). It is precisely this ‘no science before its time’ sensibility that STS has been spending the last half-century trying to oppose. Even if Oreskes’ political preferences tick all the right boxes from the standpoint of most STSers, she has methodologically cheated by presuming that the ‘truth’ of some matter of public concern most likely lies with what most scientific experts think at a given time. Indeed, Sismondo’s passive aggressive agonizing comes from his having to reconcile his intuitive agreement with Oreskes and the contrary thrust of most STS research.

This example speaks to the larger issue addressed by post-truth, namely, distrust in expertise, to which STS has undoubtedly contributed by circumscribing the prerogatives of expertise. Sismondo fails to see that even politically mild-mannered STSers like Harry Collins and Sheila Jasanoff do this in their work. Collins is mainly interested in expertise as a form of knowledge that other experts recognize as that form of knowledge, while Jasanoff is clear that the price that experts pay for providing trusted input to policy is that they do not engage in imperial overreach. Neither position approximates the much more authoritative role that Oreskes would like to see scientific expertise play in policy making. From an STS standpoint, those who share Oreskes’ normative orientation to expertise should consider how to improve science’s public relations, including proposals for how scientists might be socially and materially bound to the outcomes of policy decisions taken on the basis of their advice.

When I say that STS has forced both established and less than established scientists to ‘raise their game’, I am alluding to what may turn out to be STS’s most lasting contribution to the general intellectual landscape, namely, to think about science as literally a game—perhaps the biggest game in town. Consider football, where matches typically take place between teams with divergent resources and track records. Of course, the team with the better resources and track record is favoured to win, but sometimes it loses and that lone event can destabilise the team’s confidence, resulting in further losses and even defections. Each match is considered a free space where for ninety minutes the two teams are presumed to be equal, notwithstanding their vastly different histories. Francis Bacon’s ideal of the ‘crucial experiment’, so eagerly adopted by Karl Popper, relates to this sensibility as definitive of the scientific attitude. And STS’s ‘social constructivism’ simply generalizes this attitude from the lab to the world. Were STS to embrace its own sensibility much more wholeheartedly, it would finally walk the walk.


Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective, December 2016:

Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30, no. 2 (2004): 225–248.

Oreskes, Naomi and Erik M. Conway. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury, 2011.

Sismondo, Sergio. “Post-Truth?” Social Studies of Science 47, no. 1 (2017): 3–6.

Author Information: David Bakhurst and Sergio Sismondo, Queen’s University at Kingston,

Bakhurst, David and Sergio Sismondo. “Commentary on Ilya Kasavin’s ‘Towards a Social Philosophy of Science: Russian Prospects’.” Social Epistemology Review and Reply Collective 6, no. 4 (2017): 20-23.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:

Image credit: Коля Саныч, via flickr

Ilya Kasavin’s paper[1] argues for a renewed conception of the philosophy of science. He laments what he sees as the present division of labour, which gives philosophy of science responsibility for the logical and methodological analysis of scientific knowledge, while the history, sociology and psychology of science are conceived as separate domains of enquiry, each with its distinct subject matter. 

Kasavin’s Project

Kasavin proposes a more holistic vision inspired by a range of Russian thinkers—including Hessen, Shpet, Vygotsky, Bakhtin, Ilyenkov, Fedorov and Vernadsky—who all offer profoundly holistic views that seek to transcend familiar oppositions between mind and world, individual and social, nature and culture. The Russian tradition yields “a more realistic image of knowledge as a complex, self-developing, human-dimensional system that can be separated from its context only by abstraction” [p. 6; translation corrected—D.B.]. Thus we cannot put the study of the philosophy, history, sociology and psychology of science into different silos. They need to be properly integrated, and when they are, philosophical insight will both inform and issue from the study of science in its various dimensions.

To illustrate his position, Kasavin invites us to consider “megaprojects”, which, in contrast to the historical and sociological case studies so characteristic of contemporary Science and Technology Studies (STS), are endeavours of such “technical complexity and political-economic significance”, that they cannot be understood without a philosophical vision. He takes as his example the building of the Kara-Kum Canal in the Stalin era, a project that, though its primary purpose was the irrigation of desert lands, had its origin, Kasavin argues, in Peter the Great’s ambition to construct a water transportation route that would unite northern and southern Russia and open up further routes to Persia, India and China. Such massive undertakings cannot be treated as if they are merely scaled up versions of smaller engineering projects. On the contrary, they present distinctive problems of explanation and analysis, and carry within them profound philosophical significance that any attempt to understand them must bring into view.

This is even more true of what Kasavin calls “global projects”, such as Isabella of Castile’s sending Columbus on his voyage of discovery, a project of truly world-historical significance that “intertwines science and everyday life, traditions and innovations, history and geography, the spontaneous inhomogeneity and constructive purposefulness of development, national mentality and the spirit of an epoch” (12). Any hope of understanding such phenomena requires more than a multi-disciplinary collaboration. It demands a “transdisciplinary reorientation” animated by the right philosophical sensibility—creative, open and holistic.

Unity? But What Unity?

How plausible is Kasavin’s optimism that the requisite philosophical sensibility is to be found in the Russian tradition? The difficulty here is that while it is relatively easy to say what the many and various Russian thinkers he presents jointly dislike, it is far harder to articulate a positive vision that they share. As Kasavin brings out, they all dismiss representationalist conceptions of mind and correspondentist theories of truth; reject scientism; distrust views that are sceptical of human creativity, and disdain those that fail to countenance the fundamentally social character of mind. It would be wrong, however, to suppose that anything like a common philosophy emerges from their work. Russian cosmism, for example, is a million miles from Ilyenkov’s Marxism. It is true, of course, that all these thinkers (with the probable exception of Bakhtin) look to philosophy for a unifying vision and represent knowledge as a oneness with reality achievable by individuals only in community with others. But there is little unity in their respective ways of developing such insights.

Kasavin invokes the distinctively Russian notion of “integral knowledge” as a unifying theme, representing it as introduced at the turn of the 20th century by a number of Russian thinkers, including Shpet and Solovyev, and subsequently taken up by Vygotsky and Bakhtin. But the notion of “integral knowledge” actually derives from the 1850s and the work of Ivan Kireevsky, one of the key figures of the Slavophile movement, and while it found various expressions in the ideas of later thinkers, it is hard to liberate it entirely from its original associations with Orthodoxy, the Russian Soul, and the transcendence of reason. These are not ideas usually associated with Vygotsky or Bakhtin, let alone Ilyenkov. We do not doubt that there is much in the Russian tradition that could contribute to the revitalization of philosophical conceptions of science, but there remains a good deal of work to be done to make this explicit.

Case Studies 

Kasavin is concerned that STS is overly focused on case studies that only rarely make philosophical contributions. Of course, it is implicit in the idea of a case study—as opposed to a study of purely antiquarian interest—that it illuminates something larger than itself: it should provide a case of something general, abstract or fundamental. Whether STS’s case studies make philosophical points will depend in part on the boundaries of philosophy, although certainly STS has helped to reshape ideas of such things as scientific argumentation and objectivity, of relations between theory and experiment, and of the application of science, all of which are important to the philosophy of science and technology.

One of the effects of ethnographic and historical case studies in STS has been to show how philosophy has often relied on idealized visions of science and technology that line up poorly with science as it is actually practiced. Philosophers have often based their views on textbook or other whiggish accounts of scientific practice, accounts that tend to draw scientific beliefs toward presently accepted truths. We might see this in terms of a kind of distance between philosophical views and actual practice. STS has replaced whiggish accounts by relentlessly constructivist ones: STS looks to how things are constructed from the ground up. The concrete details of materials, actions and representations matter to scientific and technological constructions.


One of the risks of doing empirical studies is that they may not turn out to be of any larger significance. To guard against that, Kasavin, as we observed above, turns to what he considers an empirical topic of intrinsic significance, a megaproject.

Construction on the Kara-Kum Canal, running from the Amu Dar’ya River across the Kara-Kum Desert, began in 1954, shortly after Stalin’s death, and was completed in 1988. As Kasavin describes, the canal was one of the largest engineering projects undertaken by the Soviet Union, is one of the longest waterways in the world, permitted the irrigation of what could become valuable agricultural land, and led to extensive development in Turkmenistan. Kasavin argues that the real origins of the canal lie in the era of Peter the Great, who in the early years of the eighteenth century saw commercial and political possibilities in the creation of a navigable waterway through the Kara-Kum Desert. The canal would form an important leg in the passage from the heart of Russia to India. Although Peter did not progress beyond preliminary surveying of the possible canal bed and building a few necessary political alliances, Kasavin suggests that the idea remained alive through the eighteenth and nineteenth centuries, an element of a general Russian enthusiasm for hydraulic engineering. Is the implication of the line drawn from Peter the Great’s Kara-Kum Canal to Stalin’s Kara-Kum Canal that megaprojects like these can have lives of their own?

For Kasavin, we should not assume that megaprojects are the products of economic opportunities or political circumstances. The Kara-Kum Canal did not depend on a calculation of costs and benefits, but instead issued from acts of will: first Peter’s, who did not have the power to bring it into being, and then Stalin’s, who did. Here lies a kind of romanticism in Kasavin’s account, which fuels his impatience with merely technocratic approaches to megaprojects (exemplified in his article by the Danish authors who attempt to address the anarchic tendencies of megaprojects by deciding how best to budget, plan and execute them). What needs to be understood is that, while born of pure will, the Kara-Kum Canal was built by workers who, in Trifonov’s image, were doing a kind of practical philosophy as they reshaped space and time, a kind of embodied metaphysics. Nature was mastered and transformed to human ends, most immediately the ends of Soviet society. The result is something of almost unbelievable grandeur and gravitas, producing experiences of what David Nye (following Perry Miller) calls the “technological sublime”, in which the individual human agent is dwarfed by the scale of megaprojects as the social giant unleashes its Promethean aspirations to reshape nature to human ends.

Yet to support Kasavin’s picture we surely need to study the details of how the Kara-Kum Canal and other megaprojects are actually realized.  Kasavin offers us a unifying vision, but it yields an a priori narrative, illustrated by literary texts (Platonov, Trifonov) rather than close study of historical detail. STS’s current, and very different, sensibility would suggest a need to drill down into the details of megaprojects to understand how they are made, how they work and don’t work, and how they are understood. What traces and records were left of the project imagined by Peter the Great, how were they interpreted and reinterpreted over the course of hundreds of years, and how, if at all, did they influence Stalin’s project? In what sense are these two projects connected? Planning the canal was begun under Stalin—and it is certainly plausible that the canal arose out of his force of will—but the digging, blasting and pouring of cement did not begin until after Stalin’s death, and continued for more than thirty years before the project was complete. Why did Stalin’s canal not suffer the fate of Peter’s? What important decisions, obstacles and compromises gave the canal its eventual shape?

No doubt it is only the kind of philosophical vision that Kasavin applauds that draws us to the subject matter about which we ask these questions, but it is only by attention to empirical detail that we stand a chance of answering them. And it is precisely the kind of case studies favoured in contemporary STS that have taught us a great deal about the profundity and complexity of the empirical study of science and technology. We should not scorn that legacy, as Kasavin sometimes seems to, and embrace instead a diet of speculative narratives and a priori reflections, but find a way to ensure that a due appreciation of the philosophical richness of our subject matter informs our efforts to bring out its empirical reality in all its depth. That, we contend, is the guiding principle that must inform any attempt to rethink the nature of case studies or the role of philosophy in contemporary studies of science.

[1] Kasavin, Ilya. “Towards a Social Philosophy of Science: Russian Prospects.” Social Epistemology 31, no. 1 (2017): 1-15.

The following are a set of questions concerning the place of transhumanism in the Western philosophical tradition that Robert Frodeman’s Philosophy 5250 class at the University of North Texas posed to Steve Fuller, who met with the class via Skype on 11 April 2017.


Image credit: Joan Sorolla, via flickr

1. First a point of clarification: we should understand you not as a health span increaser, but rather as interested in infinity, or in some sense in man becoming a god? That is, H+ is a theological rather than practical question for you?

Yes, that’s right. I differ from most transhumanists in stressing that short term sacrifice—namely, in the form of risky experimentation and self-experimentation—is a price that will probably need to be paid if the long-term aims of transhumanism are to be realized. Moreover, once we finally make the breakthrough to extend human life indefinitely, there may be a moral obligation to make room for future generations, which may take the form of sending the old into space or simply encouraging suicide.

2. How do you understand the relationship between AI and transhumanism?

When Julian Huxley coined ‘transhumanism’ in the 1950s, it was mainly about eugenics, the sort of thing that his brother Aldous satirized in Brave New World. The idea was that the transhuman would be a ‘new and improved’ human, not so different from a new model car. (Recall that Henry Ford is the founding figure of Brave New World.) However, with the advent of cybernetics, which was happening around the same time, the idea that distinctly ‘human’ traits might be instantiated in both carbon and silicon began to be taken seriously, with AI being the major long-term beneficiary of this line of thought. Some transhumanists, notably Ray Kurzweil, find the AI version especially attractive, perhaps because it caters to their ‘gnostic’ impulse to have the human escape all material constraints. In the transhumanist jargon, this is called ‘morphological freedom’, a sort of secular equivalent of pure spirituality. However, this is to take AI in a somewhat different direction from its founders in the era of cybernetics, which was about creating intelligent machines from silicon, not about transferring carbon-based intelligence into silicon form.

3. How seriously do you take talk (by Bill Gates and others) that AI is an existential risk?

Not very seriously—at least on its own terms. By the time some superintelligent machine might pose a genuine threat to what we now regard as the human condition, the difference between human and non-human will have been blurred, mainly via cyborg identities of the sort for which Stephen Hawking might end up being seen as a trailblazer. Whatever political questions would arise concerning AI at that point would likely divide humanity itself profoundly and not be a simple ‘them versus us’ scenario. It would be closer to the Cold War choice of Communism vs Capitalism. But honestly, I think all this ‘existential risk’ stuff gets its legs from genuine concerns about cyberwarfare. Yet taken on its face, cyberwarfare is nothing more than human-on-human warfare conducted by high-tech means. The problem is still mainly with the people fighting the war rather than the algorithms that they program to create these latest weapons of mass destruction. I wonder sometimes whether this fixation on superintelligent machines is simply an indirect way to get humans to become responsible for their own actions—the sort of thing that psychoanalysts used to call ‘displacement behavior’ but the rest of us call ‘ventriloquism’.

4. If, as Socrates claims, to philosophize is to learn how to die, does H+ represent the end of philosophy?

Of course not!  The question of death is just posed differently because even from a transhumanist standpoint, it may be in the best interest of humanity as a whole for individuals to choose death, so as to give future generations a chance to make their mark. Alternatively, and especially if transhumanists are correct that our extended longevity will be accompanied by rude health, then the older and wiser among us —and there is no denying that ‘wisdom’ is an age-related virtue—might spend their later years taking greater risks, precisely because they would be better able to handle the various contingencies. I am thinking that such healthy elderly folk might be best suited to interstellar exploration because of the ultra-high risks involved. Indeed, I could see a future social justice agenda that would require people to demonstrate their entitlement to longevity by documenting the increasing amount of risk that they are willing to absorb.

5. What of Heidegger’s claim that to be an authentic human being we must project our lives onto the horizon of our death?

I couldn’t agree more! Transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection. I think Heidegger and other philosophers have invested such great import on death simply because of its apparent irreversibility. However, if you want to recreate Heidegger’s sense of ‘ultimate concern’ in a post-death world, all you would need to do is to find some irreversible processes and unrecoverable opportunities that even transhumanists acknowledge. A hint is that when transhumanism was itself resurrected in its current form, it was known as ‘extropianism’, suggesting an active resistance to entropy. For transhumanists—very much in the spirit of the original cybernetician, Norbert Wiener—entropy is the ultimate irreversible process and hence ultimate challenge for the movement to overcome.

6. What is your response to Heidegger’s claim that it is in the confrontation with nothingness, in the uncanny, that we are brought back to ourselves?

Well, that certainly explains the phenomenon that roboticists call the ‘uncanny valley’, whereby people are happy to deal with androids until they resemble humans ‘a bit too much’, at which point people are put off. There are two sides to this response—not only that the machines seem too human but also that they are still recognized as machines. So the machines haven’t quite yet fooled us into thinking that they’re one of us. One hypothesis to explain the revulsion is that such androids appear to be like artificially animated dead humans, a bit like Frankenstein. Heideggerians can of course use all this to their advantage to demonstrate that death is the ultimate ‘Other’ to the human condition.

7. Generally, who do you think are the most important thinkers within the philosophic tradition for thinking about the implications of transhumanism?

Most generally, I would say the Platonic tradition, which has been most profound in considering how the same form might be communicated through different media. So when we take seriously the prospect that the ‘human’ may exist in carbon and/or silicon and yet remain human, we are following in Plato’s footsteps. Christianity holds a special place in this line of thought because of the person of Jesus Christ, who is somehow at once human and divine in equal and all respects. The branch of theology called ‘Christology’ is actually dedicated to puzzling over these matters, various solutions to which have become the stuff of science fiction characters and plots. St Augustine originally made the problem of Christ’s identity a problem for all of humanity when he leveraged the Genesis claim that we are created in ‘the image and likeness of God’ to invent the concept of ‘will’ to name the faculty of free choice that is common to God and humans. We just exercise our wills much worse than God exercises his, as demonstrated by Adam’s misjudgment, which started Original Sin (an Augustinian coinage). When subsequent Christian thinkers have said that ‘the flesh is weak’, they are talking about how humanity’s default biological condition holds us back from fully realizing our divine potential. Kant acknowledged as much in secular terms when he explicitly defined the autonomy necessary for truly moral action in terms of resisting the various paths of least resistance put before us. These are what Christians originally called ‘temptations’, Kant himself called ‘heteronomy’ and Herbert Marcuse in a truly secular vein would later call ‘desublimation’.

8. One worry that arises from the Transhumanism project (especially about gene editing, growing human organs in animals, etc.) regards the treatment of human enhancement as “commercial products”. In other words, the worry concerns the (further) commodification of life. Does this concern you? More generally, doesn’t H+ imply a perverse instrumentalization of our being?

My worries about commodification are less to do with the process itself than with the fairness of the exchange relations in which the commodities are traded. Influenced by Locke and Nozick, I would draw a strong distinction between alienation and exploitation, which tends to be blurred in the Marxist literature. Transhumanism arguably calls for an alienation of the body from human identity, in the sense that your biological body might be something that you trade for a silicon upgrade, yet your humanity remains intact on both sides of the transaction, at least in terms of formal legal recognition. Historic liberal objections to slavery rested on a perceived inability to do this coherently. Marxism upped the ante by arguing that the same objections applied to wage labor under the sort of capitalism promoted by the classical political economists of Marx’s day, who saw themselves as scientific underwriters of the new liberal order emerging in post-feudal Europe. However, the force of Marxist objections rests on alienation being linked to exploitation. In other words, not only am I free to sell my body or labor, but you are also free to offer whatever price serves to close the sale. However, the sorts of power imbalances which lie behind exploitation can be—and have been—addressed in various ways. Admittedly more work needs to be done, but a time will come when alienation is simply regarded as a radical exercise of freedom—specifically, the freedom to, say, project myself as an avatar in cyberspace or, conversely, convert part of my being to property that can be traded for something that may in turn enhance my being.

9. Robert Nozick paints a possible scenario in Anarchy, State, and Utopia where he describes a “genetic supermarket” where we can choose our genes just as one selects a frozen pizza. Nozick’s scenario implies a world where human characteristics are treated in the way we treat other commercial products. In the Transhuman worldview, is the principle or ultimate value of life commercial?

There is something to that, in the sense that anything that permits discretionary choice will lend itself to commercialization unless the state intervenes—but I believe that the state should intervene and regulate the process. Unfortunately, from a PR standpoint, a hundred years ago that was called ‘eugenics’. Nevertheless, people in the future may need to acquire a license to procreate, constraints may even be put on the sorts of offspring that are and are not permissible, and people may even be legally required to undergo periodic forms of medical surveillance—at least as a condition of employment or welfare benefits. (Think Gattaca as a first pass at this world.) It is difficult to see how an advanced democracy that acknowledges already existing persistent inequalities in life-chances could agree to ‘designer babies’ without also imposing the sort of regime that I am suggesting. Would this unduly restrict people’s liberty? Perhaps not, if people will have acquired the more relaxed attitude to alienation, as per my answer to the previous question. However, the elephant in the room—and the issue which I argued in The Proactionary Imperative is more important—is liability. In other words, who is responsible when things go wrong in a regime which encourages people to experiment with risky treatments? This is something that should focus the minds of lawyers and insurers, especially in a world where people are presumed to be freer per se because they have freer access to information.

10. Is human enhancement consistent with other ways in which people modify their lifestyles, that is, are they analogous in principle to buying a new cell phone, learning a language or working out? Is it a process of acquiring ideas, goods, assets, and experiences that distinguish one person from another, either as an individual or as a member of a community? If not, how is human enhancement different?

‘Human enhancement’, at least as transhumanists understand the phrase, is about ‘morphological freedom’, which I interpret as a form of ultra-alienation. In other words, it’s not simply about people acquiring things, including prosthetic extensions, but also converting themselves to a different form, say, by uploading the contents of one’s brain into a computer. You might say that transhumanism’s sense of ‘human enhancement’ raises the question of whether one can be at once trader and traded in a way that enables the two roles to be maintained indefinitely. Classical political economy seemed to imply this, but Marx denied its ontological possibility.

11. The thrust of 20th Century Western philosophy could be articulated in terms of the struggle for possible futures, whether Marxist, Fascist, or some other ideologically utopian scheme, and the philosophical fallout of coming to terms with their successes and failures. In our contemporary moment, it appears as if widespread enthusiasm for such futures has disappeared, as the future itself seems as fragmented as our society. H+ is a new, similar effort; but it seems to be a specific evolution of this futurism focused, not on a society, but on the human person (even, specific human persons). Comments?

In terms of how you’ve phrased your question, transhumanism is a recognizably utopian scheme in nearly all respects—including the assumption that everyone would find its proposed future intrinsically attractive, even if people disagree on how or whether it might be achieved. I don’t see transhumanism as so different from capitalism or socialism as pure ideologies in this sense. They all presume their own desirability. This helps to explain why people who don’t agree with the ideology are quickly diagnosed as somehow mentally or morally deficient.

12. A common critique of Heidegger’s thought comes from an ethical turn in Continental philosophy. While Heidegger understands death to be the harbinger of meaning, he means specifically and explicitly one’s own death. Levinas, however, maintains that the primary experience of death that does this work is the death of the Other. One’s experience with death comes to one through the death of a loved one, a friend, a known person, or even through the distant reality of a war or famine across the world. In terms of this critique, the question of transhumanism then leads to a socio-ethical concern: if one, using H+ methods, technologies, and enhancements, can significantly inoculate oneself against the threat of death, how ethically (in the Levinasian sense) can one then legitimately live in relation to others in a society, if the threat of the death of the Other no longer provides one the primal experience of the threat of death?

Here I’m closer to Heidegger than Levinas in terms of grounding intuition, but my basic point would be that an understanding of the existence and significance of death is something that can be acquired without undergoing a special sort of experience. Phenomenologically inclined philosophers sometimes seem to assume that a significant experience must happen significantly. But this is not true at all. My main understanding of death as a child came not from people I knew dying, but simply from watching the morning news on television and learning about the daily body count from the Vietnam War. That was enough for me to appreciate the gravity of death—even before I started reading the Existentialists.