Archives For Articles

Articles are stand-alone contributions to SERRC.

Author Information: Lee Basham, South Texas College/University of Texas, Rio Grande Valley, labasham@southtexascollege.edu

Basham, Lee. “Border Wall Post Truth: Case Study.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 40-49.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Eu


Image credit: Anne McCormack, via flickr

“The more you show concern, the closer he’ll go to the edge … Some things are just too awful to publicize.”—Don DeLillo, White Noise

“History is hard to follow. Luckily, they killed Kennedy. Leaves bread crumbs if we stray.”—Alfonso Uribe

Dogs don’t look up. The higher the bone is tossed, the less likely they are to see it. Lost in a horizontal universe, they run tight circles, wondering, “Where is it?” On its way down it hits them on the head. Civilized primates are surely different. Our steep information hierarchies are different. Or are they? In the high castles of information, a few above look upon the many circling below.

Far South Texas, a bone’s throw (or gunshot) from the US/Mexican border, enjoys post truth as a storied and comfortable tradition. So stable, we might question the addendum “post”. Here truth is ephemeral. Like rain, it appears rarely. When it does, it collects in pools, grows strange stuff, gets smelly and then dries up.

Are we suddenly flung into a post-truth world? The sophists lost that one; the Stalinists, too. But history’s lessons, like a grade-2 curriculum, never end. They remain the same. Hope springs eternal. Adam Riggio, in “Subverting Reality”, takes a personal approach, emphasizing trust before truth, even providing a theory of true punk music: if form, then content. All else is appropriation. Meet fake punk. While I’m not sure about that, I’m sympathetic. Perhaps form does not formulate in the end, which is why we should be suspicious of any form-allegiance. Including representational democracy. But his is an understandable approach. Like Riggio, I’ll take a personal line.

In letter-to-the-editor style: I reside in McAllen, Texas, in the Rio Grande Valley. Locals call this the “RGV” or “956”.[1] Table chat I’ve shared in the wealthy parlors of Austin and San Antonio insists we are not really part of Texas: “They’re all Mexican”. But the map indicates we are, because we are on the North side of the river.

A few miles South of town we have a long stretch of the Mexico/US border. The Wall. It looks like minimalist conceptual art from the 1960s. Donald Judd comes to mind, Donald Trump, too.[2] Professional photographers adore it, prostrate before it. They fly in just to see and click. The border wall is by nature post-trust and so, post-truth. This post truth is a concrete condition. Literally. The Wall is made of concrete and steel, and I’ve climbed it. Took me 1.5 minutes (a bit slower than average; wear tennis shoes, not boots). Recently, epistemologists have explored this scenario. Suspicion is natural to social primate life, not shocking, misplaced or shameful: the battle is not for trust, but for realistic, strategic distrust.

Post Truth Life

We are Texas and proud we are. We proudly supply Washington DC with its cocaine, providing the capital the highest quality, best prices, in vast quantities. Our product is legend, a truly international undertaking, spanning 13 countries. This is our number one economic achievement. We proudly provide the largest, most vibrant, corporate retail experience to be found anywhere between San Antonio and the Federal District of Mexico. Our shopping is legend, a truly international undertaking, filling the parking lots with cars from the Mexican states of Tamaulipas, Nuevo León and DF, alongside Canadian vehicles from Ontario, Alberta, Quebec and others.[3] We are Texas and proud we are. This is our number one economic achievement. As one might imagine, such a list goes on. The local banks reflect our achievement. Billions of dollars beyond the productive abilities of our local legal economy are on deposit. Officially, almost every penny in the banks is owed to the success of our local legal economy. But what I take to be our greatest achievement, which all this and more rests upon, is the borderland mind. In the parlance of the moment, it is deliciously post-trust and post-truth. If this isn’t social epistemology, what is?

I have lived on the border for more than a decade. My wife, originally from Monterrey, Mexico, and her family, have lived here since she was 14, and for several years before that just a few blocks South of the river’s South side. While most academics are Anglo imports and cling to the same, I didn’t make that mistake. Her family and my friends provide an intimate understanding.

Conspiracy theory is the way of life here, much of it well informed. Though truth is rare enough, its seasons are established and understood. The winds that sweep from Mexico into the North whip up some remarkable and telling conspiracy theories. As does the wind from Washington. Escobares, one of the oldest cities in the US, is a short drive West of McAllen. Its church is built of petrified wood. On the border even the US census is post-truth and seen as such; not just in population count (understandably, it misses half the people):

At the 2010 census the city of Escobares had a population of 1,188. The racial composition of the population was 98.3% white (7.2% non-Hispanic white), 1.6% from some other race and 0.1% from two or more races.

Yet, 92.8% of the population was Hispanic or Latino with 92.3% identifying as being ethnically Mexican.[4]

Escobares is a white town? McAllen has a nearly identical US census profile. Derisive laughter on local radio and in front yard parties follows.

The Wall of Conspiracy

The Wall is patchy, has gaps. Erected under President Obama, many miles here, many miles there, it is hung with ropes everywhere to help travelers across. Little kids’ shoes, kicked off as they climb, litter its base. Sometimes the kids fall. The Wall is not monolithic. Nor is opinion. Surprisingly, in an almost entirely Hispanic community, completing The Wall is both opposed and supported by many. Often the same people. This is not insanity, it is time-honored strategy. It brings to mind the old movies where people hang two-sided picture frames with opposing photos, and flip the frame according to what a glance out the window tells them about their arriving guests. The photos mean nothing, the flipping, everything. Fireside conversations become remarkable. The anti-wall protests of local politicians are viewed in a familiar post-truth, fading race-war narrative: They have to say that. Both Democrats and Republicans copycat this story line and then deny any allegiance to it at Rotary Club meetings before racially well-mixed and approving audiences. Legal trade is good, the rest is a mess. Why a wall? None of them would do any lucrative illegal business. They pray before their meetings. But Northern cities in Mexico promote ineffective boycotts of McAllen’s retail miracle because of The Wall. They fear it hurts them financially. Odd. The McAllen mayor responds by stringing a broad, mixed-language banner across Main Street, declaring, “Bienvenidos to McAllen, Always Amigos”. The Wall issue dissolves.

Charades require political tension, sincere or contrived, perhaps a tactic of negotiation.

Why local support for The Wall? Too many headless bodies, too many severed heads. People are sick of the untouchable prostitution trap houses north and east of town. Fenced-in, barbed-wired, cinder-block buildings with armed guards, stocked with poached immigrant girls and boys, a parking lot full of Ford F-150 trucks. The kidnappings of immigrants, the torture chambers and videos when the money never arrives. Those who by sheer luck avoid such fates are relegated to back-country depots and “abandoned” houses. Often they are abandoned, forced to burglarize and rob to eat and continue their trek north.

People are also tired of the border’s relentless yet ironically impotent police state. One cannot drive the 57 miles from McAllen, Texas to Rio Grande City without passing 20 or more roadside State Troopers in their cartel-black SUVs. Don’t bother to count the Border Patrol SUVs: they are more numerous. The State Troopers, euphemistically agents of “The Department of Public Safety” (DPS), fill our now crowded jails with locals, on every imaginable infraction, no matter how trivial. After asking me where I lived, at the end of a convenience-store line conversation, one told me, white on white, “Then ya know, people here are bad.”[5] These are not local sheriffs, born and raised here, who understand people and who is and isn’t a problem. DPS is relentless, setting impromptu road blocks throughout our cities, tossing poor people in “county” for not having car insurance and the money to pay for it on the spot. Whole Facebook pages are devoted to avoiding the road-blocks in 956. Down at McAllen’s airport, entire multi-story, brand-new hotels are now filled with foreign agents of the state. The whole monster-mash, every-day-is-Halloween scene down on the border could be chronicled for pages.

All of this is perceived by a hardworking, fun-loving, family-driven community as an ill wind from the South, drawn by the bait-and-switch vacuum of an uncaring, all-consuming “great white north”, and a Washingtonian two-face. Right they are. With The Wall, perhaps these police-state parasites will leave. The slave traps will wither by the rule of no supply. Rich white and agringado activists up North be damned, who for their own, disconnected reasons demand it never end.[6] To quote a close relative, “Nombre! They don’t live here!”

People see The Wall as a conspiracy to placate the xenophobes up North, not to protect anyone. Keep the cheap labor coming but assert, “We did something to stop it.” People see The Wall as protection for those who otherwise would cross and fall into the many traps set for them by the coyotes; they also see The Wall as protection for themselves. They see The Wall as a conspiracy supported by the drug cartels and the Mexican government the cartels control (its official protests notwithstanding) to simplify the business model, driving the local cells and resident smuggling entrepreneurs out of business. Using operatives in ICE and the Border Patrol is more efficient: cut out the middle women and men. People lament the damage this will do to our local economy and, in some cases, personal income. People praise this. People see those in the North who oppose The Wall as political fodder used by those who could not care less about them, but want to pretend they do without having a clue, or even trying to get one. People believe The Wall is a conspiracy not just to keep Hispanics out, whom they often despise depending on country (“OTMs”, Other Than Mexicans), but to keep Americans in. As I quickly learned, though few border-landers verbally self-identify as “Mexicans” (that takes a trip across the river), they view a dangerous Mexico as a safe haven if things “go south” here in the United States. If a theoretical, grave political or economic crisis occurs, or just a particularly unpleasant but very real legal entanglement, escape to Mexico is their first resort.

People ask: after the finished wall, added concertina wire and all, what if they close the bridges? When they need to run, they want to be able to. People see The Wall as an attempt to destroy the Mexican economy, forcing Mexico into the proposed North American Union, where Canada has submitted in principle and the only hold-outs are the resolute patriots of the Republic of Mexico: “Mexico, so far from God, so close to the United States”.[7] Washington will never be its capital. A noble sentiment. More pedestrian conspiracy theories circulate about campaign contributions from international construction corporations and their local minions. Workers on both sides of the river hope the fix is in; it means jobs for everyone. Recall that the Israeli government hired eager Palestinians to build its wall; but that’s another post truth reality. Revealingly, the Israeli example has been promoted in the American press as a model, with the notorious phrase “best practices”. Such is the politics of promised lands.

What is Post Truth?

Post truth is, first, the claim that access to a shared, community truth is now lost. But that alone would only entail agnosticism. Post truth is more. It is also, second, the claim that seemingly contradictory claims now enjoy equal legitimacy in the government, the media and with the citizenry. No one looks up. This is an unlikely construct. Like choosing wallpaper, but this time for the mind, what a citizen believes, political, economic or otherwise, is entirely a matter of personal taste. And there is no accounting for taste. No epistemic grounds for ordinary controversy, but insidiously a double-truth theory laid upon the collective consciousness of democratic society. Collective madness. Hence: a post truth world. It’s a catastrophe. Or is it? Look up at the above. What is epistemically interesting is that most of the conspiratorial stances above do enjoy some significant evidence and are mutually consistent. Hence they are simultaneously believed by the same persons. Enter real “post truth”, and a larger diagnosis of our information hierarchy: it is not reliable. Instead we look to each other.

Five Suggestions about Post Truth

Post truth is about epistemology, social and otherwise, but only at one or more steps removed. On the ground it is entirely pragmatic. Post truth is not to be confused with mere state propaganda. That is another, much narrower notion. Post truth, as defined above, is ancient and ubiquitous. The 21st century is no different.

❧ ❧ ❧

1. The first, a bit tiresome to repeat, is found in several epistemic critiques of the pathologizing approach to conspiracy theory: We should not conflate suspicions with beliefs. There is nothing cognitively anomalous about post truth states of consciousness when read this way.[8] Suspicion is epistemically virtuous. The fears surrounding ambitions of pathology, however great, are immediately de-sized in the face of this simple distinction. Suspicion is one of the virtues of Eric Blair’s famous character, Winston Smith—at least until he trusts and is captured, tortured and turned.

❧ ❧ ❧

2. “Post” implies a time before that has passed. More formally, it might be termed a tense-based situational truth agnosticism.[9] Applied to “trust” and “truth” on the border, this proposed time before would require reference to the more social and intelligent Pleistocene mammals. Maybe to the first human visitors, ten or more thousand years ago, no doubt in search of water. An attitude of panic towards “post truth” seems misplaced. Nothing can survive laughter. This is a second suggestion. Post truth hysteria is, while initially quite understandable, difficult to take seriously for long. Rage concerning it, even more so.[10]

❧ ❧ ❧

3. Linguists point out that “trust” and “truth” are closely related. One births the other. By accident and so inclination, I am an epistemologist of trust, especially its “negative spaces”, to borrow from art theory. These spaces in our current information hierarchy, where so few control what so many hear, and often believe, are legion. In our society navigating them is elevated to high art, one we should not fear. My third suggestion is that if nothing changes, then nothing changes. And my prediction: nothing changes in a post truth world. Because nothing has changed. Or soon will.

Post trust is not the new normal, it is the oldest one. You don’t know people, or societies, until you go about with them. We should be cautious, watchful. As my son would put it, “We should lurk them hard”. A skeptical attitude, an expectation of post truth because of a post trust attitude, is appropriate, an adult attitude. Among billions of humans of all types and classes, we hardly know anyone. And those who protest this doth protest too much. Such an attitude of truth-privilege, as found among the denizens of the political avant-gardes and their fellow travelers in our mass media, has always been unearned.[11] One often betrayed. Professional managers of belief I will grant the mainstream media; professional purveyors of truth is quite a stretch, a needless one. But a conceit that has proven lethal.

Consider the 2003 Iraq invasion. We were told at the time, by both the current and a prior president, that it was an invasion for feminism.[12] The media, including the New York Times, chimed in approval. Normalizing this invasion was this media’s crowning achievement of the 21st century’s first decade. One might think they got off on the wrong foot, but that would entirely depend on what the right foot is. I argue for a more functional outlook. Their function is basic societal stability, congruence with official narratives when these are fundamental ones, not truth; an establishment of normality in virtually anything. Truth has its place at their table only among the trivial, not basic stability. Consider the US civil rights movement. Here the political avant-gardes and mass media had an effect we view as laudable. Yet this did not threaten the established political or capitalist order. It ushered old participants into greater integration within it and to new levels of participation on its behalf. Mr. Obama, for instance.

❧ ❧ ❧

4. Mainstream media and avant-garde political pronouncements are unreliable in proportion to how important it is, to the purveyors, that we accept them. I don’t mean this as revelatory, rather in the manner of reminder. The opportunities for manipulation loom especially large when popular cultures are involved, and the ways we identify with these are transitioned into apathy or atrocities. Or both, simultaneously. This transcends political dichotomies like “right” and “left”. Both, because of their simplicity, are easy marks. The proper study, perhaps, is that of “faction”. A war for feminism? A war to extend democracy? A war for Arab prosperity and against child poverty? A war for American energy independence? A war for the world: Pax Americana? But the ploy worked, both popularly and within academia. It’s being re-wrought today. In the popular and academic hysteria following 9/11, Michael Walzer, champion of Just War Theory, wrote,

Old ideas may not fit the current reality; the war against terrorism, to take the most current example, requires international cooperation that is as radically undeveloped in theory as it is in practice. We should welcome military officers into the theoretical argument. They will make it a better argument than it would be if no one but professors took interest.[13]

Walzer asks to take his place among the generals. He goes on to argue for the importance of aerial bombing while trying not to blow young children to smithereens. Walzer’s justification? Protecting US soldiers. If any of this strikes us as new, or news, we live in what I like to call the united states of amnesia. He claims current bombing technology overwhelmingly protects the innocent. An interesting post truth formula. Who then are the guilty soldiers and functionaries, and how could they be? Denounce the stray bomb fragments, then embrace the counsel of professional conspirators of death in our moral considerations. This is suspect, politically, morally and epistemically. It is also feminism. That’s a post truth world. Long before a real estate agent joined the pantheon of US presidents.

The rebellion of conspiracy theory helps here. Conspiracy theory is typically, and properly, about suspicion, not belief. Certainty, even if just psychological “truth”, is not an option for a responsible citizen. A vehement lament and protest against post-truth is inadequate if it ignores the importance of suspicion. But nothing like suspicion post-trusts and so post-truths. To borrow a lyric from Cohen, “that’s how the light gets in”. And we post-any-century primates have good reason for suspicion. True, the opening years of the 21st century hit a home run here, but it wasn’t the first or the last. If anything is transcendently true, that’s it.

If this functional, suspicious understanding becomes our baseline epistemology (as it is where I live), we might worry catastrophe will ensue. Like leaving a baby alone in a room with a hungry dog. But what actually happens is the dog patiently waits, ignoring the obvious. Good dog. People and dogs share much. With humans what actually ensues is table talk, memes on the internet, and winks and rolling eyes across the TV room. Formerly known as the “living room”, this post-living room space is not grade school and we are not attentive, intimidated students. We’re artists of negative spaces and we usually negotiate them with aplomb. Unless we really think mass media reliability is what post truth is post to. Then, I suppose, catastrophe does ensue: only a brief emotional one, similar to losing one’s religion, one’s political piety. Cass Sunstein provides,

“Our main policy claim here is…a distinctive tactic for breaking up the hard core of extremists who supply conspiracy theories: cognitive infiltration of extremist groups, whereby government agents or their allies (acting either virtually or in real space, and either openly or anonymously) will undermine the crippled epistemology of believers by planting doubts about the theories and stylized facts that circulate within such groups.”[14]

Let’s conspire against citizens who worry you might be conspiring against them. Is there anything new here?

Riggio on Post Truth

Like Riggio, I view the existence of political truth as beyond evident. In the face of rhetoric concerning a “post truth” contagion, Riggio counters that there is instead a battle for public trust. He’s right. He’s channeling, in fact, Brian Keeley’s classic public trust approach to alternative thought.[15] Mainstream media functions much as our confidence in science does. But Riggio seems to think the battle is a new one, and one worth fighting and “winning”. Now what would winning be? As we finally fall asleep at night, we might appreciate this. But not in daylight. There’s no battle for public trust there. Most of us don’t trust, but say we do. And that’s a good thing.

Public trust long ago headed down the yellow brick road with Dorothy in search of a wizard. Lies and compromise are recognized, from all quarters, as our long-term norm. Dorothy’s surprise, and the wizard’s protests when he is revealed, should hardly surprise. This is the road of the golden calf, representational democracy.

The closer you get to Washington DC, Paris, Beijing, London or the democratic republic of Moscow, the more obvious this perception and reality is. It’s celebrated in transatlantic, transnational airplane conversations that last for hours. It’s palpable before the edifices of any of these capitals’ secular monuments. As palpable before the non-secular: standing a few blocks from the Vatican, a previous political model, we can’t really deny it. These edifices now, as they were before, are saturated in farce.[16] Adam Riggio’s impassioned political piece, with his hands on the cold marble, reminds us that being too close to the temple can blind us to its real shape, strength and impressive age. Riggio writes,

[Mainstream media’s behavior] harms their reputation as sources of trustworthy knowledge about the world. Their knowledge of their real inadequacy can be seen in their steps to repair their knowledge production processes. These efforts are not a submission to the propagandistic demands of the Trump Presidency, but an attempt to rebuild real research capacities after the internet era’s disastrous collapse of the traditional newspaper industry.[17]

I see this as idealized media primitivism: “If only we could go back”. It’s absolutely admirable. But was print media ever supposed to be trusted? Print media set the stage for the invasions of Cuba and Mexico. It suppressed the deadly effects of nuclear testing in the 1950s and 60s and then promulgated apologetics for the same. Between 1963 and 1967 the Vietnam War was “the good guys shooting the Reds”.[18] It played a similar role in Central American intervention, as well as the first and second “gulf” wars, fought deep in the desert. Mainstream media has long been superb at helping start wars, but way late to the anti-war party and poor at slowing or ending the same wars it supported. A post truth world hypothesis predicts this. An interesting point, one more interesting the more intense the consequences are. The more seemingly significant a political event—such as bizarre politics or senseless wars—the more normal its initial portrayal by mainstream media. Eventually damage control follows. Public trust? Not likely. Certainly not well placed.

❧ ❧ ❧

5. So a final, fifth suggestion. Our paleo post-truth vision taps on our shoulders: the “new normal” political panic concerning a “post truth” world that we find in political conversation and in mass media is an ahistorical and ephemeral protest. Our strange amnesia concerning our wars, the conduct of such and their strange results, should be evidence enough. Communist Vietnam, with its victory in 1975, was by 1980 a capitalist country par excellence. An old point, going back to Orson Welles’ Citizen Kane. “I remember the good ole days when we had newspapers” seems an unlikely thesis.

Recall Eastern Europe. While giving a talk on conspiracy theories and media in Romania, one that might be characterized as a post truth position on media reliability in times of extreme crisis, I found the audience welcomed the remarks but considered them fairly obvious. They doubted we of the West really had a free mainstream media in contrast, but they enjoyed the idea, the way we might enjoy a guest’s puppy; he’s cute. The truth can be toxic in many social and political settings. Good arguments indicate mass media hierarchies react accordingly everywhere. Far from being tempted to promulgate such truths, like the aforementioned hungry dog and the baby, they leave toxic investigation alone. Why look? Why bite?

Conclusion

Politicization of knowledge is dubious. “Post Truth” is a political term of abuse, one that will quickly pass; a bear trap that springs on any and all. Just before the First World War, in 1912, Bertrand Russell pointed out that the truth “must be strange” about the most ordinary things, like tables or chairs.[19] Are politics, mass media power, any less strange? Now we all stand, down by the river, awaiting the evening’s usual transactions and gunfire.

We live in the united states of amnesia. In the rush of contemporary civilization, memories are short, attention fractured and concentration quickly perishes. We just move on. The awesome spectacle of seemingly omnipotent governments and an ideologically unified corporate global mass media, along with a population driven by consumption and hedonism, might create a sense of futility where subversive narratives are concerned. But then in new form the subversive narratives are reborn and powerfully spread. The growing intensity of this cycle should give us pause. Perhaps the answer does not lie in seeking new, remedial, intellectually sophisticated ways to ignore it, but in addressing our information desert, our scarcity of real epistemic access to the information hierarchy hovering above us. And in discovering ways this can be reversed in a world of unprecedented connectivity, so epistemic rationality can play a decisive role.[20]

For some, this truth about post truth and its vicious ironies creates a scary place. Here on the edge of the United States, people have learned to live through that edge and embrace it. But in the cozy heartlands of the US, Canada and Europe, most prefer to die in the comfort of their TV rooms so they don’t die “out there”, as Cormac McCarthy puts it, “…in all that darkness and all that cold”. But when the long reality of a post trust, post truth world is forcibly brought to their attention by real estate developers, some react, like Dorothy, with rage and despair. This is a mistake.

Social epistemology should embrace a socially borne epistemic skepticism. This is not an airborne toxic event, it is fresh air. Social epistemology might not be about explaining what we know so much as explaining what we don’t, and the value of this negative space, its inescapability and benefits: the truth about post trust and truth. Post truth is everywhere, not just here on the border. We can’t land in Washington DC at Ronald Reagan National Airport and escape it. Welcome to the post-truth border, bienvenidos a la frontera, where we all live and always have. Certainty is an enemy of the wise. If thought a virtue, representational democracy is the cure.

This returns us to dogs. Dog-like though we be, primates can certainly learn to look up in intense interest. At the stars, for instance. I oppose The Wall. And can climb it. We don’t know until we go. The border is just beyond your cellar door. Do you live in Boston? There you are. Once you open up, look up. Don’t circle about in tight illusions. Embrace the blooming, buzzing confusion.[21] You don’t know my real name.

[1] Local area code.

[2] Chilvers, Ian & Glaves-Smith, John eds., Dictionary of Modern and Contemporary Art. Oxford: Oxford University Press, 2009.

[3] The latter are the so called “Winter Texans”. Fleeing the North’s ice and snow, but unwilling to cross the border and venture farther South into Mexico (except for one military controlled, dusty tourist town immediately across the river, wonderfully named “Nuevo Progreso”), they make their home here through fall, winter and spring.

[4] United States Census Bureau. Archived from the original on 2013-09-11. Retrieved 2008-01-31.

[5] DPS officers are not all this way. Many are quite compassionate, and increasingly confused by their massive presence here.

[6] “Agringado”: “becoming a gringo”.

[7] President Porfirio Diaz, “Tan lejos de Dios y tan cerca de los Estados Unidos.”.

[8] See Basham, Lee and Matthew R. X. Dentith. “Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 12-19, and subsequent remarks, Dieguez, Sebastian, Gérald Bronner, Véronique Campion-Vincent, Sylvain Delouvée, Nicolas Gauvrit, Anthony Lantian & Pascal Wagner-Egger. “’They’ Respond: Comments on Basham et al.’s ‘Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone’.” Social Epistemology Review and Reply Collective 5, no. 12 (2016): 20-39. Basham, Lee. “Pathologizing Open Societies: A Reply to the Le Monde Social Scientists.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 59-68.

[9] While a realist about truth, a situational truth agnosticism does not entail warrant/justification agnosticism. We don’t need to know if something is true to know it is probably true, given our best evidence, or probably not true.

[10] The political fate of Bernie Sanders comes to mind. A fine candidate, and my preferred, he was forced to recant at the Democratic Party Convention in 2016. One recalls the Hindenburg.

[11] The usual US suspects include CNN (“Combat News Network” in 2003-10 and more recently, “Clinton News Network”), NBC (“National Bombing Communications”) and FOX (a bit harder to parody due to the “x”, even though Mr. O’Reilly offered his services).

[12] George W. Bush and William J. Clinton.

[13] Walzer, Michael. “International Justice, War Crimes, and Terrorism: The U.S. Record.” Social Research, 69, no. 4 (winter 2002): 936.

[14] Cass Sunstein and Adrian Vermeule, “Conspiracy Theories: Causes and Cures”, University of Chicago Law School Public Law & Legal Theory Research Paper Series Paper No. 199 and University of Chicago Law School Law & Economics Research Paper Series Paper No. 387, 2008, 19, reprinted in the Journal of Political Philosophy, 2009.

[15] Keeley, Brian. “Of Conspiracy Theories”, Journal of Philosophy, 96, no. 3 (1999): 109-26. Keeley’s is a classic, but the Public Trust Approach (PTA) he advocates appears to fail on several levels. See the several critiques by Lee Basham, David Coady, Charles Pigden and Matthew R.X. Dentith.

[16] Not only farce, but a fair share.

[17] Riggio, Adam. “Subverting Reality: We Are Not ‘Post-Truth,’ But in a Battle for Public Trust.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 71.

[18] See Hallin, Daniel C. The Uncensored War: The Media and Vietnam. New York: Oxford University Press, 1986.

[19] Russell, Bertrand, The Problems of Philosophy, Henry Holt and Company, New York, 1912. Russell continues, “In the following pages I have confined myself in the main to those problems of philosophy in regard to which I thought it possible to say something positive and constructive, since merely negative criticism seemed out of place.”

[20] A paraphrase from “Conspiracy and Rationality” in Beyond Rationality, Contemporary Issues. Rom Harré and Carl Jenson, eds. Cambridge Scholars, Newcastle (2011): 84-85.

[21] James, William. The Principles of Psychology. Cambridge, MA, Harvard University Press, 1890, page 462.

Author Information: Robert Frodeman, University of North Texas, Robert.Frodeman@unt.edu

Frodeman, Robert. “Socratics and Anti-Socratics: The Status of Expertise.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 42-44.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3AO


Image credit: J.D. Falk, via flickr

Do we, academically trained and credentialed philosophers, understand what philosophy is? It’s a disquieting question, or would be, if it could be taken seriously. But who can take it seriously? Academic philosophers are the inheritors of more than 100 years of painstaking, peer-reviewed work—to say nothing of centuries of thinking before that. Through these efforts, philosophy has become an area of expertise on a par with other disciplines. The question, then, is silly—or insulting: of course philosophers know their stuff!

But shouldn’t we feel a bit uneasy by this juxtaposition of ‘philosophers’ and ‘know’? We tell our introductory classes that ‘philosopher’ literally means to be a friend or lover of wisdom, rather than to be the actual possessor of it. And that Socrates, the patron saint of philosophy, only claimed to possess ‘Socratic wisdom’—he only knew that he knew nothing. Have we then abandoned our allegiance to Socrates? Or did we never take him seriously? Would philosophers be more candid quoting Hegel, when he noted in the Preface to the Phenomenology of Spirit that his goal was to “lay aside the title ‘love of knowing’ and be actual knowing”? But wouldn’t that mean that philosophers were not really philosophers, but rather sophists?

Two Types of Sophists

The Greeks knew two types of sophists. There were the philosophical sophists, who had skeptical beliefs about the possibilities of knowledge. Protagoras, the most famous of these, claimed that experience is inescapably subjective: the same wind blows both hot and cold depending on the person’s experience. But also, and more simply, sophists were people in the know, or as we say today, experts: people able to instruct young men in skills such as horsemanship, warfare, or public speaking. There are some philosophers today who would place themselves in the first category—for instance, standpoint epistemologists, who sometimes make similar claims in terms of race, class, and gender—but it seems that nearly all philosophers place themselves in the latter category. Philosophers today are experts. Not in philosophy overall, of course, that’s too large a domain; but in one or another subfield, ethics or logic or the philosophy of language.

It is the subdividing of philosophy that allows philosophers to make claims of expertise. This point was brought home recently in the dustup surrounding Rebecca Tuvel’s Hypatia article “In Defense of Transracialism.” Tuvel’s piece prompted the creation of an Open Letter, which collected more than 800 signatories by the time it was closed. The Letter called on Hypatia to retract publication of her essay. These critics did not merely disagree with her argument; they denied her right to speak on the topic at all. The Letter notes that Tuvel “fails to seek out and sufficiently engage with scholarly work by those who are most vulnerable to the intersection of racial and gender oppressions….”

Tuvel’s article and the subsequent publication of the Open Letter have elicited an extended series of commentaries (including no fewer than two op-eds in the New York Times). The exact criteria of those who wished to censure Tuvel have varied. Some thought her transgression consisted in insufficient citation of the literature in the field, while others claimed that her identity was not sufficiently grounded in personal experience of racial and/or gender oppression. In both cases, however, the criticism turned on assumptions of expertise. Notably, Tuvel also makes claims of expertise, on her departmental website, as a specialist in both feminism and the philosophy of race, although she has mostly stayed out of the subsequent back and forth.

My concern, then, is not with the pros and cons of Tuvel’s essay. It is rather with the background assumption of expertise that all parties seem to share. I admit that I am not an expert in these areas; but my claim is more fundamental than that. I do not view myself as an expert in any area of philosophy, at least as the term is now used. I have been introduced on occasion as an expert in the philosophy of interdisciplinarity, but this usually prompts me to note that I am only an expert in the impossibility of expertise. Widespread claims to the contrary, interdisciplinarity is perhaps the last thing that someone can be an expert in. At least, the claim cannot be that someone knows the literature of the subject, since the point of interdisciplinarity, if it is something more than another route to academic success, is more political than epistemic in nature.

A Change in Philosophy?

The attitudes revealed by L’Affaire Tuvel (and examples can be multiplied at will[1]) suggest that we are looking at something more than simply another shift in the philosophical tides. There has always been a Hegelian or Cartesian element within philosophy, where philosophers have made claims of possessing apodictic knowledge. There has also always been a Socratic (or to pick a more recent example, Heideggerian) cohort who have emphasized the interrogative nature of philosophy. Heidegger constantly stresses the need to live within the question, whether the question concerns being or technology. He notes as well that his answers, such as they are, are true only approximately and for the most part—zunächst und zumeist. In this he follows Aristotle, who in the Ethics 1.3 pointed out that some areas of inquiry are simply not susceptible to either precision or certainty of knowledge. To my mind, this is the condition of philosophy.

Grant, then, that there have always been two camps on the possibility of expertise in philosophy. But I suggest that the balance between these two positions has shifted, as philosophy has become a creature of the university. The modern research university has its own institutional philosophy: it treats all knowledge democratically, as consisting of regional domains on a common plane. There is no hierarchy of the disciplines, no higher or lower knowledge, no more general or specific knowledge. Researchers in philosophy and the humanities see themselves as fellow specialists, rather than as intellectuals of a markedly different type than those in the natural and social sciences.

Today these assumptions are so deeply embedded that no one bothers to note them at all. Few seriously propose that philosophers might have a role to play other than being an expert, or that our job might be to provoke rather than to answer. I, however, want to raise that very possibility. And operating under the assumption that naming the two positions might help rally troops to their respective standards, let the two camps be designated as the Socratics and the Anti-Socratics.

Part of the attraction that Science and Technology Studies (STS) has held for me has been its undisciplined nature, and the faint hope that it could take over the Socratic role that philosophy has largely abandoned. Of course, the debate between the Socratics and Anti-Socratics rages in STS as well, framed in terms of Low and High Church STS, those who resist STS becoming a discipline and those who see it as part of the necessary maturation of the field. I admit to feeling the attractions of High Church STS, and philosophy: expertise has its prerogatives, chief among them the security of speaking to other ‘experts’ rather than taking on the dangerous task of working in the wider world. But I think I will throw my lot in with the Socratics for a while longer.

References

Aristotle. The Nichomachean Ethics.  Oxford University Press, 2009. https://goo.gl/XCOhe9

Brubaker, Rogers. “The Uproar Over ‘Transracialism’.” New York Times, May 18, 2017. https://goo.gl/Qz9BKs

Frodeman, Robert and Adam Briggle. Socrates Tenured: The Institutions of 21st-Century Philosophy. London: Rowman & Littlefield, 2016.

Fuller, Steve and James H. Collier. Philosophy, Rhetoric, and the End of Knowledge: A New Beginning for Science and Technology Studies. Mahwah, NJ: Lawrence Erlbaum, 2004.

Hegel, Georg Wilhelm Friedrich. Hegel’s Preface to the “Phenomenology of Spirit”. Translated by Yirmiyahu Yovel. Princeton University Press, 2005.

Schuessler, Jennifer. “A Defense of ‘Transracial’ Identity Roils Philosophy World.” New York Times. May 19, 2017. https://goo.gl/sTwej9

Tuvel, Rebecca. “In Defense of Transracialism.” Hypatia, March 29, 2017. doi: 10.1111/hypa.12327.

[1] See, for instance, https://goo.gl/QiTyOw.

Author Information: Gregory Sandstrom, Independent Researcher, Ottawa Blockchain Group, gregory.sandstrom@gmail.com

Sandstrom, Gregory. “Who Would Live in a Blockchain Society? The Rise of Cryptographically-Enabled Ledger Communities.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 27-41.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3A8

Image credit: Shutterstock

It has math. It has its computer science. It has its cryptography. It has its economics. It has its political and social philosophy. It was this community that I was immediately drawn into.—Vitalik Buterin, Founder of Ethereum (Zug, Toronto, Moscow)

That was a memorable day for me, for it made great changes in me. But it is the same with any life. Imagine one selected day struck out of it, and think how different its course would have been. Pause you who read this, and think for a moment of the long chain of iron or gold, of thorns or flowers, that would never have bound you, but for the formation of the first link on one memorable day. —Charles Dickens, Great Expectations, 1861

Section 1: Introduction to Blockchain

In Blockchain Sociology, you are the sociologist. Even if you didn’t take a ‘degree’ in sociology, or perhaps never even took a course in it, you are a sociologist at least in so far as you are part of society, and indeed of several or many societies. Being part of different societies, you observe them, hold views and opinions about them, collect data about them, shed your own information and actions about them and to them (recorded by empirical data-collection devices) and generally participate in them. In so far as you are that kind of sociologist, you are qualified to be reading this paper, which focuses pop-sociologically on blockchain technology[1] (BC tech) and its potentially major coming impact on societies near and around you, societies in which you are already a member without needing any sociologist to validate that fact.

This is an exploratory paper on BC that will need to be updated as BC tech develops. BC is still an unknown phenomenon for many people; partly fantasy and partly a soon-to-be new reality. In short, a ‘blockchain’ is simply a chronologically arranged online (internet-based) digital chain of ‘blocks’.[2] A ‘blockchain society’ is thus a kind of semi-fictional representation that looks potentially fewer than 15 years into the future, when BC tech will be widely integrated globally and locally into human-social life. In this near future, I imagine a scenario where ‘machines’ or cybernetic organisms (read: killer robots) haven’t taken over people or led to ‘post-humanity’ (Terminator, Bostrom, Kurzweil, et al.) or ‘trans-humanity’ (Fuller). Rather, ‘social machines’ (e.g. BC tech) are used to aid specifically human (homo sapiens sapiens) development.
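To make the ‘chain of blocks’ idea concrete, here is a minimal sketch in Python (a toy illustration of the general idea only, not the data format of any actual BC platform): each block records a timestamp and some transaction data together with the hash of the previous block, and that hash link is what arranges the blocks into a chronological chain.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically with SHA-256."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """Create a block holding a timestamp, data, and the previous block's hash."""
    return {
        "timestamp": time.time(),      # chronological ordering
        "transactions": transactions,  # the 'events' being recorded
        "prev_hash": prev_hash,        # the link that makes it a 'chain'
    }

# A tiny three-block chain, starting from a 'genesis' block.
genesis = make_block(["genesis"], prev_hash="0" * 64)
chain = [genesis]
chain.append(make_block(["Alice pays Bob 5"], block_hash(chain[-1])))
chain.append(make_block(["Bob pays Carol 2"], block_hash(chain[-1])))
```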

It is already expected that people will make more and more frequent use of ‘social machines’ (Berners-Lee 1999), where the administration (automation, calculation, etc.) is done by computers, while the creative work and relationships are driven by (still-human) people.[3] This was the vision of one of the main persons who ‘invented’ the internet (protocols) and helped organise and encourage people globally to build it and use it. We thus look in this paper to the impact that this new BC tech can have on societies and their economies through development based on the rise of social machines. Part I focuses on BC basics, as a general introduction to the topic. Part II will focus more on the implications of living in BC societies and how our economies can prepare for the massive restructuring that the technology promises to deliver across a range of education, business, media, governmental and non-governmental sectors.

Figure 1. Humanising the Digital (source unknown)

Section 2: Blockchain Basics

Here we take an early look at what BC societies will become and focus in particular on ‘ledger communities’ (LCs). These LCs establish the basis on which transaction infrastructures may be built using algorithm-guided transaction protocols. Technical details are left out of this paper. It is speculative and probing in character; a fertile ground for ‘social epistemologists’ to explore beyond mere academic philosophy.

The paper underemphasises the cryptography involved, does not even touch on the important asymmetric cryptographic features of BC, and simply assumes[4] for the time being that the cryptography enables various levels of anonymity that will allow new LCs to form with at least a basic level of identity safety. Between the lines I hint at a sociologist’s concerns about what is almost inevitably going to happen to societies due to BC tech, while reflecting on current global power structures, repeated social-experimental failures of the past and the complex human space-time scales that may eventually be involved in developing BC tech. So it is a half-academic, half-public-understanding approach to BC, prompted by simple fascination with BC tech and what it is going to do to the human world as we now know it.

BC tech brings with it an entirely new set of preconditions and possibilities for participation and membership in societies and communities. The technology has immediate social impact when it is implemented because it is pragmatically based on ‘transactions’, dealing with the buying, selling, trading or sharing of assets and values, that can be chronologically organised, structured and recorded.[5] The value placed on any and all of these digital transactions serves quite ‘naturally’ to develop into a ‘virtual economy’ (Castronova 2006) on local and/or global scales. BC tech thus offers a new kind of societal pattern in which schools, universities, companies, associations, organisations, government ministries, non-government and other community groups may willingly participate, for the betterment of the broader niches within societies that they represent.

Will your future BCs and mine be made “of iron or gold, of thorns or flowers,” in all of the new varieties of human relationships and ‘societies’ that will develop along with them? Let us not be fearful of the unknown, but rather prepare to meet it with both our minds and hearts working ahead of time. In short, this paper, with its McLuhanite focus, is inspired by what the ‘effects’ of BC tech will be when applied across a wide range of social and economic phenomena.[6]

Figure 2. Humanity in the Blockchain (source unknown)

Section 3: Whose Ledgers Do You Share In Today? Whose Will You Share In Tomorrow?

Let us assume that for many people this is an early reading on the topic of BC. Those already familiar with BC tech can skip to Section 5. The following definition of BC by Professor Jeremy Clark offers a good start: BC is “a place [ledger] for storing data that is maintained by a network of nodes without anyone in charge.”[7] As can be seen immediately, the significant social implications[8] involve who will be in charge of what and who must ‘take orders’ from those in charge in any given LC. Is order-giving all digitally automated, and if so, what might this mean for training in leadership and followership?

On the business side, and in the science-technology world of ‘visioneering’ (Cabrera, Davis and Orozco, 2016) ideas, we can easily find a wide variety of definitions of BC,[9] some calling it a “second generation of the internet,”[10] asking if it is “the most important IT invention of our age,” hyping it into a ‘single source of shared truth,’ or suggesting: “We can always trust the blockchain.”[11] Such statements are bound to set off alarms for sociologists. In this case economic sociologists are urgently needed for analysis and policy considerations as we prepare to live in BC societies and economies.[12]

It was a few weeks ago (April 2017), during a call with a sociologist friend, that I shared with her my view on the importance of BC tech for sociology. I said that BC is likely to bring about the single biggest revolution in the history of the field. It was as astonishing to have those words come out of my mouth as it was for her to hear them.[13] Nevertheless, I stand by this assertion, even without daring many predictions about what this incoming ‘neo-sociology’ will be like. Simply put, BCs are going to significantly impact the way people almost everywhere on Earth live, act and behave in so-called ‘smart communities,’ and thus also how scholars and scientists are able to study the societies and economies in which we live.

Let us turn away from any type of promotional hype to the qualified reflexivity of the academic tongue, which, according to Clark, reminds us that “[b]lockchains themselves aren’t a game changer.” Likewise, as HyperLedger executive director Brian Behlendorf cautions, “[t]here are over-inflating expectations [regarding BC] right now,” though along with others[14] he does view it as potentially ‘game-changing’.[15] So what is the promise of BC and BC tech, and how can it be applied to people and societies and used globally?

Figure 3. Varieties of Ledgers (source unknown)

Section 4: Distributed Ledger for Consensus

First, let’s start with what we know fairly clearly about BC tech.[16] BC tech makes use of a digital ‘distributed ledger’ (DL) system. This is a collective (communal) bookkeeping or accounting system that records and copies a history of transaction events in each BC. The specific kinds of transaction that may be suitable for such distributed ledgers in online and mobile communities are still open to much discussion and debate.[17] The data from BC transactions is codified for application in the various algorithms run by the social machine and cryptographised to enable various levels of user anonymity and thus greater freedom of participation. Nevertheless, issues of transparency and access to any given LC’s recorded history and membership are still unresolved and will inevitably continue to challenge the conversations arising over BC tech. In short, BC tech offers a ‘cryptographically secured DL’[18] that provides people in voluntary LCs a new way of engaging in value-oriented relationships, with the aim of making more efficient and equitable exchanges of value.

The DL serves as a kind of organised but decentralised electronic data storage, arranged as an informational bulletin board accessible to all members of the LC. Users of a BC tech service volunteer to join a LC, wherein all participants share ‘one book’ with distributed access across the system. This is intended to create a kind of ‘immutable’ social history that cannot be corrupted by after-the-fact editing of events, because such editing always leaves a noticeable trail that can be tracked to the source of the corruption. The DL system thus aims to provide a so-called ‘golden record’ of ‘end-to-end’ (E2E) verifiability, wherein validation of transactions is accepted as completed and irreversible by all members of a LC.
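Continuing the toy Python sketch from Section 1 (reusing its block_hash and chain), the ‘noticeable trail’ is easy to demonstrate: editing any historical block changes that block’s hash, which breaks every link after it, so a verifier can point to exactly where the history was corrupted.

```python
import copy

def chain_is_valid(chain: list) -> bool:
    """Check that each block's prev_hash matches its predecessor's actual hash."""
    for prev, current in zip(chain, chain[1:]):
        if current["prev_hash"] != block_hash(prev):
            return False  # the broken link marks the point of corruption
    return True

tampered = copy.deepcopy(chain)
tampered[1]["transactions"] = ["Alice pays Bob 500"]  # after-the-fact edit

assert chain_is_valid(chain)         # the honest history verifies
assert not chain_is_valid(tampered)  # the edited history is detectable
```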

In common parlance: if you don’t want everyone to know everything about you and your possessions, but you want the people you choose to know enough, in confidence, and you want to know whether they are willing to share, trade or sell ‘in-kind’ knowledge, experience, ideas, votes or ratings with you, then BC tech will be made to deliver this. The technology is meant to lead you to, and to facilitate, transactions based on shared values and aims in a LC. Nevertheless, you must have the creative personal interest and volunteer to join that LC in the first place.

The LC’s transactions (e.g. purchase, sale, trade, bid, negotiation, auction, vote, authorisation, recommendation, special pass, etc.) are completed and verified by the participants themselves and by any other participants in the transaction. A transaction is completed via a ‘digital signature’ that each person receives with their membership registration and uses to verify their participation in any and all transactions. Participants in a LC receive ‘signing keys’ for transactions, which include both public and private verification options. The history of these total transactions in a LC comprises a transactional database that confirms the values shared and distributed among participants.
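As a rough sketch of the signing-key mechanics (here using the third-party Python cryptography package and an Ed25519 key pair; actual BC platforms vary in their signature schemes), the private key produces signatures only its holder can make, while the public key lets anyone in the LC verify them:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each participant holds a private signing key; the public half is shared.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

transaction = b"Alice transfers asset #42 to Bob"
signature = signing_key.sign(transaction)  # only the key holder can produce this

# Any LC member can check the signature against the signer's public key;
# a forged or altered transaction fails verification.
try:
    verify_key.verify(signature, transaction)
    print("transaction verified")
except InvalidSignature:
    print("verification failed")
```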

BC tech uses time-stamp validation that records the time, date and participants of all transactions in the LC. This enables participants in a LC to move various types of digital value (synthetic assets) using a peer-to-peer (P2P) network. The architecture of the P2P network is built around ‘consensus algorithms’ that facilitate the transactions in a LC, linked to whatever ‘real life’ institutions, assets and values are involved, including all of the associated services already used by participants in a decentralised network. The ‘service centre’ in such a network becomes an oxymoron, while service agents will still populate any LC with good service to its members in mind.

The notion that a ‘consensus’ can be reached in a community based on algorithms (i.e. mechanistically) is one of the most contentious issues in a BC society and also one of the most fascinating, as it could lead to many new social and economic (niche) communities and configurations. If one doesn’t agree to the rules and regulations of a particular LC, then one simply will not (and even cannot!) join it, and thus cannot be forced into accepting any consensus from that community, at least in principle. Instead, competing BCs may temporarily arise based on different sets of rules and regulations, which individuals and groups of people will be able to join and want to join, indeed, will feel compelled to join because many people they know will join for mutual benefit. Thus, the spectre of globally widespread BCs must eventually gain the attention of any serious BC sociologist.
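One way to picture ‘consensus by algorithm’ in miniature (a drastic simplification, loosely modeled on the longest-valid-chain rule of Bitcoin-style BCs; real consensus algorithms such as proof-of-work or proof-of-stake involve far more): each node checks the candidate histories it hears about and adopts the longest valid one, so honest nodes converge on a single shared ledger without anyone in charge. The sketch below reuses chain and chain_is_valid from the earlier toy examples.

```python
def choose_consensus_chain(candidate_chains: list) -> list:
    """Toy consensus rule: adopt the longest chain whose hash links all verify."""
    valid = [c for c in candidate_chains if chain_is_valid(c)]
    return max(valid, key=len)

# Three nodes hold copies of differing lengths; gossip spreads the candidates
# and every node independently converges on the same shared history.
node_a, node_b, node_c = chain, chain[:2], chain[:1]
shared_history = choose_consensus_chain([node_a, node_b, node_c])
assert shared_history == node_a
```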

Section 5: Sustainable Development, Markets, Morals & Distributions

The largest area of application for BCs is what has become known as ‘sustainable development.’ BCs serve sustainable development goals by providing an opportunity to automate the proportional distribution of value contributions (i.e. human, natural, financial, cultural and other ‘resources’) to ‘projects’ built upon transactions that are guided, and thus in a sense ‘morally’ conditioned, based on the well-being of the LC. Anything with ‘value’ that can be bought, sold, traded or shared publicly counts as part of the domain for BC tech to tackle, with higher efficiency, justice and equitability in transactions than pre-BC systems. BC tech thus redefines what a ‘market economy’ means, with higher sustainability and improved proportionality at its core, because of its new system for measuring, owning and allotting value in social economics, in a way that value can be redistributed quickly within a system. As one of the primary ideologies governing public and private policy now in Canada (where the author is writing from) and also globally is that of ‘sustainable development’ (contrast: ‘millennial evolution’), it has become engrained in the notion of BC sociology and economics from the start.

Thus BC tech in one sense offers a turn of focus away from N. Machiavelli’s view of the ‘state,’ or what we now call the ‘nation-state,’ toward a new kind of community attitude or ‘social epistemology,’ specifically voiced within a LC, which differs radically from the outdated notions of ‘communism’ from the 20th century that dwindle yet continue to the current day. On the level of political economy, BC provides an alternative to Machiavellian (western autocratic) individualism by definition, in enabling a more ‘trusting’ attitude in communal or group situations that highlight value transactions in those communities. Thus the new level of BC ‘community morality’ will become a kind of boundary wall for inclusion in or exclusion from a LC, wherein, by the will of each BC’s voluntary rules, the policy arises simply that ‘dictators are not allowed.’ This move pushes actively into a sociological void, against strong-arm ‘hard power’ tactics in negotiations and politics, without necessarily diminishing the creative freedom and moral credibility of active individuals in a LC.

When a BC is built, established and fuelled with members, it must then guarantee at least a minimal level of anonymity (as volunteered when joining the LC and agreeing to the rules and regulations of the Genesis Block – Part II) to enable continued participation. The flip-side is that BC requires maximum transparency, according to the preferences and automations that people choose in their respective LCs. Thus, the architecture of any BC, and the rules and regulations hard-coded into the Genesis Block, are of crucial significance to the success or failure of any LC. Likewise, if the moderation and service provision for a LC is not maintained appropriately, then it will not generate ‘stickiness’ or lead to growth, but rather to lack of commitment and decline in usage. Thus in part we can answer the questions stated above in Section 3.

Section 6: Blockchain Applications

I hesitate to open this topic much because it is currently filled with both speculation and also practical experimentation. We are witnessing the emergence of BC tech on a global scale and the applications grow on an almost daily basis, so I see little point in categorising them now. The Blockchain Research Institute in Toronto is currently gathering the largest index of BC ‘use cases’ scheduled for completion by autumn 2018. The Delft University of Technology has a BC laboratory,[19] as do many other major universities. Governments have been experimenting with and discussing BC tech as well as a variety of new tech identifiers in business and finance (see references).

FinTech has largely focussed on developing cybercurrencies (Bitcoin, Ripple, etc.) for a variety of practical reasons, e.g. to limit ‘double-spending’ or to monitor debt repayment cases more closely. Another aim of some proponents of BC in FinTech comes from the effort on behalf of citizen-consumers to eliminate overseers in financial transactions and thus to create a new kind of money ‘distribution.’ With companies like Revolut and PayPal already working in the transfer of funds, BC tech looks to expand the reach of automated financial transactions throughout society as social machines start to replace unneeded human interventions in the system. Example areas where work is already being done to integrate BCs include private stock trading, letters of credit, crowd funding, interbank loans, grants and many more.
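
As a toy illustration of the ‘double-spending’ problem just mentioned, the sketch below tracks which transaction outputs have already been consumed; in real systems this state is agreed across the network by consensus, and the identifiers here are purely illustrative.

```python
# Toy double-spend check: each transaction output may be consumed once.
spent_outputs = set()   # in real systems this state is agreed by network consensus

def try_spend(output_id):
    """Accept a spend only if the referenced output is still unspent."""
    if output_id in spent_outputs:
        return False                 # double-spend attempt rejected
    spent_outputs.add(output_id)
    return True

assert try_spend("tx42:0") is True   # first spend succeeds
assert try_spend("tx42:0") is False  # spending the same output again fails
```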

The extremely high social value of ‘smart contracts’ (Szabo, Ethereum, more below) will combine with increased exposure to public voting in governance, social organisation and sustainable development. This may lead to more transparent government accountability and a fundamentally different way of determining public service election results and sustainable development policy implementation. Some optimism has been expressed that BC tech will lead to reduced voter fraud, yet the solutions posed also come along with increased mass hacking concerns. The BC feature of multi-party live or delayed computation enables real-time voting updates on bulletin boards [20] and also tentative voting (taking the social temperature first), which allows a person to change their vote before an election (e.g. based on an expected or adjusted chance-to-win). Little more needs to be said to provoke interest and controversy; once BC tech and democracy are mentioned in the same sentence, fireworks often ensue.
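
A hedged sketch of the ‘tentative voting’ idea described above: an append-only bulletin board where the tally reads only each voter’s most recent entry. Real schemes add cryptographic commitments and anonymity; the names here are illustrative assumptions.

```python
# Append-only bulletin board with revisable ('tentative') votes.
from collections import Counter

bulletin_board = []   # entries are only ever appended, never edited in place

def cast_vote(voter_id, choice):
    """Post a (voter, choice) entry; a later entry supersedes an earlier one."""
    bulletin_board.append((voter_id, choice))

def tally():
    """Count only each voter's most recent entry on the board."""
    latest = {voter: choice for voter, choice in bulletin_board}
    return Counter(latest.values())

cast_vote("v1", "A")
cast_vote("v2", "B")
cast_vote("v1", "B")   # v1 changes their vote before the deadline
print(tally())         # Counter({'B': 2})
```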

BC tech has so many potential impacts in areas such as bargaining, auctions and estates, vehicle safety records, legal cost-sharing, real estate, mortgages, securities, lotteries, etc., that it is somewhat daunting to conceive how all of this is going to come about even gradually, let alone within the next decade as some proponents are suggesting. BC tech, in its early formulations, can thus be seen as a mathematical Nash equilibrium on humanistic socio-economic steroids.

Figure 4. Traditional and Blockchain Networks (source unknown)

Section 7: Why? So What? Who Cares?

The reorganisation of society that this technology has the potential to enable and indeed, in some ways to require, is simply beyond significant; BC is the massive shift that people have been waiting for since the internet arrived, and even before it. The wide-ranging implications[21] of BC tech should not be discussed, however, without considerable caution about how it will influence the notion of ‘neo-liberal democracy’ as the reigning dominant ideology in the ‘western world.’ Even talk of ‘innovating capitalism’ (Bheemaiah 2017) is bound to raise some people’s ire. Leftist-leaning political thinkers, especially, ought to have started drooling as soon as they heard the word ‘distributed,’ because there is a quick path to ‘redistribution’ wherever it substitutes for ‘competition’ in the social economics literature. Yet rightist-leaning political thinkers have just as much to drool about, as BC tech empowers people to ‘become their own currency’ and in that sense to ‘live entrepreneurially’ through voluntary participation in LCs.

When one ramps up one’s concern with the political economy of BC societies, one eventually realises that, by using cryptography, the new cryptocurrencies undermine the future possibility for governments to control the money supply of their citizens. By creating an alternative to fiat money, cryptocurrencies may change the world by themselves, even without BC tech. Yet it is BC tech that facilitates the growth of ‘virtual economies’ in a sustainable way. BC tech thus makes it possible for cryptocurrencies to arrive and flourish as actual, widely used currencies, which is what LCs are seeking in the use of their own community-based cryptocurrencies. This feature alone suggests the possibility of shifting power structures in societies away from central banks and financiers, towards decentralised voluntary communities of value. The sociological implications of this are, not surprisingly, incredibly difficult to predict and indeed require some kind of ‘new sociological imagination’ (Fuller 2006) beyond what is currently available.

There still seems to be something significant missing in the BC ecosystem, however, rather than something crucial broken that can be fixed. Optimism in the BC ecosystem now rides a big wave, and people want to know more about how to get involved in BC tech as builders, facilitators, investors, coders, surveyors, etc. What has been missing so far in most BC journalism, and even in most of the academic/scholarly contributions, is broader exploration of the application and relevance of BC tech to societies, their laws, economies and cultures. We may nevertheless start to investigate the growth of BC societies by tracing the rise of codified transaction infrastructures that create a new ‘web’ of LCs.

The proliferation of LCs will create the first example of societies based widely on ‘smart contracts,’ which have become a symbol of justice-seeking, anti-exploitative, democratic economic transactions that are automatically enforced by LC rules and regulations. Already in the mid-1990s, Nick Szabo defined a ‘smart contract’ as “a computerized transaction protocol that executes the terms of a contract” (1994; see Tapscott and Tapscott, 2016). He calls a smart contract “a set of promises, specified in digital form, including protocols within which the parties perform on these promises” (Szabo 1996). One of the applications of such contracts is to establish an ownership history of an asset, so that potential new buyers of that asset can see its production genesis and value history. Another is to assist in supply chain management, in order to improve efficiency and traceability and thus to reduce delays and errors. By distributing data from voluntary transactions across the system, the social machines improve the efficiency of the human-driven system in a non-centralised way.
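
Production smart contracts typically run on platforms such as Ethereum and are written in languages like Solidity; purely to illustrate Szabo’s ‘promises specified in digital form,’ here is a minimal Python sketch of an escrow-style contract whose terms execute automatically once its conditions are met (all names are assumptions, not any platform’s API).

```python
# Illustrative escrow 'smart contract': the agreed terms execute
# automatically once both parties confirm, with no third-party overseer.
class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.confirmations = set()
        self.settled = False

    def confirm(self, party):
        """Record a confirmation; settle as soon as the terms are met."""
        if party in (self.buyer, self.seller):
            self.confirmations.add(party)
        if not self.settled and {self.buyer, self.seller} <= self.confirmations:
            self.settled = True
            print("release", self.amount, "to", self.seller)  # automatic execution

deal = EscrowContract("alice", "bob", 5)
deal.confirm("alice")   # nothing happens yet
deal.confirm("bob")     # both confirmed: funds release automatically
```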

The over-inflated view of a need for constant decentralisation is often linked to promises of lower costs for ‘consumers’ due to the elimination of many artificial and unnecessary third-party (middleman) fees. This includes the siphoning of company funds into private hands, sometimes illegally, at the cost of the industry or community, which LCs will aim to eliminate. Instead of ‘middlemen,’ BC tech necessitates a whole lot more ‘middle people,’ in the sense of ‘mediators’ and ‘introducers’ who are not necessarily salespersons. The position of a professional ‘BC introducer’ seems likely to be a functional economic role in the era of BC tech and LCs, while a percentage of the transaction costs facilitated by the LC will determine the salaries and risks of its new financial facilitators.

BC tech will also soon be brought to the forefront of global news when Palantir’s work on ‘Investigative Case Management’ (aka surveillance) for the USA’s Department of Homeland Security reaches its completed first stage in autumn 2017. When that happens, a lot of people are going to start caring about BC as it is applied to a hot-button political issue in the USA. Technology entrepreneur Peter Thiel has been integrating BC into current surveillance methods and practices, which promises to fundamentally reshape the USA’s immigration and deportation programs. (More on this to come in Blockchain Sociology: Part II.)

Section 8: Preliminary Conclusion and Invitation

Why would a sociologist or social epistemologist take interest in BC tech? The answer, in short, is because those fields need experiments to validate their often highly theoretical and abstract ideas, i.e. to put them to the test. Digitally recorded transactions in a ‘blockchain’ can provide a broader domain for research and experimentation than anything that has been offered in the history of those fields, due to the current power of computing. It is a basic question of which sociologists and social epistemologists are going to be early adopters of the technology and which will be laggards; there really isn’t any question of ‘if’ anymore with regard to eventual adoption, it is simply a question of ‘when’.

We see such a shift already with the massive growth of the video game industry, where the habits and choices of players can be studied throughout the course of their time ‘plugged in.’ The same will be the case with LCs, because people will be attracted to participate in them as they feed interests already present in the participants and thus provide an incentive to engage with like-minded or similarly inclined people. We thus have a potentially prolific new technological resource, the early phase of a ‘social machine,’ that we can use for potential research and development of the BC idea. Our challenge in SSH is how to humanise this technology, so that we do not lose from humanity more than we gain by it (Postman’s dictum).

A major feature of BC tech is trust: if you feel you can trust a community (online or offline) to hold your best interests and personhood in mind on any given topic or activity, then it tends to be easier to ‘deal’ with it, including engaging with others in transactions of value. BC tech enables this in a new, quasi-manufactured way through pre-agreed smart contracts that provide higher accountability, transparency and anonymity. Based on voluntary value transactions that are automated using agreement algorithms, BC tech has been suggested as a way to produce ‘upfront compliancy,’ which can protect parties from various risks involving other members in the LC. These include agreement violations, wherein BC tech can pinpoint the source of a broken contract or shady deal from within a chain of otherwise difficult-to-trace transactions (a toy illustration follows below). In short, in agreeing to join a LC, one agrees to abide by the rules, which must come with in-house punishment for (attempted) violators or offenders; this is the cost of automated convenience that serves our decision-making capacities.
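
Continuing the hash-chain sketch given earlier, the following toy function shows in what sense BC tech can ‘pinpoint the source’ of a broken record: it walks the chain and reports the first block whose stored hash no longer matches its contents, or whose link to its predecessor is broken (again an illustrative sketch, reusing the assumed block format from above).

```python
# Toy tamper localisation over the hash chain sketched earlier.
import hashlib
import json

def block_hash(block):
    """Recompute a block's hash from its sealed fields."""
    sealed = {k: block[k] for k in ("time", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(sealed, sort_keys=True).encode()).hexdigest()

def find_tampered(chain):
    """Return the index of the first invalid block, or None if the chain is intact."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return i                                  # contents changed after sealing
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return i                                  # link to predecessor broken
    return None
```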

Figure 5. Why You Can’t Cheat at Bitcoin (source unknown)

Already under the new US government regime we are seeing a debate over the ‘unmasking’ of citizens by politicians and national or international ‘intelligence’ agencies. And soon we will witness a major overhaul in migration and policing matters related to immigrants with the help of BC tech. We cannot therefore endorse BC as simply roses without thorns, which is a large factor behind writing this short paper on BC tech and sociology. While anonymising citizens for their own protection could benefit some citizens who feel isolated, marginalised or disempowered in the current social systems they live in, it could also potentially become a weapon of control over members within LCs, or over entire LCs themselves, if the wrong Genesis Block framework is patterned into the coding. It could thus lead to the dehumanisation of people who, for whatever variety of reasons, aren’t able to choose their own preferred pattern of anonymity due to internal or external LC pressures, and who thus become ‘orphans’ in the new BC society.

Is BC, as tech entrepreneur and BC proponent Alex Tapscott suggests, “something that could change basically every industry in the world”?[22] The government of Canada’s Department of Innovation, Science and Economic Development funded the research for the Tapscotts’ recent 2017 BC Corridor report. They are currently raising the flag for major short-term transformation based on BC tech, and are moving to back up and stabilise this claim with research and development projects, calling for research into use cases and the drawing up of white papers. This leads me to believe BC is on the cutting edge of public-private partnerships, flexibly scalable networks and sustainable development, as we learn about the impact of BC tech on society, economics and culture as the 21st century moves forward.

References

Berners-Lee, Tim with Mark Fischetti. Weaving the Web. New York: HarperCollins, 1999.

Bheemaiah, Kariappa. The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory. Paris: Apress, 2017.

Blockchain Research Institute. “Blockchain Transformations.” The Tapscott Group, 2017.

Buterin, Vitalik. “Ethereum White Paper: A Next-Generation Smart Contract and Decentralized Application Platform.” Ethereum, 2014.

Cabrera, Laura, William Davis, Melissa Orozco. “Visioneering Our Future.” In The Future of Social Epistemology: A Collective Vision, edited by James Collier, 199-218. London: Rowman & Littlefield, 2016.

Castronova, Edward. “Synthetic Economies and the Social Question.” First Monday, Special Issue no. 7 (2006). http://www.firstmonday.org/ISSUES/special11_9/castronova/index.html

Castronova, Edward. “Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier.” In The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen Tekinbaş and Eric Zimmerman, 814–863. Cambridge, MA: MIT Press, 2006a.

Castronova, Edward. “On Virtual Economies.” Game Studies 3, no. 2 (2003). http://www.gamestudies.org/0302/castronova/

Clark, Jeremy. “Blockchain Based Voting: Potential and Limitations.” MIT talk, 2016.

De Filippi, Primavera. “What Blockchain Means for the Sharing Economy.” 2017. https://hbr.org/2017/03/what-blockchain-means-for-the-sharing-economy

del Castillo, Michael. “The IMF Just Finished its First ‘High Level’ Meeting on Blockchain.” CoinDesk, April 19, 2017. http://www.coindesk.com/imf-just-finished-first-high-level-meeting-blockchain/

Fuller, Steve. The New Sociological Imagination. London: Sage, 2006.

Gates Foundation. “Level One Project: Designing a New System for Financial Inclusion.” 2017. www.leveloneproject.org

Iansiti, Marco and Karim R. Lakhani. “The Truth about Blockchain.” Harvard Business Review, January–February (2017): 118–127. https://hbr.org/2017/01/the-truth-about-blockchain

Knight, Will. “The Technology Behind Bitcoin Is Shaking Up Much More Than Money.” MIT Technology Review, 2017. https://www.technologyreview.com/s/604148/the-technology-behind-bitcoin-is-shaking-up-much-more-than-money/

Koven, Jackie Burns. “Block the Vote: Could Blockchain Technology Cybersecure Elections?” 2016. https://www.forbes.com/sites/realspin/2016/08/30/block-the-vote-could-blockchain-technology-cybersecure-elections/#43a111292ab3

Lamport, Leslie, Robert Shostak and Marshall Pease. “The Byzantine Generals Problem.” ACM Transactions on Programming Languages and Systems 4, no. 3 (1982): 382–401.

Lehdonvirta, Vili and Edward Castronova. Virtual Economies: Design and Analysis. Cambridge, MA: MIT Press, 2014.

Nagpal, Rohas. “2020 AD – Planet Earth on a Blockchain.” 2017. https://www.linkedin.com/pulse/2020-ad-planet-earth-blockchain-rohas-nagpal

Nakamoto, Satoshi. “Bitcoin: A Peer-to-Peer Electronic Cash System.” 2008. https://bitcoin.org/bitcoin.pdf

Naughton, John. “Is Blockchain the Most Important IT Invention of our Age?” 2016. https://www.theguardian.com/commentisfree/2016/jan/24/blockchain-bitcoin-technology-most-important-tech-invention-of-our-age-sir-mark-walport

O’Byrne, W. Ian. “What is Blockchain?” 2016. https://medium.com/badge-chain/what-is-blockchain-5e4498f05c20

Orcutt, Mike. “Why Bitcoin Could Be Much More Than a Currency.” 2015. https://www.technologyreview.com/s/537246/why-bitcoin-could-be-much-more-than-a-currency/

Pilkington, Marc. “Blockchain Technology: Principles and Applications.” In Research Handbook on Digital Transformations, edited by F. Xavier Olleros and Majlinda Zhegu, 225-253. Cheltenham, UK: Edward Elgar, 2016.

Reijers, Wessel and Mark Coeckelbergh. “The Blockchain as a Narrative Technology: Investigating the Social Ontology and Normative Configurations of Cryptocurrencies.” Philosophy & Technology (2016): 1-28. https://link.springer.com/article/10.1007/s13347-016-0239-x

Simonite, Tom. “What Bitcoin Is, and Why It Matters.” 2011. https://www.technologyreview.com/s/424091/what-bitcoin-is-and-why-it-matters/

Smart, Paul R. and Nigel R. Shadbolt. “Social Machines.” ePrints Soton: University of Southampton (2014). https://eprints.soton.ac.uk/361399/1/SocialMachinesv8.pdf

Stagars, Manuel. “Blockchain and Us.” 2017. https://www.youtube.com/watch?v=2iF73cybTBs / http://blockchain-documentary.com/

Swan, Melanie. Blockchain: Blueprint for a New Economy. Sebastopol, CA: O’Reilly, 2015.

Szabo, Nick. “Smart Contracts: Building Blocks for Digital Markets.” 1996. http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart_contracts_2.html

Tapscott, Alex. “Blockchain is a Disruption We Simply Have to Embrace.” The Globe and Mail, May 9, 2016. http://www.theglobeandmail.com/report-on-business/rob-commentary/blockchain-is-a-disruption-we-simply-have-to-embrace/article29936789/

Tapscott, Don and Alex Tapscott. Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business and the World. New York: Penguin, 2016.

Tapscott, Don and Alex Tapscott. “The Blockchain Corridor: Building an Innovation Economy in the 2nd Era of the Internet.” The Tapscott Group, 2017.

UK Government Chief Scientific Adviser. “Distributed Ledger Technology: Beyond Block Chain.” Government Office for Science, 2016.

Willms, Jessie. “Don Tapscott Announces International Blockchain Research Institute.” Bitcoin Magazine, 2017. https://bitcoinmagazine.com/articles/don-tapscott-announces-international-blockchain-research-institute/

[1] First launched under the brand ‘Bitcoin’ by the unknown person ‘Satoshi Nakamoto’ on 3 January 2009.

[2] https://bitcoin.org/en/glossary/block-chain

[3] Berners-Lee writes of “interconnected groups of people acting as if they shared a larger intuitive brain,” defining social machines on the internet as “processes in which the people do the creative work and the machine does the administration.” (1999) Smart et al. provide an updated version: “Social Machines are Web-based socio-technical systems in which the human and technological elements play the role of participant machinery with respect to the mechanistic realisation of system level processes” (2013).

[4] A rather big assumption that deals with security, identity, justice, and other controversial issues.

[5] “If the Internet was the first native digital format for information, then blockchain is the first native digital format for value—a new medium for money. It acts as ledger of accounts, database, notary, sentry and clearing house, all by consensus. And it holds the potential to make financial markets radically more efficient, secure, inclusive and transparent.”—Alex Tapscott http://www.theglobeandmail.com/report-on-business/rob-commentary/blockchain-is-a-disruption-we-simply-have-to-embrace/article29936789/

[6] “Blockchain is a foundational technology: It has the potential to create new foundations for our economic and social systems” (Iansiti and Lakhani, 2017).

[7] Jeremy Clark, Concordia University, MIT talk, 2016. https://users.encs.concordia.ca/~clark/talks/2016_edemocracy.pdf

[8] “[Blockchain] is a very important, new technology that could have implications for the way in which transactions are handled throughout the financial system.”—Janet Yellen (USA Federal Reserve Chairwoman).

[9] “A blockchain is a write-only database dispersed over a network of interconnected computers that uses cryptography (the computerized encoding and decoding of information) to create a tamperproof public record of transactions. Blockchain technology is transparent, secure and decentralised, meaning no central actor can alter the public record. In addition, financial transactions carried out on blockchains are cheaper and faster than those performed by traditional financial institutions.”—Government of Canada (“Blockchain Technology Brief,” 2016, page 1) / “A blockchain is a peer-to-peer distributed ledger forged by consensus, combined with a system for ‘smart contracts’ and other assistive technologies. Together these can be used to build a new generation of transactional applications that establishes trust, accountability and transparency at their core, while streamlining business processes and legal constraints” (https://www.hyperledger.org/about). / “A blockchain is a decentralised, online record-keeping system, or ledger, maintained by a network of computers that verify and record transactions using established cryptographic techniques.”—Mike Orcutt (“Congress Takes Blockchain 101.” https://www.technologyreview.com/s/603820/congress-takes-blockchain-101/). / “A blockchain is a type of distributed ledger, comprised of unchangable, digitally recorded data in packages called blocks (rather like collating them on to a single sheet of paper). Each block is then ‘chained’ to the next block, using a cryptographic signature. This allows block chains to be used like a ledger, which can be shared and accessed by anyone with the appropriate permissions.” (http://www.blockchaintechnologies.com/blockchain-glossary) / Blockchain is “a magic computer that anyone can upload programs to and leave the programs to self-execute, where the current and all previous states of every program are always publicly visible, and which carries a very strong cryptoeconomically secured guarantee that programs running on the chain will continue to execute in exactly the way that the blockchain protocol specifies.”—Vitalik Buterin (“Visions, Part 1: The Value of Blockchain Technology.” Ethereum Blog. https://blog.ethereum.org/2015/04/13/visions-part-1the-value-of-blockchain-technology/).

[10] Don & Alex Tapscott 2017, page 4.

[11] https://www.youtube.com/watch?v=oSP-taqLWPQ (at 02:40)

[12] Most of the literature so far focuses on BC economies and I am not aware of any papers so far written about BC societies.

[13] As soon as I mentioned that Peter Thiel’s Palantir is building a BC for the USA’s Department of Homeland Security (more on this in Part II), she realised the seriousness of the endeavour.

[14] “Blockchain [technology] has the potential to address some of the world’s most pressing challenges.”— Ross Mauri, IBM, quoted in Jessie Willms, “Don Tapscott Announces International Blockchain Research Institute”, Bitcoin Magazine, March 17, 2017. https://bitcoinmagazine.com/articles/don-tapscott-announces-international-blockchain-research-institute/.

[15] “There are plenty of reasons to be skeptical, and there’s way too much hype,” [Brian Behlendorf] said. “But it’s a real opportunity to change the rules of the game.” Brian Behlendorf quoted in Will Knight, “The Technology Behind Bitcoin Is Shaking Up Much More Than Money.” MIT Technology Review, 2017. https://www.technologyreview.com/s/604148/the-technology-behind-bitcoin-is-shaking-up-much-more-than-money/

[16] “The term ‘blockchain technology’ means distributed ledger technology that uses a consensus of replicated, shared, and synchronized digital data that is geographically spread across multiple digital systems.” (https://github.com/InstituteOfCommerce/International/wiki/Definitions:-Blockchain-&-Smart-Contract)

[17] http://www.multichain.com/blog/2015/11/avoiding-pointless-blockchain-project/ / https://medium.com/badge-chain/avoiding-pointless-open-badges-related-blockchain-projects-64fb3ddc240c

[18] HT: Vytautas Kaseta (Private conversation, Vilnius, March 2017).

[19] http://www.blockchain-lab.org/

[20] “All cryptographic voting systems use a ‘bulletin board:’ an append-only broadcast channel (sometimes anonymous) … Blockchains are the best bulletin boards we have ever seen, better than purpose-build ones (esp. on equivocation).”—Jeremy Clark (“Blockchain based Voting: Potential and Limitations,” MIT talk, 2016).

[21] “Anything that would benefit from having information stored in an unchangeable database that is not owned or controlled by any single entity and yet is accessible from anywhere at any time.”—Mike Orcutt (https://www.technologyreview.com/s/537246/why-bitcoin-could-be-much-more-than-a-currency/)

[22] http://www.ottawabullion.com/the-imf-just-finished-its-first-high-level-meeting-on-blockchain/

Author Information: Ben Ross, University of North Texas, benjamin.ross@my.unt.edu

Ross, Ben. “Between Poison and Remedy: Transhumanism as Pharmakon.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 23-26.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3zU

Please refer to:

Image credit: Jennifer Boyer, via flickr

As a Millennial, I have the luxury of being able to ask in all seriousness, “Will I be the first generation safe from death by old age?” While the prospects of answering in the affirmative may be dim, they are not preposterous. The idea that such a question can even be asked with sincerity, however, testifies to transhumanism’s reach into the cultural imagination.

But what is transhumanism? Until now, we have failed to answer in the appropriate way, remaining content to describe its possible technological manifestations or trace its historical development. Therefore, I would like to propose an ontology of transhumanism. When philosophers speak of ontologies, they are asking a basic question about the being of a thing—what is its essence? I suggest that transhumanism is best understood as a pharmakon.

Transhumanism as a Pharmakon

Derrida points out in his essay “Plato’s Pharmacy” that while pharmakon can be translated as “drug,” it means both “remedy” and “poison.” It is an ambiguous in-between, containing opposite definitions that can both be true depending on the context. As Michael Rinella notes, hemlock, most famous for being the poison that killed Socrates, when taken in smaller doses induces “delirium and excitement on the one hand,” yet it can be “a powerful sedative on the other” (160). Rinella also goes on to say that there are more than two meanings to the term. While the word was used to denote a drug, Plato “used pharmakon to mean a host of other things, such as pictorial color, painter’s pigment, cosmetic application, perfume, magical talisman, and recreational intoxicant.” Nevertheless, Rinella makes the crucial remark that “One pharmakon might be prescribed as a remedy for another pharmakon, in an attempt to restore to its previous state an identity effaced when intoxicant turned toxic” (237-238). It is precisely this “two-in-one” aspect of the application of a pharmakon that reveals it to be the essence of transhumanism; it can be both poison and remedy.

To further this analysis, consider “super longevity,” which is the subset of transhumanism concerned with avoiding death. As Harari writes in Homo Deus, “Modern science and modern culture…don’t think of death as a metaphysical mystery…for modern people death is a technical problem that we can and should solve.” After all, he declares, “Humans always die due to some technical glitch” (22). These technical glitches, i.e. when one’s heart ceases to pump blood, are the bane of researchers like Aubrey de Grey, and fixing them forms the focus of his “Strategies for Engineered Negligible Senescence.” There is nothing in de Grey’s approach to suggest that there is any human technical problem that does not potentially have a human technical solution. De Grey’s techno-optimism represents the “remedy-aspect” of transhumanism as a view in which any problems—even those caused by technology—can be solved by technology.

As a “remedy,” transhumanism is based on a faith in technological progress, despite such progress being uneven, with beneficial effects that are not immediately apparent. For example, even if de Grey’s research does not result in the “cure” for death, his insight into anti-aging techniques and the resulting applications still have the potential to improve a person’s quality of life. This reflects Max More’s definition of transhumanism as “The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities” (3).

Importantly, More’s definition emphasizes transcendent enhancement, and it is this desire to be “upgraded” which distinguishes transhumanism. An illustration of the emergence of the upgrade mentality can be seen in the history of plastic surgery. Harari writes that while modern plastic surgery was born during the First World War as a treatment to repair facial injuries, upon the war’s end, surgeons found that the same techniques could be applied not to damaged noses, but to “ugly” ones, and “though plastic surgery continued to help the sick and wounded…it devoted increasing attention to upgrading the healthy” (52). Through its secondary use as an elective surgery of enhancement rather than exclusively as a technique for healing, one can see an example of the evolution of transhumanist philosophy out of medical philosophy—if the technology exists to change one’s face (and they have the money for it), a person should be morphologically free to take advantage of the enhancing capabilities of such a procedure.

However, to take a view of a person only as “waiting to be upgraded” marks the genesis of the “poison-aspect” of transhumanism as a pharmakon. One need not look farther than Martin Heidegger to find an account of this danger. In his 1954 essay, “The Question Concerning Technology,” Heidegger suggests that the threat of technology is ge-stell, or “enframing,” the way in which technology reveals the world to us primarily as a stock of resources to be manipulated. For him, the “threat” is not a technical problem for which there is a technical solution, but rather it is an ontological condition from which we can be saved—a condition which prevents us from seeing the world in any other way. Transhumanism in its “poison mode,” then, is the technological understanding of being—a singular way of viewing the world as a resource waiting to be enhanced. And what is problematic is that this way of revealing the world comes to dominate all others. In other words, the technological understanding of being comes to be the understanding of being.

However, a careful reading of Heidegger’s essay suggests that it is not a techno-pessimist’s manifesto. Technology has pearls concealed within its perils. Heidegger suggests as much when he quotes Hölderlin, “But where danger is, grows the saving power also” (333). Heidegger is asking the reader to avoid either/or dichotomous thinking about the essence of technology as something that is either dangerous or helpful, and instead to see it as a two-in-one. He goes to great lengths to point out that the “saving power” of technology, which is to say, of transhumanism, is that its essence is ambiguous—it is a pharmakon. Thus, the self-same instrumentalization that threatens to narrow our understanding of being also has the power to save us and force a consideration of new ways of being, and most importantly for Heidegger, new meanings of being.

Curing Death?

A transhumanist, and therefore pharmacological, take on Heidegger’s admonishment might be something as follows: In the future it is possible that a “cure” for death will threaten what we now know as death as a source of meaning in society—especially as it relates to a Christian heaven in which one yearns to spend an eternity, sans mortal coil. While the arrival of a death-cure will prove to be “poison” for a traditional understanding of Christianity, that same techno-humanistic artifact will simultaneously function as a “remedy,” spurring a Nietzschean transvaluation of values—that is, such a “cure” will arrive as a technological Zarathustra, forcing a confrontation with meaning, bringing news that “the human being is something that must be overcome” and urging us to ask anew, “what have you done to overcome him?” At the very least, as Steve Fuller recently pointed out in an interview, “transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection.” For those sympathetic to Leon Kass’ brand of repugnance, such suggestions are poison, and yet for a transhumanist such suggestions are a remedy to the glitch called death and the ways in which we relate to our finitude.

A more mundane example of the simultaneous danger and saving power of technology might be the much-hyped Google Glass—or in more transhuman terms, having Google Glass implanted into one’s eye sockets. While this procedure may conceal other ways of understanding the spaces and people surrounding the wearer other than through the medium of the lenses, the lenses simultaneously have the power to reveal entirely new layers of information about the world and connect the wearer to the environment and to others in new ways.

With these examples it is perhaps becoming clear that by re-casting the essence of transhumanism as a pharmakon instead of an either/or dichotomy of purely techno-optimistic panacea or purely techno-pessimistic miasma, a more inclusive picture of transhumanist ontology emerges. Transhumanism can be both—cause and cure, danger and savior, threat and opportunity. Max More’s analysis, too, has a pharmacological flavor in that transhumanism, though committed to improving the human condition, has no illusions that, “The same powerful technologies that can transform human nature for the better could also be used in ways that, intentionally or unintentionally, cause direct damage or more subtly undermine our lives” (4).

Perhaps, then, More might agree that as a pharmakon, transhumanism is a Schrödinger’s cat always in a state of superposition—both alive and dead in the box. In the Copenhagen interpretation, a system stops being in a superposition of states and becomes either one or the other when an observation takes place. Transhumanism, too, is observer-dependent. For Ray Kurzweil, looking in the box, the cat is always alive with the techno-optimistic possibility of download into silicon and the singularity is near. For Ted Kaczynski, the cat is always dead, and it is worth killing in order to prevent its resurrection. Therefore, what the foregoing analysis suggests is that transhumanism is a drug—it is both remedy and poison—with the power to cure or the power to kill depending on who takes it. If the essence of transhumanism is elusive, it is precisely because it is a pharmakon cutting across categories ordinarily seen as mutually exclusive, forcing an ontological quest to conceptualize the in-between.

References

Derrida, Jacques. “Plato’s Pharmacy.” In Dissemination, translated by Barbara Johnson, 63-171. Chicago: University of Chicago Press, 1981.

Fuller, Steve. “Twelve Questions on Transhumanism’s Place in the Western Philosophical Tradition.” Social Epistemology Review and Reply Collective, 19 April 2017. http://wp.me/p1Bfg0-3yl.

Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow. HarperCollins, 2017.

Heidegger, Martin. “The Question Concerning Technology.” In Basic Writings, edited by David Farrell Krell. Harper & Row, 1977.

More, Max. “The Philosophy of Transhumanism,” In The Transhumanist Reader, edited by Max More and Natasha Vita-More, 3-17. Malden, MA: Wiley-Blackwell, 2013.

Rinella, Michael A. Pharmakon: Plato, Drug Culture, and Identity in Ancient Athens. Lanham, MD: Lexington Books, 2010.

Author Information: Justin Cruickshank, University of Birmingham, j.cruickshank@bham.ac.uk

Cruickshank, Justin. “Meritocracy and Reification.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 4-19.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3zi

Please refer to:

Image credit: russell davies, via flickr

My article ‘Anti-Authority: Comparing Popper and Rorty on the Dialogic Development of Beliefs and Practice’ rejected the notion that Popper was a dogmatic liberal technocrat who fetishized the epistemic authority of science and the epistemic and ethical authority of free markets. Instead, it stressed how Popper sought to develop the recognition of fallibilism into a philosophy of dialogue where criticism replaced appeals to any form of dialogue-stopping source of epistemic and ethical authority. The debate that followed in the SERRC, and the book based on this (Democratic Problem-Solving: Dialogues in Social Epistemology), ranged over a number of issues. These included: the way prevailing traditions or paradigms heavily mediate the reception of ideas; whether public intellectuals were needed to improve public dialogue; the neoliberal turn in higher education; and the way neoconservatism is used to construct public imaginaries that present certain groups as ‘enemies within’.

Throughout these discussions Ioana and I argued against elitist and hierarchical positions that sought to delimit what was discussed, or who had the right to impose the terms of reference for discussion, based on an appeal to some form of epistemic-institutional source of knowledge. Horizontalist dialogue between different groups tackling problems caused by neoliberalism and prejudice was advocated in place of vertical instruction, where an expert or political elite set the terms of reference or monologically dispensed ideas for others, presumed to be passive and ignorant, to accept. Our main interlocutor, Raphael Sassower, put more emphasis on appeals to epistemic-institutional sources of authority, arguing, for instance, that public intellectuals were of use in shaping how lay agents engaged with the state.

One issue implicitly raised by this was that of whether a meritocracy, if realised, would make liberal capitalism legitimate, by removing the prejudice and structural disadvantage that many groups face. I argue that attempts to use this concept to legitimise liberal capitalism end up reifying all agents, no matter what their place in the status-class hierarchy. This reification undermines the development of a more dialogic democracy where people seek to work with others to gain control over the state and elites. In place of both well-meaning and more cynical—neoliberal—appeals to meritocracy rewarding educational-intellectual performance with a well-paid job, it is argued that the focus needs to be on critical pedagogy which can develop a more dialogic education and democracy. Such an approach would avoid reification and the legitimisation of existing hierarchies by rejecting their claims to epistemic and ethical authority.

Education, Economics and Punishment

Protestors responding to Edward Snowden’s 2013 revelations about the extent of surveillance carried out by the NSA (National Security Agency) held placards saying that Orwell’s ‘1984’ was a warning and not an instruction manual. Decades earlier the socialist, social reformer and Labour MP Michael Young witnessed a change in language akin to seeing the phrase ‘double plus good’ come into popular use to appraise government policies.

Young supported many successful educational reforms. These included the reduction in ‘grammar schools’, which are state schools that select pupils, and the removal in most counties of the ‘11-plus’, which was the intelligence test used to select a small number of pupils for grammar schools. While children at grammar schools studied A levels and were expected to go to university, the rest were expected to leave school at 16 for unskilled jobs or, if they were lucky, apprenticeships. For critics of the 11-plus and grammar schools, the test’s objectivity was at best moot and its consequence was to reinforce not just an economic hierarchy but an affective-symbolic status hierarchy too. The majority who did not go to grammar school and university were constructed as ‘failures’ who deserved to be in subordinate positions to those constructed as ‘naturally’ superior.

Given the very small number of working class pupils who passed the 11-plus the consequence of this was to legitimise the existing class hierarchy by presenting the middle class and upper class as naturally superior. Interestingly, following the drive to create a mass higher education system in the UK, it has become evident that pupils from ‘public schools’ (that is, fee paying schools) often fare worse at university than working class pupils with the same or similar A level grades, because the latter got the grades with fewer resources spent on them. Such working class pupils would have attended ‘comprehensive’ (non-selective) state schools (Preston 2014).

Young also helped establish the Open University (OU), which allowed mature students to study for a degree. Many OU students who successfully graduated had failed their 11-plus, and the OU attracted academics regarded as famous intellectuals, such as Stuart Hall, to design courses and deliver some of the lectures, which were broadcast on BBC2 (a state-owned TV channel dedicated to ‘high cultural’ and educational programming). Unfortunately though, recent studies have confirmed that class bias exists in job selection.

Graduate job applicants are judged in terms of their accent and mannerisms, with ‘posh’ behaviours being taken as a proxy for intelligence, or at least a signal that the applicant will be easier to work with than someone lacking the same ‘cultural capital’ (Weaver 2015). Furthermore, rich parents are able to fund their student-offspring’s unpaid internships in corporations and government in expensive cities, creating networking opportunities and a way to build experience for the CV, with this ‘social capital’ being impossible to acquire for students who do not come from rich backgrounds (Britton, Dearden, Shephard and Vignoles 2016). People may respond to this by calling for an increased ‘meritocracy’ but Young, who coined the term ‘meritocracy’, was opposed to the idea of legitimising a new hierarchy.

As a warning about a society that wished to eschew egalitarianism, he wrote ‘The Rise of the Meritocracy: 1870-2033’ (1994 [1958]). This was a satirical vision of a dystopian future where class hierarchy based on a small number of extremely rich individuals having power because of inherited wealth and privilege had been replaced by a hierarchy based on ‘merit’ defined as intellectual ability plus effort. Young coined the term ‘meritocracy’ to define the latter type of society. While this is now considered an honorific term, Young used his neologism in a pejorative way. Obviously Young saw the existing class hierarchy based on privilege as illegitimate, but for him replacing it with a meritocracy was unacceptable because such a society would end up with a rigid and absolute hierarchy between those seen as the deserving powerful and rich and those seen as the undeserving mass with no power and money. A meritocracy would be ruthless and inhuman for segregating people and defining many as biologically inadequate and of lower worth in every sense than others.

In his book the narrator describes how with no intelligent leaders the lower classes will remain at worst sullen or rebellious in a directionless way which the police ‘with their new weapons’ will be able to repress efficiently. Those not in the elite are spoken of in a dehumanised way as chattel which reflects the way the old class privileged elite saw the working and middle classes, having been socialised at private school and Oxbridge into the view that the upper class were entitled to rule. At the end of the book the publisher inserts a footnote to say the narrator has been killed at Peterloo. Young then saw the defining concept of his dystopian vision become used by all mainstream politicians and commentators to assess policies and the normative aspirations that were meant to inform them. He was particularly incensed by the way New Labour under Tony Blair endorsed the principle of meritocracy.

In 2001 Young wrote an article for the Guardian entitled ‘Down with Meritocracy’ where he lambasted those who had been selected through meritocratic education, as he saw it, arguing that they were ‘insufferably smug’ and so self-assured that ‘there is almost no block on the rewards they arrogate to themselves’ (Young 2001). He hoped for New Labour to use progressive taxation to tackle the greed and power of the new meritocratic elite but realised that would mark a big change away from New Labour’s pro-capitalist values.

Against Young, I hold that while progressive policies narrowed the income gap in the post-war years, class privilege and widening income inequality has defined UK capitalism since the rise of neoliberalism in the late 1970s. In graduate recruitment the top jobs are allocated not on grades—or ‘merit’—alone but on class background and internship experience that can only be attained from a rich background, and the amount of wealth accrued by those in the top 1% is increasing, while others have a reducing share of national wealth (Harvey 2005). Indeed, middle class jobs are now becoming precarious with a lot of people in both the middle and working class being forced to become self-employed and pay for their own training, have no sick pay and holiday pay, and be entirely responsible for their pension, etc. This is presented as liberating employees but is a sign of the current weakness of deunionised labour to resist the imposition of insecurity following a recession (Friedman 2014).

After Young’s death, the university fee was tripled to £9000 (in 2012) and the government’s accountants estimate that around half of these loans will not be repaid in full (McGettigan 2013). The changes to higher education did not end there, and the current Conservative government hoped to ‘liberalise’ the ‘market’ in higher education, in England, by encouraging the extensive start-up of for-profit providers, through deregulation, despite the problems this created in the US, which the Harkin Report catalogued. Resistance from the House of Lords, and a desire to push the legislation through Parliament before it was prorogued in May for the hastily called June General Election (widely seen as a vote on the Conservative vision of Brexit), led to compromise, with new providers still needing to be validated by existing providers.

The Conservatives also set up a new audit regime to measure teaching ‘excellence’, called the ‘Teaching Excellence Framework’ (TEF). This will measure teaching quality in part by using employment data and data from the National Student Survey (NSS), completed by all third-year undergraduates, despite the NSS being specifically not designed to be used in a comparative way, with NSS data not furnishing meaningful deviations from the mean (Cheng and Marsh 2010; HEFCE 2001). A high TEF score would then be used to permit universities to raise the tuition fee in line with inflation (for discussion of these proposed changes see: Cruickshank 2016, 2017; Holmwood 2015a, 2015b).

The Lords argued that the fee increases had to be decoupled from TEF scoring. In a compromise, the fee can rise with inflation every year, for institutions who will take part in the TEF, with no link to a TEF score, until 2020, at which point full-inflation rises will be connected to TEF scoring. One possible consequence of linking scores to fee increases is that grade inflation will continue and that recruitment will be biased towards students from more privileged backgrounds, in Russell Group universities and middle ranking universities, with such students being seen as more likely to be employment ‘successes’, thanks to class privilege.

Such moves mark an intensified attempt to redefine students, definitively, as customers. For the Conservatives, those in Labour who supported the Browne Review that led to the tripling of the fee, and the Liberal Democrats who were in coalition with the Conservatives, education, and especially higher education, is to be defined in terms of customers making ‘investments’ in their ‘human capital’ to gain market advantage over other students competing for jobs. Education was not to be seen as a good in itself, and as good for fostering the development of a critical and informed public.

For many politicians, students were not to see education as a ‘public good’ (with a well-informed critical public benefitting democracy) but a ‘positional good’ in market competition (Holmwood 2011). Brown (2015) argues that under neoliberalism a market rationality becomes ubiquitous with domains outside market competition being defined as analogous to competitive market relations. Here though education is redefined as being an actual commodity for instrumental use in competitive market relations between customers of human capital seeking advantage over each other. All of this is quite overt in the government’s Green Paper and White Paper on changing higher education. In these documents, it is made clear that customers are expected—and ‘enabled’, by the changes proposed—to make the correct investment in their human capital.

Customers will have TEF data and government controlled price signals to go on when it comes to judging the usefulness of a human capital investment and they will be further enabled by having a greater range of providers to choose from, with for-profits offering lower priced vocational training degrees, assumed to be more attractive to potential customers in ‘hard to reach’ disadvantaged communities. The government documents also make it clear that the customer is to be of use to the economy and to not be a burden by being underemployed and failing to pay back all their student loan (for discussions of these issues see: Collini 2012, 2017; Cruickshank 2016; the debates between Cruickshank and Chis, and Sassower, in Cruickshank and Sassower 2017; Holmwood 2011; Holmwood, Hickey, Cohen and Wallis 2016).

Obviously, there is a tension with such arguments. On the one hand, the market is seen to be a way to realise a meritocracy, with customers investing in the right human capital to succeed in a zero-sum competition with fellow customers. On the other hand, the market is not so much just a means to realise meritocracy for the benefit of competitive individuals, but is instead an end in itself that individuals need to support with correct investment choices. The consequence of this is that if individuals are unemployed or underemployed, it is due to a personal failure to make the right investment choice. Moreover, if the individual is unemployed or underemployed it is not just deemed a matter of personal failure but a matter of their supposed fecklessness harming all by undermining economic productivity. Failure to make the right human capital investment is deemed a moral failure by the customer who eschewed the information provided by the audit regime to pursue a whim, with this costing the economy as a whole.

As part of this narrative, the Conservatives clearly state that the economy needs more ‘STEM’ (science, technology, engineering and maths) graduates, and thus fewer humanities or social science graduates. To be an unemployed or underemployed sociology or philosophy graduate is thus, on the Conservatives’ view, to be a feckless consumer. Despite the emphasis on objectivity in STEM subjects, it is ironic to note that the Conservatives’ case about a lack of STEM graduates undermining economic performance rests on a problematic use of a tiny literature and ignores the fact that the subject with the highest unemployment rate is computer science (Cruickshank 2016). It is also worth noting that many MPs and leading figures in journalism and broadcasting studied PPE (philosophy, politics and economics) at Oxford, having attended expensive elite public schools.

An increasingly punitive approach, which from an economic point of view is dysfunctional, is now being pursued against individuals deemed to have failed in their moral duty to serve the economy. Contrary to the liberal fear of dogmatism stemming from normative commitments to ends, the Conservatives (and many in Labour too), hold that the end justifies the means, and so the end of protecting the economy—which is an end in itself—is taken to justify means that undermine the economy. If the Party want 2 + 2 = 5 it will obtain.

William Davies, in a recent article in the New Left Review, explored how the latest phase of neoliberalism engages in increasingly severe punishments for being unemployed. For Davies (2016), what he terms ‘neoliberalism 3.0’ (following earlier phases of establishing and then normalising neoliberalism), is defined by its vengeance against those deemed to have failed. Policies such as ‘sanctioning’ welfare claimants (that is, removing benefit payments for a period of weeks or months) for trivial problems, such as arriving 5 minutes late to an interview with a welfare bureaucrat, even if the lateness was not their fault, do nothing to increase economic productivity but are relentlessly pursued. Such policies prevent people from entering the labour market, because the removal of benefits creates severe stress and requires time to access foodbanks and appeals processes, in place of job hunting and being well enough to attend job-interviews. Nonetheless, sanctions are continually being imposed as extremely punitive punishments to make life far worse for those already experiencing hardship. As Davies puts it:

In contrast to the offence against socialism [in the 1980s], the ‘enemies’ targeted now are largely disempowered and internal to the neoliberal system itself. In some instances, such as those crippled by poverty, debt and collapsing social-safety nets, they have already been largely destroyed as an autonomous political force. Yet somehow this increases the urge to punish them further (2016, 132).

Conservative rhetoric sought to demonise those receiving benefits, defining them as 'shirkers' given housing and money by the welfare state as a reward for fecklessness, at the expense of the 'hard-working strivers' who 'got up early to see the curtains still closed in the house of the shirker claimants' whom they supported with their taxes. The Conservatives thus encouraged working people to feel nothing but resentment and hatred towards the unemployed.

Let’s explore two tensions in contemporary neoliberalism. First, there is the tension between technocracy and affect. On the one hand, neoliberals seek to reduce normative political questions about reforms to ‘value-neutral’ / technocratic questions about regulating objective market forces. Critics are quick to point out that neoliberalism is itself a normative position, with a value driven commitment to corporate capitalism being facilitated by state policies and spending, contrary to the anti-interventionist / free market rhetoric (see for instance: Davies 2014; Van Horn and Mirowski 2009). One example of this is the rise of private prisons in the US. Here in the UK, the state has been tendering out NHS services to corporations and the Conservatives hoped to reconstruct the market in higher education to facilitate for-profits. All of which means that neoliberalism is another form of interventionism (Cruickshank 2016). Furthermore, any notion of market forces ever being objective sui generis forces is erroneous given that they always already presume a legal and political framework, and certain sets of social expectations about contractual relations and the importance of work to define selfhood in modernity, etc. On the other hand, the claim about the need for politics to be reduced to the technocratic administration of objective market forces sits alongside the state constructing imaginaries that are meant to generate emotional and even visceral appeal.

Individuals are encouraged not only to resent and hate those classed as moral failures who failed to serve the economy, but to recognise their own moral responsibility to serve the economy and be happy. Individuals need to be happy so as to be ever more efficient at work. Happiness is meant to increase despite the rise in job insecurity that comes with temporary contracts and the replacement of salaried staff by self-employed contractors (for discussion see for instance: the debates in Cruickshank and Sassower 2017; Davies 2014, 2015, 2016). An affective hierarchy is sought whereby those who are 'winners' for the moment despise 'losers' and feel happy to fulfil their moral duty as winners to serve the economy, while also feeling ever more insecure, an insecurity which cannot be allowed to turn into anxiety and depression, for that may result in a winner becoming a despised loser. People are to be broken up into discrete bits, with insecurity boxed off from happiness.

Second, the political statements and policies of the Conservatives are contradictory, with the economy cast as a meritocratic means to serve individuals, as an end in itself, and as an end that is to be protected by attacking 'failures' in a way that undermines the economy. From a technocratic point of view, the punitive policies are problematic and contradict the notion of objectively managing 'market forces'. However, neoliberalism is not just normative rather than value-neutral; it is affective too. This means that markets are not expressions of 'human nature' but are engineered with the view that they ought to serve corporate interests, and that people need to be affectively engineered to fit such markets. Such affective engineering means gearing up people's emotions to make them want to be happy-efficient means for corporate profit making (as more productive employees), and making them reduce worth to financial worth, with losers seen as less-human / non-human / 'worthless' objects of hate.

Orwell was right to hold that controlling language helps control thought. More than this though, demonising language and punitive policies can combine to control thought. Control here can be manufactured not only by seeking to preclude criticism of the state's treatment of people, by setting the terms of reference in an argot of morally correct winners and people who choose moral-economic failure, but also by removing any affective motivation to see those demonised as people in need of ethical-political defence from punitive policies. Preventing some people from entering the labour market may undermine the espoused focus on pure market efficiency, but it will not damage corporate profit making, which always has a ready supply of labour, and it does allow for more effective corporate plutocracy through affective divide and rule.

The normative end of serving the corporate economy is served by creating affective hierarchies to preclude unity and make people fearfully seek happiness. One way of thinking about this is to see it in terms of a cost-benefit analysis, where the small cost of undermining the employment potential of those seeking work is outweighed by the benefit of undermining protest by presenting the losers in the game rigged for corporate victory as objects of hate beyond dialogue and recognition as fellow democratic subjects.

People would seem to have to live in a state of severe double-think, embracing the moral injunction to be happy to serve the economy whilst hating those moral and economic failures who failed to serve the economy, in a condition of increasing precarity in the labour market. Such a condition has to entail quite a high degree of cognitive dissonance. All of this is rendered feasible by a process of reification. People defined by politicians and the right-wing press as voiceless moral failures who failed to serve the economy become demonised objects, with their existence as subjects that have complex histories in difficult times being occluded, while government policies reduce welfare and serve corporate profit-making. The process of reification does not stop there.

Other workers come to be perceived as threatening objects in a ceaseless competition. And, thanks to social media, now complemented by the drive to require happiness in order to be efficient at work, other employees, colleagues and even friends are seen as threatening objects reduced to their expressions of happiness: lives are reduced to discrete representations of happiness via uploaded photos on Facebook and their 'likes', and via the presentation of the happy-efficient self, using technology to self-quantify exercise to maximise happiness and efficiency, etc. Ultimately, those deemed to be winners become reified too, for they are not ends in themselves but defined as of worth solely as a means for the economy to prosper.

As cogs in a machine they have value, and just as a broken cog is worthless junk because it has no value in itself, so too people who cease to be deemed of use to the economy are deemed worthless. This reification can enable the fragmented self to continue in a less than secure environment and to accept Conservative policy as an affective whole even if it has practical contradictions, for the reified self takes the statements and policies of the Conservatives at face value rather than seeking out contradictions. The only way to avoid being demonised is to be validated as a happy and efficient employee, defining oneself purely as a means, and not questioning the source of validation. The state becomes an ethical authority, rolling back the self and making the private self contingent on public politics and the corporate interests behind this. In all of this the economy too becomes reified, with the human relations and exploitation involved being occluded, as the economy becomes an ethical object, which the state serves, gaining its ethical authority from serving this object of veneration.

While liberals reject the charge from Marxists and anarchists that the state serves the economy, neoliberalism, especially in its punitive 3.0 form, does come to present the state as deriving its legitimacy from ethically serving the economy, although in this form the actual economic relations between corporations and politics are obscured. Accepting the ethical authority of the state means the reified winner can self-perceive not as a subject with affective bonds and socio-economic similarities to other subjects, but as an object cut off from other threatening objects or objects to despise, with happiness and worth gained from its conformity to the demands of the economy.

Naturalising Hierarchy

Recently (10 April 2017) BBC Radio 4 broadcast a documentary called 'The Rise and the Fall of the Meritocracy', which sought to assess the contemporary relevance of Michael Young's book. The programme was written and hosted by Toby Young, Michael Young's son, who writes for the Spectator (which he also co-edits), the Telegraph and the Daily Mail, all of which are right-wing publications. Before exploring Toby Young's argument, it is useful to situate his approach to meritocracy by sketching out how liberalism has sought to justify the existence of chronic poverty alongside capitalism creating enormous wealth for a few.

For liberals, liberalism is legitimate because it affords equality of opportunity. The existence of chronic poverty and unemployment throughout the history of liberal capitalism therefore raises a difficult issue for liberals. For if prejudice precludes people from getting jobs, or if the economy systematically fails to produce sufficient job opportunities, then the legitimacy of liberalism is heavily compromised or negated. Nineteenth-century liberals dealt with this by holding that there was a biologically defective underclass given to sloth, crime, addiction and sexual irresponsibility, and unable to face the discipline of work. While philanthropy was to be extended to the 'respectable working class' who would work hard, the 'unrespectable working class' (the underclass) were to be denied charity and even subjected to sterilisation (indeed, sterilisation policies continued well into the twentieth century). Why a legitimate system for distributing wealth meant that the 'respectable working class' needed charity in addition to work was an issue left unaddressed.

For later neoliberal politicians and commentators, from the 1970s and 1980s onwards, the answer to the question of why chronic poverty existed was that a deviant underclass subculture had been created by the welfare state, which socialised children into welfare dependency. Here a supposed lack of discipline in the home, caused by the lack of a working father and by children being raised by a mother on welfare, was taken to lead to educational and then employment failure, the pursuit of immediate gratification through drugs, alcohol and sex, and crime to pay for the drugs and alcohol. Concepts of the underclass are used to reinforce patriarchy as well as the class structure. In the UK, the Conservatives argued that the state, and especially the welfare state, had to be 'rolled back' to undermine the development of a 'something for nothing culture' whereby people wanted welfare in place of working.

Charles Murray is the main ideologue for the neoliberal policy of removing the welfare state. Murray argued in Losing Ground (1984) that people in the US used to work their way up from very low paid jobs to jobs with better incomes, until well-meaning policies made it more economically worthwhile to claim benefits than to work. Murray used a thought-experiment to discuss this, and his claims about the real value of welfare increasing are disputed (Wilson 1990). For Murray, those choosing welfare over work were just as rational as other individuals, because they were simply responding to external economic stimuli. Although Murray does not explicitly discuss rational choice theory (RCT), his position is a form of RCT and, as critics of RCT hold, it is determinist, which would remove any normative component from the theory, contrary to his intentions. People are seen, in effect, as automata that react to positive and negative reinforcement stimuli, with the former being those that enable the most efficient way to realise 'utility', or material self-interest.
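To make the structure of this reading concrete, here is a minimal sketch in Python of the kind of choice model critics attribute to Murray; the option names and payoff figures are invented for illustration and are not Murray's own.

```python
# A minimal, hypothetical sketch of the rational-choice reading discussed above:
# the agent is an automaton that picks whichever option maximises material payoff.
# The options and weekly payoff figures are invented assumptions, not Murray's data.

def choose(options):
    """Return the option with the highest net payoff; nothing else enters in."""
    return max(options, key=options.get)

# If benefits outpace low-paid work, the 'rational' response follows automatically.
weekly_payoffs = {"low_paid_work": 120.0, "welfare": 140.0}
print(choose(weekly_payoffs))  # -> 'welfare'
```

The point of the sketch is that the 'choice' is fixed entirely by the external payoffs: change the stimuli and the output changes. This is the determinism critics point to, and it is why such a model cannot, contrary to Murray's intentions, underwrite moral blame.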

Murray's definition of the initial choice of welfare over work as being just as rational as other individuals' behaviour was, though, disingenuous, because Murray wanted to put the blame on reformers for not understanding how humans are motivated, so as then to argue that reformers had created an underclass that was welfare-dependent, criminal and intrinsically less able than other people. Do-gooder reformers failed to understand human nature, for Murray. One group he despised was defined as normal so as to strengthen his critique of another group he despised. In his highly controversial book 'The Bell Curve: Intelligence and Class Structure in American Life' (1994), co-authored with Herrnstein, the argument was made that innate intelligence, rather than parental social class or environmental factors, was the best predictor of economic success or failure. Murray and Herrnstein also argued that 'racial' differences were to be accounted for in terms of differences in intelligence, while trying to avoid controversy with the caveat that environmental factors may play a role as well. Murray also came to the UK and wrote in 'The Emerging British Underclass' (1990) of a deviant sub-culture in the UK socialising children into welfare dependency and crime, now dropping any tacit reference to RCT.

Unsurprisingly, sociologists have rejected the concept of the underclass in general and Murray's arguments in particular. For most sociologists, the concept of an underclass is an ideological and not a social scientific concept, because there is no empirical basis for holding that the cause of poverty is a deviant sub-culture caused by welfare dependency. The concept homogenises people into a category that has been developed specifically to demonise them. Against the view that chronic unemployment and poverty are a result of welfare dependency, it is often argued that the deindustrialisation that started in the 1980s, which created major structural unemployment, is the main cause of contemporary chronic poverty (see MacDonald and Marsh 2005 for a good discussion of these issues).

Toby Young began his programme by presenting the votes for Trump and Brexit as a populist revolt against elites. Michael Sandel was then interviewed, and he spoke of 'meritocratic hubris', whereby the rich and powerful smugly present themselves as deserving winners and the rest, implicitly at least, as losers. For Young, the recent populist revolt was to be seen as similar to that envisioned by his father, with the masses reacting to the meritocratic hubris of the economic elite. Presenting the vote for Brexit in such a way is problematic, though, because seeing these events as a populist reaction to elites by those resenting their implicit or explicit classification as losers misses the point that many who voted for Brexit were not struggling financially but were older, more affluent voters in the south of England: traditional 'Tory voters' in Conservative safe seats (Hennig and Dorling 2016).

Such voters were responding to hierarchies, but they were seeking not to challenge the existing hierarchy but to reinforce it, by 'taking back control' of UK borders by keeping immigrants and migrants out. Prior to the Brexit referendum, the Conservatives had done much to inscribe a neoconservative imaginary that presented Muslims as an internal threat, and immigrants, migrants and refugees as an external threat. The Conservatives had been aided in this by the right-wing tabloids, especially the Daily Mail, which Toby Young writes for. It is odd that someone discussing rule by a cognitive elite should define as a populist reaction against elites a vote carried by wealthier voters, influenced by a paper he writes for, in support of a policy many in the Conservative Party championed.

Again, we can speak of reification, for while those politicians supporting the vote for Brexit, and the tabloid press, presented people from outside the UK as threatening objects, the 'left-liberal' media reacted by presenting immigrants and migrants as objects of use for the economy, or as refugee objects of pity. The terms of reference on all sides of the debate set up a dualism between a national subject within the national borders and external objects beyond those borders, with the main debates then being whether those objects were a threat or not, or of use or not. And of course, as a narrative device in the programme, those deemed losers in a competitive, meritocratic society were reduced to being threatening objects. Economic losers became demonised as a potentially threatening enemy within, following Conservative rhetoric in the 1980s about the victims of deindustrialisation in the north of England being enemies within (for discussion of this see Bruff 2014; Cruickshank and Sassower 2017; Hall 1983). The question posed by the programme became how to justify the unequal outcomes of a meritocracy and deal with the threat of violence from the losers, defined as a homogeneous mass of people lacking ability and intelligence. All of which was an unacknowledged return to Thatcher's 'authoritarian populism', which demonised the northern industrial unionised working class who needed to be defeated to move to a post-industrial, deunionised, low-pay, service-sector neoliberalism (Hall 1983). Against these, Thatcher sought to mobilise support from those, including those in the southern working class, who identified as 'middle class', aided in this by the selling off of social housing to tenants and the sale of the nationalised industries, with people encouraged to buy shares. Divide and rule, with the winners despising the losers, was the name of the game as Thatcher began the process of creating a welfare state for the rich, through tax policy and the undermining of benefits.

After using Sandel to pose the problem of populist revolt, Young then interviewed Peter Saunders (a controversial right-wing sociologist who had argued that private property ownership was 'natural'), Charles Murray, Rebecca Allen (an economics academic currently running an education think-tank called 'Education Datalab'), Oliver James (a psychologist) and Robert Plomin (a geneticist and expert on intelligence). With the token exception of James, all of these people supported the idea that socio-economic success was about half due to nature, meaning intelligence inherited from intelligent and rich parents, and half due to nurture, with the latter connected to the former, successful parents being taken to create the environment most conducive to their children's success. After interviewing Allen, Saunders and Murray, none of whom are scientists, Young interviewed James, who criticised the lack of scientific evidence showing that intelligence is inherited, before returning to Allen and then moving on to Plomin, in an attempt to use scientific authority epistemically to underwrite and guarantee the claims of Saunders, Murray and Allen.

The case was made that a meritocracy had allowed for class mobility, but that now most of the intelligent people were in the higher positions in society, and so the more recent lack of social mobility was due not to failures in equality of opportunity but to a lack of intrinsic ability in those remaining in the working class and lower middle class. Young then considered whether technology could be used by parents to enhance their offspring's intelligence to make them more successful than their parents, with ordinary people (as threatening objects) demanding that the state provide this for them for free. Whether the future would be stable or not was left open, thus inviting from those who supported his ideas the conclusion that the state needed to be a strong law-and-order state, to tackle threats from a potentially genetically inferior 'enemy within'.

While Toby Young sees himself as a laissez-faire liberal, his position on meritocracy and class mobility is really post-liberal, in the sense that the core liberal principle of equality of opportunity becomes redundant, given that the hierarchy of wealth ends up reflecting a hierarchy in nature (with the most innately intelligent at the top) and a hierarchy in nurture based on nature (with the cognitive elite creating the best environment for the child to develop).

One question raised by this is: do the more intelligent deserve more money? For Sayer (2005), the answer is no, because they already benefit by realising their ability in meaningful work. For Allen, the answer was yes. She was clear, contrary to Conservative rhetoric about 'strivers', that successful people had not 'worked harder' than others (a claim hard to support anyway, given the pressures on many working people), but that their natural ability entitled them to rewards. A meritocracy could thus exist without meaningful class mobility, where those at the 'top' deserved economic fortunes and power not for being 'strivers' but for being naturally superior, with the question then becoming how to deal with the losers, defined by their lack. Yet all of this rested on a non sequitur, which is surprising given Allen's claim to be cognitively superior to the majority of people. If it were the case that some were significantly more intelligent than others, and that this was passed on to their children, there would still be no logical warrant for concluding that such people morally and legally deserve large houses, private education, expensive cars, and pensions that make retirement more comfortable than most people's, while others starve after being sanctioned because a bus broke down. To say 'I have above average intelligence' does not logically entail the conclusion 'therefore I deserve more material rewards'.

One could try to argue that the more intelligent need a motivation to apply their intelligence, but this trades on a theory of human nature as acquisitive, which is speculative and contingent on the rise of capitalism. It also overlooks the problem that, given the choice between meaningful work and meaningless routine work for 40-50 years on the same pay, many, I suspect, would opt for the former. We also need to question the use of science here. Popper (1959, 1963) argued that science is fallible and that induction entails logical problems, so seeking to establish a claim to scientific certainty by talking of current scientific studies supporting a view (a position which James contests) is itself erroneous.

Popper rejected all appeals to epistemic authority, holding that there were no institutional sources of authority and no inner sources of authority, such as the 'authority of the senses' for empiricism. Theories could be corroborated but never justified or verified, and all theories needed to be open to critical dialogue, so that they could be changed. If a self-defining elite defined as certain a theory on which they based their claim to superiority, and defined others as cognitively inferior and thus not worth entering dialogue with, then science, politics and ethics would all, for Popper, go into major decline. We would have a post-liberal closed society.

A naturalised hierarchy would justify a plutocratic meritocracy with no class mobility and would define the majority as useless and threatening objects, in place of an affective hierarchy in which a plutocracy operates with people told to define themselves as happy winners unless they are unemployed. The former could well prompt discord, but it is probable, I imagine, that another affective hierarchy would then be mobilised, one focused on nationalism rather than class. People could be told to accept their natural superiors and to hate the threatening objects from different countries. One of the tabloids Toby Young writes for put a lot of effort into demonising migrants and continually demanding a xenophobic nationalism from its readers.

Critical Pedagogy contra Meritocracy

Calls for more of a meritocracy to make society 'fairer' are popular in politics and the press, but misguided for two reasons. First, the concept of meritocracy is used to legitimise liberalism when the reality is one of the rich getting richer, with the notion of equality of opportunity out of kilter with the reality of capitalism (Harvey 2005). Second, appeals to meritocracy are used to support either an affective hierarchy or a naturalised hierarchy. With the former, people are reified so as to see themselves as happy successful objects. On the one hand, their worth is to be derived, by them and by others, from their ability to serve the economy as an end in itself. On the other hand, the economy is presented as a means to serve individuals, with meritocratic competition rewarding the most able. Those with merit are to see themselves as winners and to despise losers, both as losers and as people letting the economy down. The tension is obscured by the state presenting itself as drawing its ethical authority from serving the national economy: individual reward for winners, with national economic gain as a by-product, and collective punishment for losers, through stigmatising language and punitive, sadistic policies. Winners, though, lose their individuality to become objects of use to the national economy, which requires them to obey the injunction to be happy so as to be of greater use to it, while losers lose their individuality to become despised objects of hate, punished irrespective of their individual complex histories. With the turn to a naturalised hierarchy, nationalism would be intensified, with the likely construction of a xenophobic nationalism.

This is not to support the early Frankfurt School pessimism espoused by Adorno and Horkheimer, for the process of reification is not totalising. People are protesting because of hardship and unfairness to others, in the UK, and in the US against Trump, for instance. What I will argue here is that critical pedagogy offers a way out of the meritocratic morass. Freire's (1993) work on critical pedagogy applies to all forms of education, from schools, colleges and universities to political education. Freire saw education as intrinsically political, not just in terms of its content but in terms of its structure too. He famously rejected what he termed the 'banking approach' to education, in which an authority figure deposits discrete pieces of information in passive learners. The consequence of such an approach is to reinscribe existing hierarchies based on claims to authority. Freire argues that:

the teacher confuses the authority of knowledge with his or her own professional authority, which she and he sets in opposition to the freedom of the students [… and] the teacher is the Subject of the learning process while the students are the mere objects. […] The capability of banking education to minimize or annul the students’ creative power and to stimulate their credulity serves the interests of the oppressors, who care neither to have the world revealed nor to see it transformed (1993, 54).

The banking approach thus entails reification with learners being defined as—and self-defining as—passive objects. These objects are only of value when accepting and serving subjects, who have agency and authority. This didactic hierarchy then serves to legitimise the existing social and political hierarchies, because it trains people to define themselves as passive objects whose only worth is defined in relation to serving authority. As Freire puts it:

More and more the oppressors are using science and technology as unquestionably powerful instruments for their purpose: the maintenance of the oppressive order through manipulation and repression. The oppressed, as objects, as ‘things’, have no purposes except those their oppressors prescribe for them (1993, 42).

The outcome of the banking approach to education is that people’s minds become ‘colonised’ by the oppressors.

Here we can say that attempts to realise meritocracy, where some people from the working class are allowed to move into middle class positions, will entail the banking conception, with private and state education based on pupils accepting the authority of the teacher because of the teacher's institutional position. The rise of audit culture as a government-controlled proxy for market signals under neoliberal interventionism, where the state constructs and controls the market (Cruickshank 2016; Mirowski 2011; Van Horn and Mirowski 2009), exacerbated this problem, for it means that teachers have to teach to the test, ensuring pupils remember and regurgitate factoids that are then forgotten. Education does not encourage a love of learning and a way to develop oneself, but turns pupils into industrial objects processing words to get a number on a piece of paper. Seeking a meritocracy in such circumstances would just entail colonised objects moving on to assume positions in the middle classes, where they remain colonised and where they act to help colonise those below them, issuing orders to people perceived as objects beneath them.

Citing Fromm, Freire argues that the oppressor consciousness can only understand itself through possession: it needs to possess other people as objects so as not to 'lose contact with the world' (1993, 40). In this way, those colonised and rewarded as conforming objects through 'meritocracy', which selects a small number of working class children for middle class jobs, can see others as objects that confirm their status as superior, without realising that the whole process dehumanises them. They will feel rewarded as objects, unaware of their own reification, feeling affirmed through, if not the possession of others, then at least the control of others as objects.

Against this, Freire argues that people cannot liberate themselves or be liberated by a new leader seeking authority over them, but can be liberated through working with others, to gain subjecthood through a sense of collective agency. Such agency would have to be dialogic, with people learning together and no-one acting as a new coloniser. The banking approach has to be avoided by radicals for its use makes them oppressors.

While schools are characterised by the banking approach to education, there is still some scope, despite neoliberal audit culture, for critical dialogic engagement in universities, especially when students and academics work with political groups outside the university; and there is, of course, scope for dialogic engagement between groups of lay agents experiencing socio-economic problems. With this approach to learning, all people are treated as subjects and not objects, so the problem of reification is removed. The structure of education (and of pseudo-dialogue) that reduces people to objects, by making them passive things acted upon by an elite claiming authority, is rejected for its intrinsically oppressive nature. A horizontal approach to learning, where dialogic subjects learn from and with other dialogic subjects, can begin a move to challenge neoliberalism, the power of corporations and the state that serves them.

Defining education and employment in terms of meritocratic selection serves to hide the way liberal capitalism entails the rich getting richer and having control over institutional politics. Just as importantly, it undermines radically critical dialogue, not only by 'blaming the victim' as regards poverty, but by defining all as objects, which can preclude the possibility of recognising others as dialogic subjects. The recent call for a naturalised post-liberal hierarchy suggests that the concentration of resources and opportunities in the hands of a few may make such meritocratic legitimising difficult to sustain, opening up a possible turn to authoritarianism. Appeals to meritocracy, and the reification they entail, need replacing by an approach that can foster dialogue between subjects, which requires the rejection of dialogue-stopping appeals to sources of authority that ultimately entail reification.

References

Britton, Jack, Lorraine Dearden, Neil Sheppard, and Anna Vignoles. ‘What and Where You Study Matter for Graduate Earnings—but so does Parental Wealth.’ Institute for Fiscal Studies. 13 April 2016. http://www.ifs.org.uk/publications/8235. [Accessed 13/06/2016].

Brown, Wendy. Undoing the Demos: Neoliberalism’s Stealth Revolution. Brooklyn (NY.): Zone Books, 2015.

Bruff, Ian. ‘The Rise of Authoritarian Neoliberalism’, Rethinking Marxism 26 no. 1 (2014): 113-129.

Cheng, Jacqueline H. S. and Herbert W. Marsh. ‘National Student Survey: Are Differences between Universities and Courses Reliable and Meaningful?’ Oxford Review of Education 36, no. 6 (2010): 693-712.

Collini, Stefan. What are Universities for? London: Penguin, 2012.

Collini, Stefan. Speaking of Universities. London: Verso, 2017.

Cruickshank, Justin. ‘Anti-Authority: Comparing Popper and Rorty on the Dialogic Development of Beliefs and Practices’. Social Epistemology 29, no. 1 (2015): 73-94.

Cruickshank, Justin. ‘Putting Business at the Heart of Higher Education: On Neoliberal Interventionism and Audit Culture in UK Universities.’ Open Library of Humanities (special issue: ‘The Abolition of the University’), edited by L. Dear (Glasgow) and M. Eve (Birkbeck), 2, no. 1 (2016): 1-33. https://olh.openlibhums.org/articles/10.16995/olh.77/. Accessed 21/04/2017.

Cruickshank, Justin and Raphael Sassower. Democratic Problem-Solving: Dialogues in Social Epistemology. London: Rowman and Littlefield International, 2017.

Davies, William. The Limits of Neoliberalism: Authority, Sovereignty and the Logic of Competition. London: Sage, 2014.

Davies, William. The Happiness Industry: How the Government and Big Business Sold Us Well-Being. London: Bloomsbury, 2015.

Davies, William. ‘The New Neoliberalism’, New Left Review 101 series II (2016): 121-134.

Friedman, Gerald. ‘Workers without Employers: Shadow Corporations and the Rise of the Gig Economy.’ Review of Keynesian Economics 2 no. 2 (2014): 171-188.

Freire, Paulo. Pedagogy of the Oppressed. London: Penguin, 1993 [1970].

Hall, Stuart. ‘The Great Moving Right Show’. In The Politics of Thatcherism, edited by Stuart Hall and Martin Jacques, 19-39. London: Lawrence and Wishart, 1983.

Harvey, David. A Brief History of Neoliberalism. Oxford: Oxford University Press, 2005.

Hennig, Benjamin D. and Danny Dorling. “The EU Referendum.” Political Insight 7, no. 2 (2016): 20-21. http://pli.sagepub.com/content/7/2/20.full. Accessed 18/09/2016.

Higher Education Funding Council for England (HEFCE). 2001. ‘Report 01/55: Information on Quality and Standards in Teaching and Learning.’ http://web.archive.org/web/20040224230748/http://www.hefce.ac.uk/pubs/hefce/2001/01_66.htm. [Accessed 12/12/2015].

Holmwood, John. ‘The Idea of a Public University’, in A Manifesto for the Public University, edited by John Holmwood, 12-26. London: Bloomsbury, 2011.

Holmwood, John. ‘Slouching Toward the Market: The New Green Paper for Higher Education Part 1.’ Campaign for the Public University. 8 Nov. 2015a. http://publicuniversity.org.uk/2015/11/08/slouching-toward-the-market-the-new-green-paper-for-higher-education-part-i/. [Accessed 01/12/2015].

Holmwood, John. ‘Slouching Toward the Market: The New Green Paper for Higher Education Part 2.’ Campaign for the Public University. 8 Nov. 2015b. http://publicuniversity.org.uk/2015/11/08/slouching-toward-the-market-the-new-green-paper-for-higher-education-part-ii/. [Accessed 01/12/2015].

Holmwood, John, Tom Hickey, Rachel Cohen, and Sean Wallis (eds). ‘The Alternative White Paper for Higher Education. In Defence of Public Higher Education: Knowledge for a Successful Society. A Response to “Success as a Knowledge Economy”, BIS (2016)’. London: Convention for Higher Education, 2016. https://heconvention2.wordpress.com/alternative/.

MacDonald, Robert and Julie Marsh. Disconnected Youth? Growing up in Britain’s Poor Neighbourhoods. Basingstoke: Palgrave, 2005.

McGettigan, Andrew. The Great University Gamble: Money, Markets and the Future of Higher Education. London: Pluto, 2013.

Mirowski, Philip. Science-Mart: Privatizing American Science. Cambridge and London: Harvard University Press, 2011.

Murray, Charles. Losing Ground: American Social Policy 1950-1980. New York: Basic Books, 1984.

Murray, Charles. The Emerging British Underclass (Studies in Welfare Series no. 2). London: Health and Welfare Unit, Institute of Economic Affairs, 1990.

Murray, Charles and Richard J. Herrnstein. The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press, 1994.

Popper, Karl R. The Logic of Scientific Discovery. New York: Harper & Row, 1959 [1934].

Popper, Karl R. Conjectures and Refutations: The Growth of Scientific Knowledge. London: Routledge, 1963.

Preston, Barbara. ‘State School Kids do Better at Uni’, The Conversation 16 July 2014. https://theconversation.com/state-school-kids-do-better-at-uni-29155. [Accessed 27/04/2017].

Sayer, Andrew. The Moral Significance of Class. Cambridge: Cambridge University Press, 2005.

Van Horn, Robert and Philip Mirowski. ‘The Rise of the Chicago School of Economics and the Birth of Neoliberalism.’ In The Road from Mont Pelerin: The Making of the Neoliberal Thought Collective, edited by Philip Mirowski and Dieter Plehwe, 139-187. London: Harvard University Press, 2009.

Weaver, Matthew. ‘Poshness Tests’ Block Working Class Applicants at Top Companies.’ Guardian 15 June 2015. https://www.theguardian.com/society/2015/jun/15/poshness-tests-block-working-class-applicants-at-top-companies. [Accessed 10/03/2016].

Wilson, William J. The Truly Disadvantaged: The Inner City, the Underclass, and Public Policy. Chicago: University of Chicago Press, 1990.

Young, Michael. The Rise of the Meritocracy. Piscataway (NJ.): Transaction Publishers, 1994 [1958].

Young, Michael. ‘Down with Meritocracy’, Guardian 29 June 2001. https://www.theguardian.com/politics/2001/jun/29/comment. [Accessed 02/05/17].

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Steve Fuller is Auguste Comte Professor of Social Epistemology at the University of Warwick. His latest book is The Academic Caesar: University Leadership is Hard (Sage).

Shortlink: http://wp.me/p1Bfg0-3yV

Note: The following piece appeared under the title of ‘Free speech is not just for academics’ in the 27 April 2017 issue of Times Higher Education and is reprinted here with permission from the publisher.

Image credit: barnyz, via flickr

Is free speech an academic value? We might think that the self-evident answer is yes. Isn’t that why “no platforming” controversial figures usually leaves the campus involved with egg on its face, amid scathing headlines about political correctness gone mad?

However, a completely different argument can be made against universities’ need to defend free speech that bears no taint of political correctness. It is what I call the “Little Academia” argument. It plays on the academic impulse to retreat to a parochial sense of self-interest in the face of external pressures.

The master of this argument for the last 30 years has been Stanley Fish, the American postmodern literary critic. Fish became notorious in the 1980s for arguing that a text means whatever its community of readers thinks it means. This seemed wildly radical, but it quickly became clear – at least to more discerning readers – that Fish’s communities were gated.

This seems to be Fish’s view of the university more generally. In a recent article in the US Chronicle of Higher Education, “Free Speech Is Not an Academic Value”, written in response to the student protests at Middlebury College against the presence of Charles Murray, a political economist who takes race seriously as a variable in assessing public policies, Fish criticised the college’s administrators for thinking of themselves as “free-speech champions”. This, he said, represented a failure to observe the distinction between students’ curricular and extracurricular activities. Regarding the latter, he said, administrators’ correct role was merely as “managers of crowd control”.

In other words, a university is a gated community designed to protect the freedom only of those who wish to pursue discipline-based inquiries: namely, professional academics. Students only benefit when they behave as apprentice professional academics. They are generously permitted to organise extracurricular activities, but the university’s official attitude towards these is neutral, as long as they do not disrupt the core business of the institution.

The basic problem with this picture is that it supposes that academic freedom is a more restricted case of generalised free expression. The undertow of Fish’s argument is that students are potentially freer to express themselves outside of campus.

To be sure, this may be how things look to Fish, who hails from a country that already had a Bill of Rights protecting free speech roughly a century before the concept of academic freedom was imported to unionise academics in the face of aggressive university governing boards. However, when Wilhelm von Humboldt invented the concept of academic freedom in early 19th century Germany, it was in a country that lacked generalised free expression. For him, the university was the crucible in which free expression might be forged as a general right in society. Successive generations engaged in the “freedom to teach” and the “freedom to learn”, the two becoming of equal and reciprocal importance.

On this view, freedom is the ultimate transferable skill embodied by the education process. The ideal received its definitive modern formulation in the sociologist Max Weber’s famous 1917 lecture to new graduate students, “Science as a Vocation”.

What is most striking about it to modern ears is his stress on the need for teachers to make space for learners in their classroom practice. This means resisting the temptation to impose their authority, which may only serve to disarm the student of any choice in what to believe. Teachers can declare and justify their own choice, but must also identify the scope for reasonable divergence.

After all, if academic research is doing its job, even the most seemingly settled fact may well be overturned in the fullness of time. Students need to be provided with some sense of how that might happen as part of their education to be free.

Being open about the pressure points in the orthodoxy is complicated because, in today’s academia, certain heterodoxies can turn into their own micro-orthodoxies through dedicated degree programmes and journals. These have become the lightning rods for debates about political correctness.

Nevertheless, the bottom line is clear. Fish is wrong. Academic freedom is not just for professional academics but for students as well. The honourable tradition of independent student reading groups and speaker programmes already testifies to this. And in some contexts they can count towards satisfying formal degree requirements. Contra Little Academia, the “extra” in extracurricular should be read as intending to enhance a curriculum that academics themselves admit is neither complete nor perfect.

Of course, students may not handle extracurricular events well. But that is not about some non-academic thing called ‘crowd control’. It is simply an expression of the growing pains of students learning to be free.

Author Information: Adam Riggio, New Democratic Party of Canada, adamriggio@gmail.com

Riggio, Adam. “Subverting Reality: We Are Not ‘Post-Truth,’ But in a Battle for Public Trust.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 66-73.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3vZ

Image credit: Cornerhouse, via flickr

Note: Several of the links in this article are to websites featuring alt-right news and commentary. This exists both as a warning for offensive content, as well as a sign of precisely how offensive the content we are dealing with actually is.

An important purpose of philosophical writing for public service is to prevent important ideas from slipping into empty buzzwords. You can give a superficial answer to the meaning of living in a “post-truth” world or discourse, but the most useful way to engage this question is to make it a starting point for a larger investigation into the major political and philosophical currents of our time. Post-truth was one of the many ideas American letters haemorrhaged in the maelstrom of Trumpism’s wake, the one seemingly most relevant to the concerns of social epistemology.

It is not enough simply to say that the American government’s communications have become propagandistic, or that the Trump Administration justifies its policies with lies. This is true, but trivial. We can learn much more from philosophical analysis. In public discourse, the stability of what information, facts, and principles are generally understood to be true has been eroding. General agreement on which sources of information are genuinely reliable in their truthfulness and trustworthiness has destabilized and diverged. This essay explores one philosophical hypothesis as to how that happened: through a sustained popular movement of subversion – subversion of consensus values, of reliability norms about information sources, and of who can legitimately claim the virtues of subversion itself. The drive to speak truth to power is today co-opted to punch down at the relatively powerless. This essay is a philosophical examination of how that happens.

Subversion as a Value and an Act

A central virtue in contemporary democracy is subversion. To be a subversive is to push society forward against conservative, oppressive forces. It is to commit acts that transgress popular morality while providing a simultaneous critique of it. As new communities form in a society, or as previously oppressed communities push for equal status and rights, subversion calls attention to the inadequacy of currently mainstream morality to the new demands of this social development. Subversive acts can be publications, artistic works, protests, or even the slow process of conducting your own life publicly in a manner that transgresses mainstream social norms and preconceptions about what it is right to do.

Values of subversiveness are, therefore, politically progressive in their essence. The goal of subversion values is to destabilize an oppressive culture and its institutions of authority, in the name of greater inclusiveness and freedom. This is clear when we consider the popular paradigm case of subversive values: punk rock and punk culture. In the original punk and new wave scenes of 1970s New York and Britain, we can see subversion values in action. Punk’s embrace of BDSM and drag aesthetics subverts the niceties of respectable fashion. British punk’s embrace of reggae music promotes solidarity with people oppressed by racist and colonialist norms. Most obviously, punk enshrined a morality of musical composition through simplicity, jamming, and enthusiasm. All these acts and styles subverted popular values that suppressed all but vanilla hetero sexualities, marginalized immigrant groups and ethnic minorities, denigrated the poor, and esteemed an erudite musical aesthetic.

American nationalist conservatism today has adopted the form and rhetoric of subversion values, if not the content. The decadent, oppressive mainstream the modern alt-right opposes and subverts is a general consensus of liberal values – equal rights regardless of race or gender, an imperative to build a fair economy for all citizens, an end to police oppression of marginalized communities, and so on. Alt-right activists push for the return of segregation and even the ethnic cleansing of Hispanics from the United States. Curtis Yarvin, the intellectual centre of America’s alt-right, openly calls for an end to democratic institutions and their replacement with government by a neo-cameralist state structure that replaces citizenship with shareholds and reduces all public administration and foreign policy to the aim of profit. Yet because these ideas are a radical front opposing a broadly liberal democratic mainstream culture, alt-right activists declare themselves punk. They claim subversiveness in their appropriation of punk fashion in apparel and hair, and in their gleeful offensiveness to liberal sensibilities with their embrace of public bigotry.

Subversion Logics: The Vicious Paradox and Trolling

Alt-right discourse and aesthetics claim to have inherited subversion values because their activists oppose a liberal democratic mainstream whose presumptions include the existence of universal human rights and the encouragement of cultural, ethnic, and gender diversity throughout society. If subversion values are defined entirely according to the act of subverting any mainstream, then this is true. But this would decouple subversion values from democratic political thought. In question in this essay – and at this moment in human democratic civilization – is whether such decoupling is truly possible.

If subversion as an act is decoupled from democratic values, then we can understand it as the act of forcing an opponent into a vicious paradox. One counters an opponent by interpreting their position as implying a hypocritical or self-contradictory logic. The most general such paradox is Karl Popper’s paradox of tolerance. Alt-right discourse frames its most bigoted communications as subversive acts of total free speech – an absolutism of freedom that decries as censorship any critique of or opposition to what its proponents say. This is true whether they write on a comment thread, post through an anonymous Twitter feed, or speak on a stage at UC Berkeley. We are left with the apparent paradox that a democratic society must, if we are to respect our democratic values without being hypocrites ourselves, accept the rights of the most vile bigots to spread racism, misogyny, anti-trans and heterosexist ideas, Holocaust denial, and even the public release of their opponents’ private information. As Popper himself wrote, the only response to such an argument is to deny its validity – a democratic society cannot survive if it allows its citizens to argue and advocate for the end of democracy. The actual hypocritical stance is free speech absolutism: permitting assaults on democratic society and values in the name of democracy itself.

Trolling, the chief rhetorical weapon of the alt-right, is another method of subversion, turning an opponent’s actions against herself. To troll is to communicate with statements so dripping in irony that an opponent’s own opposition can be turned against itself. In a simple sense, this is the subversion of insults into badges of honour and vice versa. Witness how alt-right trolls refer to themselves as shitlords, or denounce ‘social justice warriors’ as true fascists. But trolling also includes a more complex rhetorical strategy. For example, one posts a violent, sexist, or racist meme – say, Barack Obama as a witch doctor giving Brianna Wu a lethal injection. If you criticize the post, they respond that they were merely trying to bait you, and mock you as a fragile fool who takes people seriously when they are not – a snowflake. You are now ashamed, having fallen into their trap of baiting earnest liberals into believing in the sincerity of their racism, so you encourage people to dismiss such posts as ‘mere trolling.’ This allows for a massive proliferation of racist, misogynist, anti-democratic ideas under the cover of being ‘mere trolling’ or just ‘for the lulz.’

No matter the content of the ideology that informs a subversive act, any subversive rhetoric challenges truth. Straightforwardly, subversion challenges what a preponderant majority of a society takes to be true. It is an attack on common sense, on a society’s truisms, on that which is taken for granted. In such a subversive social movement, the agents of subversion attack common sense truisms out of a conviction that the popular truisms are, in fact, false, and that their own perspective is true, or at least acknowledges more profound and important truths than what they attack. As we tell ourselves the stories of our democratic history, the content of those subversions was actually true. Now that the loudest voices in American politics claiming to be virtuous subversives support nationalist, racist, anti-democratic ideologies, we must confront the possibility that those who speak truth to power have a much more complicated relationship with facts than we often believe.

Fake News as Simply Lies

Fake news is the central signpost of what is popularly called the ‘post-truth’ era, but it quickly became a catch-all term that refers to too many disparate phenomena to be useful. When preparing for this series of articles, we at the Reply Collective discussed the influence of post-modern thinkers on contemporary politics, particularly regarding climate change denialism. But I don’t consider contemporary fake news as having roots in these philosophies. The tradition is regarded in popular culture (and definitely in self-identified analytic philosophy communities) as destabilizing the possibility of truth, knowledge, and even factuality.

This conception is mistaken, as any attentive reading of Jacques Derrida, Michel Foucault, Gilles Deleuze, Jean-Francois Lyotard, or Jean Baudrillard will reveal that they were concerned – at least on the question of knowledge and truth – with demonstrating that there are many more ways to understand how we justify our knowledge, and the nature of facticity, than any simple propositional definition in a Tarskian tradition can include. There are more ways to understand knowledge and truth than seeing whether and how a given state of affairs grounds the truth or truth-value of a description. A recent article by Steve Fuller at the Institute of Art and Ideas considers many concepts of truth throughout the history of philosophy more complicated than the popular idea of simple correspondence. So when we ask whether Trumpism has pushed us into a post-truth era, we must ask which concept of truth has become obsolete. Understanding what fake news is and can be is one productive probe of this question.

So what are the major conceptions of ‘fake news’ that exist in Western media today? I ask this question with the knowledge that, given the rapid pace of political developments in the Trump era, my answers will probably be obsolete, or at least incomplete, by publication. The proliferation of meanings that I now describe happened in popular Western discourse in a mere two months from Election Day to Inauguration Day. My account of these conceptual shifts in popular discourse shows how these shifts of meaning have acquired such speed.

Fake news, as a political phenomenon, exists as one facet of a broad global political culture in which the destabilization of what gets to count as a fact, and of how or why a proposition may be considered factual, has become fully mainstream. As Bruno Latour has said, the destabilization of facticity’s foundation is rooted in the politics and epistemology of climate change denialism, the source of a wider denial of any real value for scientific knowledge. The centrepiece of petroleum industry public relations and global government lobbying efforts, climate change denialism was designed to undercut the legitimacy of international efforts to shift global industry away from petroleum reliance. Climate change denial conveniently aligns with the nationalist goals of Trump’s administration, since a denialist agenda requires attacking American loyalty to international emissions reduction treaties and United Nations environmental efforts. Denialism undercuts the legitimacy of scientific evidence for climate change by countering the efficacy of its practical epistemic truth-making function. It is denial and opposition all the way down. Ontologically, the truth-making functions of actual states of affairs for climatological statements remain as they always were. What has disappeared is the popular belief in the validity of those truth-makers.

So the function of ‘fake news’ as an accusation is to sever the truth-making powers of the targeted information source for as many people who hear the accusation as possible. The accusation is an attempt to deny and destroy a channel’s credibility as a source of true information. To achieve this, the accusation itself requires its own credibility for listeners. The term ‘fake news’ first applied to the flood of stories and memes flowing from a variety of dubious websites, consisting of uncorroborated and outright fabricated reports. The articles and images originated on websites based largely in Russia and Macedonia, and were then disseminated on Facebook pages like Occupy Democrats, Eagle Rising, and Freedom Daily, which make money using clickthrough-generating headlines and links. Much of the extreme white nationalist content of these pages came, in addition to the content mills of eastern Europe, from radical think tanks and lobby groups like the National Policy Institute. These feeds are a very literal definition of fake news: content written in the form of actual journalism so that its statements appear credible, but communicating blatant lies and falsehoods.

The feeds and pages disseminating these nonsensical stories were successful because the infrastructure of Facebook as a medium incentivizes comforting falsehoods over inconvenient truths. Its News Feed algorithm is largely a similarity-sorting process, pointing a user to sources that resemble what has been engaged before. Pages and websites that depend on clickthrough advertising revenue will therefore cater to already-existing user opinions to boost such engagement. A challenging idea that unsettles a user’s presumptions about the world will receive fewer clickthroughs, because people tend to prefer hearing what they already agree with. The continuing aggregation of similarity after similarity reinforces your perspective and makes changing your mind even harder than it usually is.
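To make the dynamic concrete, here is a minimal sketch of similarity-sorting in the abstract. Nothing in it reflects Facebook’s actual, proprietary News Feed code; the data structures and scoring function are hypothetical, chosen only to show how ranking by resemblance to past engagement compounds into a bubble.

```python
# Hypothetical similarity-sorting sketch; not Facebook's algorithm.
from collections import Counter

def engagement_profile(clicked_items):
    """Aggregate the topics a user has engaged with into weights."""
    profile = Counter()
    for item in clicked_items:
        profile.update(item["topics"])
    return profile

def similarity(item, profile):
    """Score an item by how strongly it overlaps with past engagement."""
    return sum(profile[topic] for topic in item["topics"])

def rank_feed(candidates, clicked_items):
    """Order candidates so the most familiar content surfaces first."""
    profile = engagement_profile(clicked_items)
    return sorted(candidates,
                  key=lambda item: similarity(item, profile),
                  reverse=True)

clicks = [{"topics": ["immigration", "crime"]}, {"topics": ["crime"]}]
feed = rank_feed([{"topics": ["climate"]}, {"topics": ["crime"]}], clicks)
# The "crime" item now outranks the "climate" item; each further click
# skews the profile, and hence tomorrow's feed, further the same way.
```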

Trolling Truth Itself

Donald Trump is an epically oversignified cultural figure. But for my purposes here, I want to approach him as the most successful troll in contemporary culture. In his 11 January 2017 press conference, Trump angrily accused CNN and Buzzfeed of themselves being “fake news.” At first, this seems a transparent act of trolling, a President’s subversive action against critical media outlets. Here, the insulting meaning of the term is retained, but its reference has shifted to cover the Trump-critical media organizations that first brought the term to ubiquity shortly after the 8 November 2016 election. The intention and meaning of the term have been turned against those who coined it.

In this context, the nature of the ‘post-truth’ era of politics appears simple. We are faced with two duelling conceptions of American politics and global social purpose. One is the Trump Administration, with its propositions about the danger of Islamist terror and the size of this year’s live Inauguration audience. The other is the usual collection of news outlets referred to as the mainstream media. Each gives a presentation of what is happening regarding a variety of topics; the two presentations are incompatible, though each may be accurate to a greater or lesser degree in any given instance. The simple issue is that the Trump Administration pushes easily falsified, transparent propaganda, such as the lie about an Islamist-led mass murder in Bowling Green, Kentucky. This simple issue becomes an intractable problem because significantly large spaces in the contemporary media economy constitute a hardening of popular viewpoints into bubbles of self-reinforcing extremism. Thanks to Facebook’s sorting algorithms, there will likely always be a large group of Trumpists who will consider all his administration’s blatant lies to be truth.

This appears to be a problem not for philosophy, but for public relations. We can solve this problem of the intractable audience for propaganda by finding or creating new paths to reach people sealed in comforting information bubbles. There is a philosophical problem, but it is far more profound than even this practically difficult issue of outreach. The possibility conditions for the character of human society itself are the fundamental battlefield in the Trumpist era.

The accusation “You are fake news!” of Trump’s January press conference delivered a tactical subversion, rendering the original use of the term impossible. The moral aspects of this act of subversion appeared a few weeks later, in a 7 February interview that Trump Administration communications official Sebastian Gorka gave to Michael Medved. Gorka’s words first appear to be a straightforward instance of authoritarian delegitimizing of opposition, as he equates ‘fake news’ with opposition to President Trump. But Gorka goes beyond this simple gesture to contribute to a re-valuation of the values of subversion and opposition in our cultural discourse. He accuses Trump-critical news organizations of such a deep bias and hatred of President Trump and Trumpism that they themselves have failed to understand and perceive the world correctly. The mainstream media have become untrustworthy, says Gorka, not merely because many of their leaders and workers oppose President Trump, but because those people no longer understand the world as it is. As Breitbart’s messaging would have it, the reason no longer to trust the mainstream media is their genuine ignorance. And because theirs was a genuine mistake about the facts of the world, the accusation of ignorance and untrustworthiness is actually legitimate.

Real Failures of Knowledge

Donald Trump, as well as the political movements that backed his Presidential campaign and the anti-EU side of the Brexit referendum, knew something about the wider culture that many mainstream analysts and journalists did not: they knew that their victory was possible. This is not a matter of ideology, but a fact about the world: not a matter of interpretive understanding, like the symbolic meanings of a text, object, or gesture, but a matter of empirical knowledge. It is not a straightforward fact like the surface area of my apartment building’s front lawn or the number of Boeing aircraft owned by KLM. Discovering such a fact as the possibility conditions and likelihood of an election or referendum victory, involving thousands of workers, billions of dollars of infrastructure and communications, and millions of people deliberating over their vote or refusal to vote, is a massively complicated process. But it is still an empirical process and can be carried out to varying levels of success and failure. In the two most radical reversals of the West’s (neo)liberal democratic political programs in decades, the press as an institution failed to understand what is and is not possible.

Not only that, these organizations know they have failed, and know that their failure harms their reputation as sources of trustworthy knowledge about the world. Their knowledge of their real inadequacy can be seen in their steps to repair their knowledge production processes. These efforts are not a submission to the propagandistic demands of the Trump Presidency, but an attempt to rebuild real research capacities after the internet era’s disastrous collapse of the traditional newspaper industry. Through most of the 20th century, the news media ecology of the United States consisted of a hierarchy of local, regional, and inter/national newspapers. Community papers reported on local matters, these reports were among the sources for content at regional papers, and those regional papers in turn provided source material for America’s internationally-known newsrooms in the country’s major urban centres. This information ecology was the primary route not only for content, but for general knowledge of cultural developments beyond those few urban centres.

With the 21st century, it became customary to read local and national news online for free, causing sales and advertising revenue for those smaller newspapers to collapse. The ensuing decades saw most entry-level journalism work become casual and precarious, cutting off entry to the profession from those who did not have the inherited wealth to subsidize their first money-losing working years. So most poor and middle class people were cut off from work in journalism, removing their perspectives and positionality from the field’s knowledge production. The dominant newspaper culture that centred all content production in and around a local newsroom persisted into the internet era, forcing journalists to focus their home base in major cities. So investigation outside major cities rarely took place beyond parachute journalism, visits by reporters with little to no cultural familiarity with the region. This is a real failure of empirical knowledge gathering processes. Facing this failure, major metropolitan news organizations like the New York Times and Mic have begun building a network of regional bureaus throughout the now-neglected regions of America, where local independent journalists are hired as contractual workers to bring their lived experiences to national audiences.

America’s Democratic Party suffered a similar failure of knowledge, having been certain that the Trump campaign could never have breached the midwestern regions – Michigan, Wisconsin, Pennsylvania – that for decades have been strongholds of their support in Presidential elections. I leave aside the critical issue of voter suppression in these states to concentrate on a more epistemic aspect of Trump’s victory. This was the campaign’s unprecedented ability to craft messages with nuanced detail. Cambridge Analytica, the data analysis firm that worked for both Trump and leave.eu, provided the power to understand and target voter outreach with almost individual specificity. This firm derives incredibly complex and nuanced data sets from the Facebook behaviour of hundreds of millions of people, and is the most advanced microtargeting analytics company operating today. They were able to craft messages intricately tailored to individual viewers and deliver them through Facebook advertising. So the Trump campaign has a legitimate claim to have won based on superior knowledge of the details of the electorate and how best to reach and influence them.

Battles Over the Right to Truth

With this essay, I have attempted an investigation that is a blend of philosophy and journalism, an examination of epistemological aspects of dangerous and important contemporary political and social phenomena and trends. After such a meditation, I feel confident in proposing the following conclusions.

1) Trumpist propaganda justifies itself with an exclusive and correct claim to reliability as a source of knowledge: that the Trump campaign was the only major information source covering the American election that was always certain of the possibility that they could win. That all other media institutions at some point did not understand or accept the truth of Trump’s victory being possible makes them less reliable than the Trump team and Trump personally.

2) The denial of a claim’s legitimacy as truth, and of an institution’s fidelity to informing people of truths, has become such a powerful weapon of political rhetoric that it has ended all cross-partisan agreement on what sources of information about the wider world are reliable.

3) Because of the second conclusion, journalism has become an unreliable set of knowledge production techniques. The most reliable source of knowledge about that election was the analysis produced by mass data mining of Facebook profiles, the ground of all Trump’s public outreach communications. Donald Trump became President of the United States with the most powerful quantitative sociology research program in human history.

4) This is Trumpism’s most powerful claim to the mantle of the true subversives of society, the virtuous rebels overthrowing a corrupt mainstream. Trumpism’s victory, which no one but Trumpists themselves thought possible, is the greatest achievement any troll could claim. Trumpism has argued its opponents into submission, humiliated them for having lost, then turned out to be right anyway.

The statistical analysis and mass data mining of Cambridge Analytica made Trump’s knowledge superior to that of the entire journalistic profession. So the best contribution that social epistemology as a field can make to understanding our moment is bringing all its cognitive and conceptual resources to an intense analysis of statistical knowledge production itself. We must understand its strengths and weaknesses – what statistical knowledge production emphasizes in the world and what escapes its ability to comprehend. Social epistemologists must ask themselves and each other: What does qualitative knowledge discover and allow us to do, that quantitative knowledge cannot? How can the qualitative form of knowledge uncover a truth of the same profundity and power to popularly shock an entire population as Trump’s election itself?

Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Shortlink: http://wp.me/p1Bfg0-3uu

Editor’s Note: Steve Fuller’s “A Man for All Seasons, Including Ours: Thomas More as the Patron Saint of Social Media” originally appeared in ABC Religion and Ethics on 23 February 2017.

Please refer to:

Image credit: Carolien Coenen, via flickr

November 2016 marked the five hundredth anniversary of the publication of Utopia by Thomas More in Leuven through the efforts of his friend and fellow Humanist, Desiderius Erasmus.

More is primarily remembered today for this work, which sought to show how a better society might be built by learning from the experience of other societies.

It was published shortly before he entered into the service of King Henry VIII, who liked Utopia. And as the monarch notoriously struggled to assert England’s sovereignty over the Pope, More proved to be a critical supporter, eventually rising to the rank of “Lord Chancellor,” his legal advisor.

Nevertheless, within a few years More was condemned to death for refusing to acknowledge the King’s absolute authority over the Pope. According to the Oxford English Dictionary, More introduced “integrity”—in the sense of “moral integrity” or “personal integrity”—into English while awaiting execution. Specifically, he explained his refusal to sign the “Oath of Supremacy” of the King over the Pope by his desire to preserve the integrity of his reputation.

To today’s ears this justification sounds somewhat self-serving, as if More were mainly concerned with what others would think of him. However, More lived at least two centuries before the strong modern distinction between the public and the private person was in general use.

He was getting at something else, which is likely to be of increasing relevance in our “postmodern” world, which has thrown into doubt the very idea that we should think of personal identity as a matter of self-possession in the exclusionary sense which has animated the private-public distinction. It turns out that the pre-modern More is on the side of the postmodernists.

We tend to think of “modernization” as an irreversible process, and in some important respects it seems to be. Certainly our lives have come to be organized around technology and its attendant virtues: power, efficiency, speed. However, some features of modernity—partly as an unintended consequence of its technological trajectory—appear to be reversible. One such feature is any strong sense of what is private and public—something to which any avid user of social media can intuitively testify.

More proves to be an interesting witness here because while he had much to say about conscience, he did not presume the privacy of conscience. On the contrary, he judged someone to be a person of “good conscience” if he or she listened to the advice of trusted friends, as he had taken Henry VIII to have been prior to his issuing the Oath of Supremacy. This is quite different from the existentially isolated conception of conscience that comes into play during the Protestant Reformation, on which subsequent secular appeals to conscience in the modern era have been based.

For More, conscience is a publicly accessible decision-making site, the goodness of which is to be judged in terms of whether the right principles have been applied in the right way in a particular case. The platform for this activity is an individual human being who—perhaps by dint of fate—happens to be hosting the decision. However, it is presumed that the same decision would have been reached, regardless of the hosting individual. Thus, it makes sense for the host to consult trusted friends, who could easily imagine themselves as the host.

What is lacking from More’s analysis of conscience is a sense of its creative and self-authorizing character, a vulgarized version of which features in the old Frank Sinatra standard, “My Way.” This is the sense of self-legislation which Kant defined as central to the autonomous person in the modern era. It is a legacy of Protestantism, which took much more seriously than Catholicism the idea that humans are created “in the image and likeness of God.” In effect, we are created to be creators, which is just another way of saying that we are unique among the creatures in possessing “free will.”

To be sure, whether our deeds make us worthy of this freedom is for God alone to decide. Our fellows may well approve of our actions but we—and they—may be judged otherwise in light of God’s moral bookkeeping. The modern secular mind has inherited from this Protestant sensibility an anxiety—a “fear and trembling,” to recall Kierkegaard’s echo of St. Paul—about our fate once we are dead. This sense of anxiety is entirely lacking in More, who accepts his death serenely even though he has no greater insight into what lies in store for him than the Protestant Reformers or secular moderns.

Understanding the nature of More’s serenity provides a guide for coming to terms with the emerging postmodern sense of integrity in our data-intensive, computer-mediated world. More’s personal identity was strongly if not exclusively tied to his public persona—the totality of decisions and actions that he took in the presence of others, often in consultation with them. In effect, he engaged throughout his life in what we might call a “critical crowdsourcing” of his identity. The track record of this activity amounts to his reputation, which remains in open view even after his death.

The ancient Greeks and Romans would have grasped part of More’s modus operandi, which they would understand in terms of “fame” and “honour.” However, the ancients were concerned with how others would speak about them in the future, ideally to magnify their fame and honour to mythic proportions. They were not scrupulous about documenting their acts in the sense that More and we are. On the contrary, the ancients hoped that a sufficient number of word-of-mouth iterations over time might serve to launder their acts of whatever unsavoury character that they may have originally had.

In contrast, More was interested in people knowing exactly what he decided on various occasions. On that basis they could pass judgement on his life, thereby—so he believed—vindicating his reputation. His “integrity” thus lay in his life being an open book that could be read by anyone as displaying some common narrative threads that add up to a conscientious person. This orientation accounts for the frequency with which More and his friends, especially Erasmus, testified to More’s standing as a man of good conscience in whatever he happened to say or do. They contributed to his desire to live “on the record.”

More’s sense of integrity survives on Facebook pages or Twitter feeds, whenever the account holders are sufficiently dedicated to constructing a coherent image of themselves, notwithstanding the intensity of their interaction with others. In this context, “privacy” is something quite different from how it has been understood in modernity. Moderns cherish privacy as an absolute right to refrain from declaration in order to protect their sphere of personal freedom, access to which no one—other than God, should he exist—is entitled. For their part, postmoderns interpret privacy more modestly as friendly counsel aimed at discouraging potentially self-harming declarations. This was also More’s world.

More believed that however God settled his fate, it would be based on his public track record. Unlike the Protestant Reformers, he also believed that this track record could be judged equally by humans and by God. Indeed, this is what made More a Humanist, notwithstanding his loyalty to the Pope unto death.

Yet More’s stance proved to be theologically controversial for four centuries, until the Catholic Church finally made him the patron saint of politicians in 1935. Perhaps More’s spiritual patronage should be extended to cover social media users.

Author Information: Frank Scalambrino, University of Akron, franklscalambrino@gmail.com

Scalambrino, Frank. “How Technology Influences Relations to Self and Others: Changing Conceptions of Humans and Humanity.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 30-37.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3uf

Please refer to:

Image credit: Rowman & Littlefield International

“Don’t be yourself, be a pizza. Everyone loves pizza.”—Pewdiepie

For the sake of easier and more efficient consumption, this article is written as a response to a series of six (6) questions. The questions are: (1) Why investigate “Changing conceptions of humans and humanity”? (2) What is “technologically-mediated identity”? (3) What are the ethical aspects involved in the technological-mediation of relations to one’s self and others? (4) What is the philosophical issue with “cybernetics”/Why is it considered bad for humanity? (5) What is the philosophical issue with “psychoanalysis” as an applied cybernetics? (6) What does it mean to say that social media eclipses reality?

§1 Why investigate “Changing conceptions of humans and humanity”?

There are two answers to this question. We’ll start with the easier one. First, the book series in which our book Social Epistemology & Technology appears avowedly takes the theme of “Changing conceptions of humans and humanity” as a concern it was established to address. Second, the Western Tradition in philosophy has concerned itself with this theme since at least the time of Plato. Briefly, recall that Plato suggested the technology known as “writing” has adversely affected memory. On the one hand, this is a clear example of a technologically-mediated relation to self and others whose effect alters humans and humanity. Plato thought the alteration was not for the better. Can you imagine what he’d say about “Grammarly”?

Because philosophy in the Western Tradition has concerned itself with this theme for such a long time, there are many philosophers who, with varying degrees of explicitness, address the theme. Two philosophers in particular, who were uniquely positioned in history to make predictions and observations, stand out in the Western Tradition. Those philosophers are Martin Heidegger (1889-1976) and Ernst Jünger (1895-1998). Separately, they both spoke of a world-wide change which they could philosophically see happening. And, importantly, the idea is that because we are in the midst of that change’s aftermath, many of us born into it, it may actually be more difficult for us to see than it was for them. All this goes to the second reason for embracing this important theme.

It is clear that technology has changed the way we relate to self and others. To be completely frank with you: I own and use an iPhone, I typed this document on a laptop, I own a PlayStation and have spent a good deal of energy playing video games in my time; I also listen to music on an iPod, have a LinkedIn account, FaceTime and Skype regularly, and have mindlessly watched YouTube and NetFlix for more hours than I can remember. I say all of this so readers will recognize that I am neither a Luddite nor a curmudgeon. Yet, in addition to all of the technology I use, I care about humanity; I also love philosophy, love thinking along with the philosophers, and have earned a Doctorate degree in philosophy. So, as a responsible person with a PhD in philosophy, it is perhaps my duty to enunciate the Western Tradition’s themes and concerns publicly, especially insofar as we may trace many of the current problems in society today to the presence of various technologies.

Lastly, we should not be intimidated by the difficulty many first encounter when attempting to understand this perennial theme in philosophy. For example, despite its deep history, the winner of the “2015 World Technology Award in Ethics” explicitly referred to this theme in our book (which was published in 2015) as “obscure,” outdated, or irrelevant for the 21st century. Though the word “World” is perhaps a misnomer, since award recipients are “judged by their own peers,” Dr. Shannon Vallor is in fact the current “President of the International Society for Philosophy and Technology (SPT),” and a tenured professor at Santa Clara University “in Silicon Valley.” Importantly, then, in her Notre Dame Philosophical Reviews (2016) review of our book, Dr. Vallor openly admitted her inability to understand the theme and its importance. On the one hand, she explains away her difficulty, noting, “Part of the difficulty is rooted in the book’s structure; for reasons that are never fully made clear by the editor, the chapters are sharply divided into two sections” (Vallor 2016). Because I was surprised to read this accusation (and was almost certain I remembered explaining the reason for the book’s structure), I looked in the book. And, with all due respect to Dr. Vallor, on page one of the book she should have read:

As a volume in the Collective Studies in Knowledge and Society series, this book directly participates in three of the five activities targeted for the series. They are (I) Promoting philosophy as a vital, necessary public activity; (II) Analyzing the normative social dimensions of pursuing and organizing knowledge; and (III) Exploring changing conceptions of humans and humanity [emphasis added]. Whereas both the content and the very existence of this book participate in the first of the targeted activities, the parts of the book are divided, respectively, across the other two activities. (Scalambrino 2015a: 1).

Thus, by the time she got around to calling my contributions to the “Changing conceptions of humans and humanity” section of our book “obscure ruminations,” I realized her “hardball” rhetoric was a substitute for actually engaging the material. For the record, however, I’m not criticizing playing “hardball.” I admire her spirit, and have no issue with “playing hardball.” Therefore, for the reasons noted above, and because even the “President of the International Society for Philosophy and Technology” found this perennial theme in the history of philosophy to be difficult, I hope this article will go toward providing clarity regarding this important theme.

§2 What is “technologically-mediated identity”?

The two most relevant ways to illustrate the notion of “technologically-mediated identity” are in terms of “existential constraints on identity” and “socially-constructed technological constraints and influences.” The basic idea here is that technology allows one to be as fully inauthentic as possible. To begin, there are clearly “existential constraints on identity.” The easiest way to understand this is to think about history. If you were born after the paperclip was invented, then it is not possible for you to invent the paperclip. The fact of your existence when and where it occurs constrains you from inventing the paperclip. And here, “inventor of the paperclip” is understood as a kind of identity. In other words, it is a statement made about someone’s identity, and it can be true.

Now, when we consider “technologically-mediated identity” the idea is twofold. First, the presence of technology makes various identities possible that would not be possible otherwise. Second, even if the presence of technology were only technologically-altering previously available identities, two issues would immediately manifest. On the one hand, technologically-mediated identities may require humans to be technologically-mediated or enhanced to sustain the identity. On the other hand, because the identities depend on technology for their presence in the world, they may, in fact, be anti-human human-identities. In other words, though they are identities which humans can pursue through the mediation of technology, the pursuit of such identities may be detrimental to the humans who pursue them.

The illusory nature of social media has already been well documented. Our book Social Epistemology & Technology has a large pool of references to peruse, for anyone who is interested. Often the content of a person’s social media is referred to as a “highlight reel,” in that it misrepresents the reality of the person’s actual existence. This, in itself, is no surprise. However, the effects of social media, despite common knowledge of its illusory nature, are also well documented. These range from the depression, jealousy, and anxiety experienced by those who frequently spend time on social media to the now essentially commonplace association of social media with relationship infidelity. One of the ways to characterize what is happening is in terms of “technologically-mediated identity.”

In other words, social media—as a technology that allows one to use it to mediate relations to others—motivates viewers by presenting illusions. This can be seen in the presentation of identities which are illusory by being “highlight reels” or by simply allowing for greater amounts of deception. The operative distinction here would be analogous to the one between lying and lying by omission. Most certainly some people intentionally misrepresent themselves on social media; however, insofar as social media is by nature a kind of “highlight reel,” it is like a lie by omission. This illustrates the notion of “technologically-mediated identity,” then, in that social media, as a kind of technological mediation, allows for the presentation of illusory identities. These identities, of course, motivate in multiple ways. Yet, just as they cannot portray the substance of an actual human existence, it is as if they entice viewers to adopt impossible identities.

Thus, the issue is not, and should not be presented as, between technologically-mediated identity and “natural” identity. Too many rhetorical options arise regarding the word “natural” to keep the water from muddying. Rather, the issue should be framed in terms of “inauthenticity” and the actual impossibility of recreating a “highlight reel” existence which does not include the technologically-suppressed “non-highlight reel” aspects of human life. This, of course, does not stop humans from pursuing technologically-mediated identities. What “inauthenticity” means philosophically here is that the pursuit of illusory or impossible identities is tantamount to suppressing the actual potentials (as opposed to virtual potentials) which one’s existence can actualize. This can be understood as de-personalizing and even de-humanizing individuals who insert their selves into the matrix of virtual potentialities, thereby putting their actual potentials in the service of actualizing an identity impossible to actualize. For a full philosophical discussion of the de-personalizing and de-humanizing effects of technological mediation, see our book Social Epistemology & Technology.

§3 What are the ethical aspects involved in the technological-mediation of relations to one’s self and others?

The idea here is quite straightforward. Humans form habits, and the force of habit influences the quality of human experiences and future choices. Because the use of technology is neither immediately life-sustaining nor immediately expressive of a human function, technological mediation can be a part of habits that are formed; however, technological mediation is not an original force which can be shaped by habit for the sake of human excellence. Technological mediation can shape and constitute the relation between an original force, e.g. attraction, hunger, or empathy, and that to which the original force relates, yet in doing so, its relation to the original force can only be parasitic. That is to say, it cannot uproot the original force without eradicating what the thing is to which the original force belongs. For instance, we are not talking about using a pacemaker to keep someone’s heart pumping; we are talking about making it so a human would no longer need a beating heart. Such a technological alteration would raise questions such as: At what point is this no longer a human life?

It is by concealing the fact that technological mediation is not an original force that researchers in the service of profit-driven technologies can attempt to articulate technological mediation as virtuous, i.e. capable of constituting human excellence. Thus, Dr. Vallor speaks of the “commercial potential of science and technology” (Vallor 2015). Yet, those who articulate their guiding question as Dr. Vallor has, for example, “What does it mean to have a good life online?”, clearly put the cart before the horse. Life is not “online,” and the term “life” in the phrase “life online” is necessarily a metaphor. However, Dr. Vallor, and those who follow her in committing the fallacy of “misplaced concreteness,” overlook two very important features of ethics and the “good life.” One, life is more primary than the internet, so at best the internet is in the service of life. Two, the “good life” includes criteria which the use of technology can directly undermine.

In order for an actual human to thrive in an actual human life, according to the philosophers of human excellence (e.g. the character ethics of Epicurus, Pyrrho, Aristotle, and Epictetus, to name a few), the human would need to “unplug” and excel at being human, not at clicking on a keyboard or a touch screen or accumulating “likes,” “followers,” “friends,” or “tweets.” As Nietzsche might have put it, were he here today: no matter how popular and rich Stephen Hawking (no disrespect intended) may be, his life does not exemplify human thriving. In fact, even Facebook no longer claims all those people are actually your “Friends,” as research continues to show that it is actually humanly impossible (cf. “Dunbar’s number”) to have as many “friends” as the illusory numbers social media allows users to flaunt. Thus, again, Dr. Vallor misplaces the ethical notion of “flourishing” when she speaks of “Flourishing on Facebook: virtue friendship & new social media” (Vallor 2012).

It is, of course, rather the case that the business of social media thrives by parasitically profiting from primal human forces by providing a platform for their virtual gratification. The best example that comes to mind is the manner in which Facebook initially allowed users to post pictures ostensibly as a benevolent platform for photo sharing. Eventually, however, Facebook claimed the right to use your pictures (since they are technically posted on the Facebook site) for the sake of advertising to your “friends.” The idea here is that the primal human forces of envy and jealousy are much easier to mobilize for the sake of sales if you can show a person what their friends have that they do not.

Therefore, the ethical aspects involved in the technological mediation of relations to one’s self and others indicate that human thriving belongs to human life, not the energy of life channeled through a virtual dimension for the sake of profit. To be sure the “commercial potential of science and technology” (Vallor 2015) is immense; however, the excellent actualization of “commercial potential” is not the excellent actualization of “human potential” which has always characterized “human thriving” according to philosophers in the Western Tradition (cf. Scalambrino 2016). Critics will be tempted to interject the idea of all the potential good money can do for human living conditions. Yet, that is not the topic under discussion; rather, the topic under discussion is the ethics constituting human excellence (“thriving”) regarding self and others.

§4 What is the philosophical issue with “cybernetics”/Why is it considered bad for humanity?

For a more in-depth discussion of this issue see our Social Epistemology & Technology, especially in regard to Martin Heidegger’s discussion of cybernetics. Here are the same scholarly sources I referenced in our book for the sake of explicating the meaning of “cybernetics.”

The idea readers should have in mind is that cybernetics aims at “control.” Before corporations figured out how to use the “Cave Allegory” against academia itself, i.e. by funding and disseminating research whose results would benefit the corporations and by using pseudo-awards and marketing tactics to drown out the research of less well-funded scholars, philosophers of technology seemed unanimous in warning that by increasingly mediating our relation to our lives and environment with technology we would be increasingly losing freedom and placing ourselves under “control.” What was their philosophical justification for such a claim? It boils down to “cybernetics.” In today’s terms we might say: as soon as everything is “connected,” you functionally become one of the things in the “internet of things.” Habitually functioning in such a way is not only inauthentic, it is also a kind of self-alienation, quite possibly to the point of de-personalization and even de-humanization (cf. Scalambrino 2015b).

Here are the quotations from our book: First,

Historically, cybernetics originated in a synthesis of control theory and statistical information theory in the aftermath of the Second World War, its primary objective being to understand intelligent behavior in both animals and machines (Johnston 2008, 25-6; cf. Dechert 1966; cf. Jonas 1953).

According to Norbert Wiener’s Cybernetics; or, Control and Communication in the Animal and Machine (1948), “the newer study of automata, whether in the metal or in the flesh, is a branch of communication engineering,” and this involves a “quantity of information and [a] coding technique” (Wiener 1948, 42). Next,

Essentially, cybernetics proposed not only a new conceptualization of the machine in terms of information theory and dynamical systems theory but also an understanding of ‘life,’ or living organisms, as a more complex instance of this conceptualization rather than as a different order of being or ontology [emphasis added] (Johnston 2008, 31).

Again, in An Introduction to Cybernetics, the “unpredictable behavior of an insect that lives in and about a shallow pond, hopping to and fro among water, bank, and pebble, can illustrate a machine in which the state transitions correspond to” a probability-based pattern that can be analyzed statistically (Johnston 2008, 31). In this way, “‘cybernetic’ may refer to a technological understanding of the mechanisms guiding machines and living organisms” (Scalambrino 2015b, 107). The philosophical issue with “cybernetics,” then, (put simply) is that, on the one hand, it seeks to reduce human life to mechanical function, and, on the other hand, it can be exploited to functionally control humans.

§5 What is the philosophical issue with “psychoanalysis” as an applied cybernetics?

Kierkegaard famously said, “Life is not a problem to be solved, but a mystery to be lived.” The basic problem with psychoanalysis as an applied cybernetics, then, is that it treats life like a problem to be solved; however, even more than that, its cybernetic view of human life makes life appear as if it is something that can be controlled. In this way, despite its belief in “the unconscious,” psychoanalysis treats human life as if it were a machine. Thus, the idea that “unconscious influences,” traceable to childhood events, determine our actions undermines our confidence in our own free will.

Adopting the cybernetic view of human nature advocated through psychoanalysis functionalizes the mystery of life into “the unconscious.” It is supposed to be the case that the mysterious, as “unconscious,” can be understood. In this way, the unconscious influences contributing to, or perhaps even constituting, one’s “problem” can be revealed, and the revelation of these unconscious influences thereby “solves” the problem. However, as multiple existentialists, including Gabriel Marcel, have pointed out, the “functionalization” of the human being de-personalizes. If the human person is constituted through its choices and its respect for itself as the one who makes those choices, then a psychoanalytic cybernetic view of the human undermines a person’s self-realization. It, of course, does this by suggesting to persons that the freedom of their choosing is a type of illusion.

Finally, psychoanalytic techniques, which Freud developed from hypnotic trance induction, exploit a cybernetic “control theory.” The person receiving psychoanalytic “treatment,” traditionally known as the “analysand,” is initially and immediately placed in what has traditionally been called a “one down” position. This means the analysand is supposed to assume that the analyst has access, whether in terms of knowledge or awareness, to the unconscious of the analysand; and since the analysand is not supposed to understand the unconscious, the analysand is in a “lower” or “one down” position in relation to the analyst from the very inception of analysis. The induction of this “one down” position initiates the cybernetic mechanism of control over the analysand. It is as if the very belief that psychoanalysis can “solve” the problems of one’s life is itself the analysand’s transference of control over to the person of the analyst. Thus, the cybernetic view of human nature advocated through psychoanalysis functionalizes human life; and by persuading an analysand to hand over their freedom—in the form of the belief in one’s own autonomous power of choice—it allows for the very control that psychoanalytic theory claims to wield, thereby providing what seems to be evidence of its own confirmation.

§6 What does it mean to say that social media eclipses reality?

The idea at work here refers back to sections two (2) and three (3) of this article. Simply put, the idea is that technological mediation allows for humans to alter their relations to self and others. However, what researchers often call “interface” issues condition possible illusory understandings of self and others. Popularly, and in the earlier sections, this was invoked by describing the content found on social media as a “highlight reel.” Because humans can develop goals and regulate behaviors in terms of the interface issues of social media, it becomes appropriate to characterize one’s relation to reality as “eclipsed.”

Take, for example, the “highlight reel” aspect of social media discussed above and in our book Social Epistemology & Technology. Of course, this is just one of the interface issues which can “eclipse reality” for social media users. Yet, because it is perhaps the easiest to see, we will discuss it briefly here. The basic idea is that when one sets out to behave in such ways or perform such actions so as to contribute to their “highlight reel,” then one has allowed the means of technological mediation to become an end in itself.

In our book one of the ways we discussed how interface issues eclipse relations to others is in terms of procreation. Again, the existentialist Gabriel Marcel is an excellent source (cf. Marcel 1962). The idea is that one may direct the lives of their children inauthentically or even be motivated to technologically mediate various (“functionalized”) aspects of procreation itself (cf. Scalambrino 2017), being influenced by the presence and power of technological mediation. As existentialists like Marcel warn, however, there is at least a twofold trouble here. First, technologically mediating one’s relation to procreation is cybernetic insofar as it treats procreation as if it were completely functionalizable. Second, the ends toward which one may direct one’s children through technological mediation may, in fact, derive from means such as “interface issues.” In this way philosophical criticisms of technological mediation have gone so far as to suggest one’s relation to reality may be eclipsed.

References

Dechert, Charles R., editor. The Social Impact of Cybernetics. South Bend, IN: University of Notre Dame Press, 1966.

Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. London: The MIT Press, 2008.

Marcel, Gabriel. The Mystery of Being, Volume I: Reflection and Mystery. Translated by G. S. Fraser. South Bend: St. Augustine’s Press, 1950.

Marcel, Gabriel. “The Sacred in the Technological Age.” Theology Today 19 (1962): 27-38.

Scalambrino, Frank. “Futurology in Terms of the Bioethics of Genetic Engineering: Proactionary and Precautionary Attitudes Toward Risk with Existence in the Balance.” In Social Epistemology & Futurology: Future of Future Generations. London: Rowman & Littlefield International, 2017, in press.

Scalambrino, Frank. Introduction to Ethics: A Primer for the Western Tradition. Dubuque, IA: Kendall Hunt, 2016.

Scalambrino, Frank. “Introduction: Publicizing the Social Effects of Technological Mediation.” In Social Epistemology & Technology, edited by Frank Scalambrino, 1-12. London: Rowman & Littlefield International, 2015a.

Scalambrino, Frank. “The Vanishing Subject: Becoming Who You Cybernetically Are.” In Social Epistemology & Technology, edited by Frank Scalambrino, 197-206. London: Rowman & Littlefield International, 2015b.

Vallor, Shannon. “Flourishing on Facebook: Virtue Friendship & New Social Media.” Ethics and Information Technology, 14, no. 3 (2012): 185-199.

Vallor, Shannon. “Shannon Vallor Wins 2015 World Technology Award in Ethics.” 2015. https://www.scu.edu/news-and-events/press-releases/2016/january-2016/shannon-vallor-wins-2015-world-technology-award-in-ethics.html.

Vallor, Shannon. “Review of Social Epistemology and Technology.” Notre Dame Philosophical Reviews: An Electronic Journal, 4 August 2016.

Wiener, Norbert. Cybernetics: Or, Control and Communication in the Animal and the Machine. 2nd ed. London: MIT Press, 1965. First published 1948.

Author Information: Jason M. Pittman, Capitol Technology University, jmpittman@captechu.edu

Pittman, Jason M. “Trust and Transhumanism: An Analysis of the Boundaries of Zero-Knowledge Proof and Technologically Mediated Authentication.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 21-29.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3tZ

Please refer to:

Image credit: PhOtOnQuAnTiQu, via flickr

Abstract

Zero-knowledge proof serves as the fundamental basis for technological concepts of trust. The most familiar applied solution of technological trust is authentication (human-to-machine and machine-to-machine), most typically a simple password scheme. Further, by extension, much of society-generated knowledge presupposes the immutability of such a proof system when ontologically considering (a) the verification of knowing and (b) the amount of knowledge required to know. In this work, I argue that the zero-knowledge proof underlying technological trust may cease to be viable upon realization of partial transhumanism in the form of embedded nanotechnology. Consequently, existing normative social components of knowledge—chiefly, verification and transmission—may be undermined. In response, I offer recommendations on potential society-centric remedies in partial trans-humanistic technologically mediated realities with the goal of preserving technological trust.

Password-based authentication features prominently in daily life. For many of us, authentication is a ritual repeated many times on any given day as we enter a username and password into various computing systems. In fact, research (Florêncio & Herley, 2007; Sasse, Steves, Krol, & Chisnell, 2014) has revealed that we, on average, enter approximately eight different username and password combinations as many as 23 times a day. Computing systems authenticate to one another even more frequently. Simply put, authentication is normative in modern, technologically mediated life.

Indeed, authentication has been the normal modality of establishing trust within the context of technology (and, by extension, technology mediated knowledge) for several decades. Over the course of these decades, researchers have uncovered a myriad of flaws in specific manifestations of authentication—weak algorithms, buggy software, or even psychological and cognitive limits of the human mind. Upon closer inspection, one can surmise that the philosophy associated with passwords has not changed. Authentication continues to operate on the fundamental paradigm of a secret, a knowledge-prover, and a knowledge-verifier. The epistemology related to password-based authentication—how the prover establishes possession of the secret such that the verifier can trust the prover without the prover revealing the secret—presents a future problem.
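Before moving on, it may help to render the secret/prover/verifier paradigm concretely. The sketch below, with illustrative names and parameters of my own choosing rather than anything from the article, shows the triangle as it appears in an ordinary salted-hash password scheme; note that in this naive form the prover must re-present the secret itself at every login, precisely the disclosure that the zero-knowledge proofs discussed later are designed to avoid.

```python
# Minimal sketch of the secret / prover / verifier paradigm as a
# salted-hash password scheme; illustrative, not a production design.
import hashlib, hmac, os

def enroll(password: str):
    """Verifier stores a salted hash of the secret, not the secret."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(attempt: str, salt: bytes, digest: bytes) -> bool:
    """Prover re-presents the secret; verifier recomputes and compares."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("wrong guess", salt, digest)
```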

A Partial Transhuman Reality

While some may consider transhumanism to be the province of science fiction, others such as Kurzweil (2005) argue that the merging of Man and Machine has already begun. Of notable interest in this work is partial-transhumanist nanotechnology or, in simple terms, the embedding of microscopic computing systems in our bodies. Such nanotechnology need not be fully autonomous but typically does include some computational sensing ability. The most advanced examples are the nanomachines used in medicine (Verma, Vijaysingh, & Kushwaha, 2016). Nevertheless, such nanotechnology represents the blueprint for rapid advancement. In fact, research is well underway on using nanomachines (or nanites) for enhanced cognitive computations (Fukushima, 2016).

At the crossroads of partial transhumanism (nanotechnology) and authentication there appears to be a deeper problem. In short, partial-transhumanism may obviate the capacity for a verifier to trust whether a prover, in truth, possesses a secret. Should a verifier not be able to trust a prover, the entirety of authentication may collapse.

Much research exists investigating the mathematical, psychological, and technological bases for authentication, but there has been little philosophical exploration of it. Work such as that of Qureshi, Younus, and Khan (2009) developed a general philosophical overview of password-based authentication but largely focused on developing a philosophical taxonomy to overlay modern password technology. The literature extending Qureshi et al. builds exclusively upon the strictly technical side of password-based authentication, ignoring the philosophical.

Accordingly, the purpose of this work is to describe the concepts directly linked to modern technological trust in authentication and demonstrate how, in a partial transhumanist reality, the concepts of zero-knowledge proof may cease to be viable. Towards this end, I will describe the conceptual framework underlying the operational theme of this work. Then, I explore the abstraction of technological trust as such relates to understanding proof of knowledge. This understanding of where trust fits into normative social epistemology will inform the subsequent description of the problem space. After that, I move on to describe the conceptual architecture of zero-knowledge proofs which serve as the pillars of modern authentication and how transhumanism may adversely impact such. Finally, I will present recommendations on possible society-centric remedies in both partial trans-humanistic as well as full trans-humanistic technologically mediated realities with the goal of preserving technological trust.

Conceptual Framework

Establishing a conceptual framework before delving too far into building the case for trust ceasing to be viable in a partial transhumanist reality will permit a deeper understanding of the issue at hand. Such a frame of reference must necessarily include a discussion of how technology inherently mediates our relationship with other humans and technologies. Put another way, technologies are unmistakably involved in human subjectivity while human subjectivity forms the concept of technology (Kiran & Verbeek, 2010). This presupposes a grasp of the technological abstraction, though.

Broadly, technology in the context of this work is taken to mean qualitative (abstract) applied science as opposed to practical or quantitative applied science. This definition accords closely with recent discussions of technology by Scalambrino (2016) and with the body of work by Heidegger and Plato. In other words, technology should be understood as those modalities that facilitate progress relative to socially beneficial objectives. Specifically, we are concerned with the knowledge modality as opposed to discrete mechanisms, objects, or devices.

What is more, the adjoining of technology, society, and knowledge is a critical element in the conceptual framework for this work. Technology is no longer a single-use, individualized object. Instead, technology is a social arbiter that has grown to be innate to what Ihde (1990) related as a normative human gestalt. While this view contrasts with views such as that offered by Feenberg (1999), the two are not necessarily exclusive.

Further, we must establish the component of our conceptual framework that evidences what it means to verify knowledge. One approach is a scientific model that procedurally quantifies knowledge within a predefined structure. Given the technological nature of this work, such may be inescapable, at least as a cognitive bias. More abstractly though, verification of knowledge is conducted by inference, whether by the individual or across social collectives. The mechanism of inference, in turn, can be expressed in proof. Similarly, another component in our conceptual framework corresponds to the amount of knowledge necessary to demonstrate knowing. As I discuss later, the amount of knowing is either full or limited: that is, proof with knowledge or proof without knowledge.

Technological Trust

The connection between knowledge and trust has a strong history of debate in the social epistemic context. This work is not intended to directly add to the debate surrounding trust. However, recognition of the debate is necessary to develop the bridge connecting trust and zero-knowledge proofs before moving onto zero-knowledge proof and authentication. Further, conceptualizing technological trust permits the construction of a foundation for the central proposition in this work.

To the point, Simon (2013) argued that knowledge relies on trust. McCraw (2015) extended this claim by establishing four components of epistemic trust: belief, communication, reliance, and confidence. These components are further grouped into epistemic (belief and communication) as well as trust (reliance and confidence) conditionals (2015). Trust, in this context, exemplifies the social aspect of knowledge insofar as we do not directly experience trust but hold trust as valid because of the collective position of validity.

Furthermore, Simmel (1978) perceived trust to be integral to society. That is, trust, as a knowledge construct, exists in many disciplines and, per Origgi (2004), permeates our cognitive existence. Additionally, there is an argument to be made that, by using technology, we implicitly place trust in such technology (Kiran & Verbeek, 2010). Nonetheless, trust we do.

Certainly, part of such trust is due to the mediation provided by our ubiquitous technology. As well, trust in technology and trust from technology are integral functions of modern social perspectives. On the other hand, we must be cautious in understanding the conditions that lead to technological trust. Work by Ihde (1979; 1990) and others has suggested that technological trust stems from our relation to the technology. Perhaps closer to transhumanism, Levy (1998) offered that such trust is more associated with technology that extends us.

Technology that extends human capacity is a principal abstraction. As well, concomitant to technological trust is knowledge. While the conceptual framework for this work includes verification of knowledge as well as the amount of knowledge necessary to evidence knowing, there is a need to include knowledge proofs in the discourse.

Zero-Knowledge Proof

Proof of knowledge is a logical extension of the discussion of trust. Where trust can be thought as the mechanism through which we allow technology to mediate reality, proof of knowledge is how we come to trust specific forms of technology. In turn, proof of knowledge—specifically, zero-knowledge proof—provides a foundation for trust in technological mediation in the general case and technological authentication in the specific case.

The Nature of Proof

The construct of proof may adopt different meanings depending upon the enveloping context. In the context of this work, we use the operational meaning provided by Pagin (1994). In other words, a proof is established during the process of validating the correctness of a proposition. Furthermore, for any proof to be perceived as valid, it must demonstrate elements of completeness and soundness (Pagin, 1994; 2009).

There is, of course, a larger discourse on the epistemic constraints of proof (Pagin, 1994; Williamson, 2002; Marton, 2006). That discourse lies outside the scope of this work, however, as we are concerned not with whether proof can be offered for knowledge but with how proof occurs. In other words, we are interested in the mechanism of proof. Thus, for our purposes, we presuppose that proof of knowledge is possible and that it proceeds through two possible operations: proof with knowledge and proof without knowledge.

Proof with Knowledge

A consequence of a typical proof system is that all involved parties gain knowledge. That is, if I know that x exists in a specific truth condition, I must present all relevant premises so that you can reach the same conclusion. Thus, not only is the proposition equally true or false for us both, but the means of establishing its truth or falsehood is transparent. This is what can be referred to as proof with knowledge.

In most scenarios, proof with knowledge is a positive mechanism. That is, the parties involved mutually benefit from the outcome. Mathematics and logic are primary examples of this proof state. However, in the case of technological trust in the form of authentication, proof with knowledge is not desirable: the verifier should not end up holding the prover's secret.
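
To make the contrast concrete, the following is a minimal sketch (my toy illustration, not an example drawn from the cited literature) of proof with knowledge: verification succeeds precisely because the secret itself changes hands.

```python
# Toy "proof with knowledge": the prover discloses the secret itself,
# so the verifier (and any eavesdropper) ends up knowing it too.

SECRET = "s3cret"  # hypothetical secret initially known only to the prover

def prove_with_knowledge() -> str:
    # The prover hands over every premise, here the secret outright.
    return SECRET

def verify(disclosed: str) -> bool:
    # Verification works, but only by transferring the knowledge.
    return disclosed == SECRET

assert verify(prove_with_knowledge())
```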

Proof Without Knowledge

Imagine that you know that p is true. Further, you wish to demonstrate to me that you know this without revealing how you came to know it or what exactly it is that you know. In other words, you wish to keep some aspect of the knowledge secret. I must validate that you know p without gaining any knowledge myself. This is the second state of proof, known as zero-knowledge proof, and it forms the basis for technological trust in the form of authentication.

Goldwasser, Micali, and Rackoff (1989) defined zero-knowledge proofs as a formal, systematic approach to validating the correctness of a proposition without communicating additional knowledge. Additional, in this context, can be taken to mean knowledge other than the proposition itself. An important aspect is that the proposition originates with a verifier entity as opposed to a prover entity. In response to the proposition to be proven, the prover completes an action without revealing any knowledge to the verifier other than the knowledge that the action was completed. If the proposition is shown to be true with sufficient probability, the verifier is satisfied. Note that the verifier and prover entities can stand in machine-to-human, human-to-human, or machine-to-machine relations.
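
As one concrete instance of this interactive pattern, consider the Schnorr identification protocol, a standard proof of knowledge of a discrete logarithm that is zero-knowledge against an honest verifier. The sketch below is illustrative only: the group parameters (p = 23, q = 11, g = 2) are my deliberately tiny choices, far too small to be secure.

```python
import random

# Minimal sketch of an interactive proof in the spirit of Goldwasser,
# Micali, and Rackoff: the Schnorr identification protocol over a toy
# group. Parameters are illustrative only and far too small for use.

p, q, g = 23, 11, 2          # g generates a subgroup of order q in Z_p*
x = 7                        # prover's secret exponent
y = pow(g, x, p)             # public key: y = g^x mod p

def prover_commit():
    r = random.randrange(q)
    return r, pow(g, r, p)   # commitment t = g^r mod p

def prover_respond(r: int, c: int) -> int:
    return (r + c * x) % q   # response; reveals nothing about x by itself

def verifier_check(t: int, c: int, s: int) -> bool:
    # The verifier learns only that the prover could answer the challenge:
    # g^s == t * y^c (mod p) holds when s was formed using the secret x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

r, t = prover_commit()       # 1. prover commits
c = random.randrange(q)      # 2. verifier issues a random challenge
s = prover_respond(r, c)     # 3. prover responds using the secret
assert verifier_check(t, c, s)
```

A prover without the secret can answer a random challenge only by luck, so larger groups and repeated or larger challenges drive the probability of a successful cheat toward zero.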

Zero-knowledge proofs are the core of technological trust and, accordingly, of authentication. While discrete instances of authentication exist, practically speaking, outside of the social epistemic purview, the broader theory of authentication is in fact a socially collective phenomenon. That is, even in the abstract, authentication is a specific case of technologically mediated trust.

Authentication

The zero-knowledge proof abstraction translates directly into modern authentication modalities. In general, authentication involves a verifier issuing a request to prove knowledge and a prover demonstrating knowledge of a secret to the verifier. Thus, the ability to provide such proof in a manner consistent with the verifier's request is technologically sufficient to authenticate (Syverson & Cervesato, 2001). However, there are subtleties within the authentication zero-knowledge proof that warrant discussion.

Authentication, or being authenticated, implies two technologically mediated realities. First, the authentication process relies upon the authenticating entity (i.e., the prover) possessing a secret exclusively. The mediated reality for both the verifier and the prover is that to be authenticated implies an identity. In simple terms, I am who I claim to be based on (a) exclusive possession of the secret; and (b) the ability to sufficiently demonstrate such through the zero-knowledge proof to the verifier. Likewise, the verifier is identified to the prover.

Second, authentication establishes a general right of access for the prover based on, again, exclusive possession of a secret. Consequently, there is a technological mediation of what objects are available to the prover once authenticated (i.e., all authorized objects) or not authenticated (i.e., no objects). Thus, the zero-knowledge proof is a mechanism for associating the prover's identity with a set of objects in the world and facilitating access to those objects. That is to say, once authenticated, the identity has operational control over the linked objects within the corresponding space.
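
In deployed systems this abstraction is commonly approximated by challenge-response authentication. The sketch below assumes a pre-shared secret (a simplification: here, unlike a strict zero-knowledge proof, the verifier also holds the secret); what it preserves is the key property that the secret itself never crosses the channel.

```python
import hmac
import hashlib
import secrets

# Hypothetical challenge-response exchange: the prover demonstrates
# possession of a shared secret without ever transmitting it.

SECRET = b"pre-shared-secret"  # assumed provisioned out of band

def verifier_challenge() -> bytes:
    return secrets.token_bytes(16)  # fresh nonce defeats replay attacks

def prover_response(challenge: bytes) -> bytes:
    # The response binds the secret to this specific challenge.
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

nonce = verifier_challenge()
assert verifier_check(nonce, prover_response(nonce))
```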

Normatively, authentication is a socially collective phenomenon, despite individual authentication relying upon an exclusive zero-knowledge proof (Van Der Meyden & Wilke, 2007). Principally, authentication is a means of interacting with other humans, technology, and society at large while maintaining trust. However, if authentication is a manifestation of technological trust, one must wonder whether transhumanism may affect the zero-knowledge proof abstraction.

Transhumanism

More (1990) described transhumanism as a philosophy that embraces the profound changes to society and the individual brought about by science and technology. There is strong debate as to when such change will occur, although most futurists argue that technology has already begun to cross the threshold into explosive growth. Technology in this context aligns with the conceptual framework of this work. As well, there is agreement in the philosophical literature that such technological expansion is underway (Bostrom, 1998; More, 2013).

Furthermore, transhumanism exists in two forms: partial transhumanism and full transhumanism (Kurzweil, 2005). This work is concerned exclusively with partial transhumanism, which is inclusive of three modalities. According to Kurzweil (2005), these modalities are (a) technology sufficient to manipulate human life genetically; (b) nanotechnology; and (c) robotics. In the context of this work, I am interested in the potentiality of nanotechnology.

Briefly, nanotechnology exists in several forms. The form central to this work involves embedding microscopic machines within human biology. These machines can perform any number of operations, including augmenting existing bodily systems. Along these lines, Vinge (1993) argued that a by-product of technological expansion will be a monumental increase in human intelligence. Although there are a variety of mechanisms by which technology could amplify raw brainpower, nanotechnology is a forerunner in the minds of Kurzweil and others.

What is more, the computational power of nanites is measurable and predictable (Chau et al., 2005; Bhore, 2016). The human intellectual capacity projected to result from nanotechnology may be sufficient to impart hyper-cognitive or even extrasensory abilities. With such augmentation, the human mind would be capable of computational decision-making well beyond existing technology.

While the notion of nanites embedded in our bodies, augmenting various biomechanical systems to the point of precognitive awareness of zero-knowledge proof verification, may strike some as science fiction, there is growing precedent. Existing research in the field of medicine demonstrates that at least partially autonomous nanites have a grounding in reality (Huilgol & Hede, 2006; Das et al., 2007; Murday, Siegel, Stein, & Wright, 2009). Thus, it is not difficult to envision a near future in which more powerful and autonomous nanites are available.

Technological Trust in Authentication

The purpose of this work was to describe technological trust in authentication and to demonstrate how, in a future partial-transhumanist reality, the concept of zero-knowledge proof will cease to be viable. Towards that end, I examined technological trust in the context of how and why such trust is established. Further, knowledge proofs were discussed with an emphasis on proofs without knowledge. This led to an overview of authentication and, subsequently, transhumanism.

Based on the analysis so far, the technological trust afforded by such proof appears no longer feasible once embedded nanotechnology is introduced into humans. Nanite-augmented cognition will give a knowledge-prover the capability to compute, on demand, knowledge sufficient to convince a knowledge-verifier. Outright, such a reality breaks the latent assumptions that operationalize the conceptual framework in the related technology. That is, once the knowledge-verifier cannot trust that the knowledge was already possessed by the prover rather than computed on the spot, a significant future problem arises.
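
A toy way to see the worry (my gloss, not a result from the cited literature): in the miniature Schnorr group sketched earlier, a prover who never held the secret can compute it from public data alone by exhaustive search and then pass the protocol honestly. The argument of this work is that nanite-scale cognition threatens to do the same at parameter sizes we currently treat as safe.

```python
# Hypothetical nanite-scale prover: derive the "secret" on demand
# from public data alone, then authenticate honestly with it.

def derive_secret(g: int, y: int, p: int, q: int) -> int:
    # Brute-force the discrete log; feasible here only because the
    # toy group is tiny. The worry is that augmented cognition makes
    # realistic parameter sizes similarly searchable.
    for candidate in range(q):
        if pow(g, candidate, p) == y:
            return candidate  # knowledge computed, never possessed
    raise ValueError("no discrete log found")

# Public values from the toy Schnorr sketch above: y = g^x mod p.
assert derive_secret(g=2, y=13, p=23, q=11) == 7
```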

Unfortunately, the fields of computer science and computer engineering have not historically planned well for paradigm-shifting innovations. The problem is exacerbated when the paradigm shift has a rapid onset after a long ramp-up time, as is the case with the technological singularity. More specifically, partial transhumanism as considered in this work may have unforeseen effects beyond the scope of the fields that created the technology in the first place. The inability to handle rapid shifts is largely related to these fields posing what-is questions rather than what-ought questions.

Similarly, the Collingridge dilemma tells us that, "…the social consequences of a technology cannot be predicted early in the life of the technology" (1980, p. 11). Thus, adequate preparation for the eventual collapse of zero-knowledge proof requires asking what ought to be, and that is a philosophical question. As it stands, social epistemology is already recognized as an interdisciplinary field (Froehlich, 1989; Fuller, 2005; Zins, 2006). More still, there is precedent for philosophy informing the science of technology (Scalambrino, 2016) and assembling the foundation of future-looking paradigm shifts.

Accordingly, one recommendation is for social epistemologists and technologists to jointly examine modifications to the abstract zero-knowledge proof such that the proof is resilient to nanite-powered knowledge computation. In conjunction, there may be benefit in attempting to conceive of a replacement proof system that also harnesses partial transhumanism for the knowledge-verifier, in a manner commensurate with any increase in capacity for the knowledge-prover. Lastly, a joint effort may be able to envision a technologically mediated construct that does not require proof without knowledge at all.

References

Bhore, Pratik Rajan. “A Survey of Nanorobotics Technology.” International Journal of Computer Science & Engineering Technology 7, no. 9 (2016): 415-422.

Bostrom, Nick. “Predictions from Philosophy? How Philosophers Could Make Themselves Useful.” 1998. http://www.nickbostrom.com/old/predict.html

Chau, Robert, Suman Datta, Mark Doczy, Brian Doyle, Ben Jin, Jack Kavalieros, Amlan Majumdar, Matthew Metz and Marko Radosavljevic. “Benchmarking Nanotechnology for High-Performance and Low-Power Logic Transistor Applications.” IEEE Transactions on Nanotechnology 4, no. 2 (2005): 153-158.

Collingridge, David. The Social Control of Technology. New York: St. Martin’s Press, 1980.

Das, Shamik, Alexander J. Gates, Hassen A. Abdu, Garrett S. Rose, Carl A. Picconatto, and James C. Ellenbogen. “Designs for Ultra-Tiny, Special-Purpose Nanoelectronic Circuits.” IEEE Transactions on Circuits and Systems I: Regular Papers 54, no. 11 (2007): 2528–2540.

Feenberg, Andrew. Questioning Technology. London: Routledge, 1999.

Florencio, Dinei and Cormac Herley. “A Large-Scale Study of Web Password Habits.” In WWW ’07: Proceedings of the 16th International Conference on World Wide Web, 657-666. New York: ACM, 2007.

Froehlich, Thomas J. “The Foundations of Information Science in Social Epistemology.” In Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences, Vol. IV: Emerging Technologies and Applications Track (1989): 306-314.

Fukushima, Masato. “Blade Runner and Memory Devices: Reconsidering the Interrelations between the Body, Technology, and Enhancement.” East Asian Science, Technology and Society 10, no. 1 (2016): 73-91.

Fuller, Steve. “Social Epistemology: Preserving the Integrity of Knowledge About Knowledge.” In Handbook on the Knowledge Economy, edited by David Rooney, Greg Hearn and Abraham Ninan, 67-79. Cheltenham, UK: Edward Elgar, 2005.

Goldwasser, Shafi, Silvio M. Micali and Charles Rackoff. “The Knowledge Complexity of Interactive Proof Systems.” SIAM Journal on Computing 18, no. 1 (1989): 186-208.

Huilgol, Nagraj and Shantesh Hede. “ ‘Nano’: The New Nemesis of Cancer.” Journal of Cancer Research and Therapeutics 2, no. 4 (2006): 186–95.

Ihde, Don. Technics and Praxis. Dordrecht: Reidel, 1979.

Ihde, Don. Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press, 1990.

Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York: Penguin Books, 2005.

Lévy, Pierre. Becoming Virtual: Reality in the Digital Age. New York: Plenum Trade, 1998.

Marton, Peter. “Verificationists Versus Realists: The Battle Over Knowability.” Synthese 151, no. 1 (2006): 81-98.

More, Max. “Transhumanism: Towards a Futurist Philosophy.” Extropy 6 (1990): 6-12.

More, Max. “The Philosophy of Transhumanism.” In The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, edited by Max More and Natasha Vita-More. Oxford: John Wiley & Sons, 2013. doi:10.1002/9781118555927.ch1

Murday, J. S., R. W. Siegel, J. Stein, and J. F. Wright. “Translational Nanomedicine: Status Assessment and Opportunities.” Nanomedicine: Nanotechnology, Biology and Medicine 5, no. 3 (2009): 251-273. doi:10.1016/j.nano.2009.06.001

Origgi, Gloria. “Is Trust an Epistemological Notion?” Episteme 1, no. 1 (2004): 61-72.

Pagin, Peter. “Knowledge of Proofs.” Topoi 13, no. 2 (1994): 93-100.

Pagin, Peter. “Compositionality, Understanding, and Proofs.” Mind 118, no. 471 (2009): 713-737.

Qureshi, M. Atif, Arjumand Younus and Arslan Ahmed Khan. “Philosophical Survey of Passwords.” International Journal of Computer Science Issues 1 (2009): 8-12.

Sasse, M. Angela, Michelle Steves, Kat Krol, and Dana Chisnell. “The Great Authentication Fatigue – And How To Overcome It.” In Cross-Cultural Design: 6th International Conference, CCD 2014, Held as Part of HCI International 2014, Heraklion, Crete, Greece, June 22-27, 2014, Proceedings, edited by P. L. P. Rau, 228-239. Cham: Springer International Publishing, 2014.

Scalambrino, Frank. Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation. London; New York: Rowman & Littlefield International, 2016.

Simmel, Georg. The Philosophy of Money. London: Routledge and Kegan Paul, 1978.

Simon, Judith. “Trust, Knowledge and Responsibility in Socio-Technical Systems.” University of Vienna and Karlsruhe Institute of Technology, 2013. https://www.iiia.csic.es/en/seminary/trust-knowledge-and-responsibility-socio-technical-systems

Syverson, Paul and Iliano Cervesato. “The Logic of Authentication Protocols.” In Foundations of Security Analysis and Design: Tutorial Lectures (FOSAD 2000), 63-136. London: Springer-Verlag, 2001.

Van Der Meyden, Ron and Thomas Wilke. “Preservation of Epistemic Properties in Security Protocol Implementations.” In Proceedings of the 11th Conference on Theoretical Aspects of Rationality and Knowledge (2007): 212-221.

Verma, S., K. Vijaysingh and R. Kushwaha. “Nanotechnology: A Review.” In Proceedings of the Emerging Trends in Engineering & Management for Sustainable Development, Jaipur, India, 19-20 February 2016.

Vinge, Vernor. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, NASA Conference Publication 10129 (1993): 11-22.

Williamson, Timothy. Knowledge and its Limits. Oxford: Oxford University Press, 2002.

Zins, Chaim. “Redefining Information Science: From ‘Information Science’ to ‘Knowledge Science’.” Journal of Documentation 62, no. 4 (2006): 447-461.