Author Information: Adam Riggio, Royal Crown College, firstname.lastname@example.org.
Riggio, Adam. “Humanity’s Halting Problem.” Social Epistemology Review and Reply Collective 7, no. 9 (2018): 45-52.
The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-40X
Brett Frischmann and Evan Selinger have written Re-Engineering Humanity as a sustained and multifaceted critique of how contemporary trends in internet technology are slowly but surely shrinking the territory of human autonomy. Their work is as much a warning as a description of how internet technologies that ostensibly make our lives easier do so by taking control of our lives away from our self-conscious decision-making.
The second of the four major parts of Re-Engineering Humanity describes the core elements of these control mechanisms: the engineering of our social and physical surroundings through the internet of things and smart environments, the extension of our thinking processes into smartphone technology, the monitoring and feedback of our behaviour through surveillance and data mining, and the reduction of the legal contracts that bind us to corporate internet monopolies to an unthinking step in activating computer applications.
Frischmann and Selinger discuss more topics in Re-Engineering Humanity than a reasonably concise review can cover. I will focus on what I think are the most important elements of the book, those that illustrate both its shortcomings and its revolutionary potential. I will address their core theme that contemporary trends in internet technology’s development constitute social engineering in the interests of oligarchic corporate elites, the insights and missed opportunities of their complex critique of click-to-contract legal obligations, and the problems in how they conceive of the creeping omnipresence of social engineering technology.
Mass Social Engineering in Private Interests
Frischmann and Selinger address an important problem for our contemporary society in Re-Engineering Humanity: how the corporate priorities and unquestioned presumptions of Silicon Valley business culture are subtly and profoundly transforming human habits and intuitions. Their position is far from the knee-jerk techno-phobia that is a sadly common reaction to the omnipresence of internet technology in our lives.
They instead offer a more nuanced and complex attitude of skepticism to the current tendencies of internet technology. One of the earliest promises of the internet was that it offered a communication infrastructure that was free of corporate control. Most media in the twentieth century was corporate-controlled in all meaningful aspects. Major corporations created the content (television shows, films, mass-market magazines), and owned the physical networks that distributed that content (broadcast, cable and satellite television; printing facilities).
Where corporations did not control media infrastructure and content creation, governments did. Ostensibly, government ownership was the same as public ownership, but any reasonable research on the history and politics of state communications media will show how they were most often deployed against the liberation and freedom of the public. Even when government-owned media such as Britain’s BBC and Canada’s CBC are institutionally held at arm’s length from partisan control, pressure to enforce cultural conservatism and maintain a status quo oppressive to many groups, such as the working classes or Indigenous peoples, remains.
The decentralized nature of internet communication originally held great promise as a revolutionary medium, able to disrupt monopolistic corporate or government control of media. However, the contemporary giants of online media have since betrayed this promise in more thorough and chilling ways than any merely propagandist state broadcaster could ever manage.
Efficiency: A Dangerous Morality
The problem lies in the culture of Silicon Valley, whose morality centres efficiency as the highest good. If a technology increases the efficiency of some activity, then that technology is good. This moral orientation fails to account for, or even conceive of, whether some inefficiency actually improves the quality of a particular process, or whether the means to achieve greater efficiency erodes a person’s autonomy or control over the direction of her life.
An everyday example they cite is the growing ubiquity of GPS as a means to navigate. GPS is useful in reaching a destination efficiently, particularly if you are in an unfamiliar environment, like a foreign city you are visiting for the first time, or even an unfamiliar neighbourhood of your own home town.
But its ubiquity causes declines in personal knowledge and responsibility for one’s actions. Reliance on GPS navigation degrades a person’s ability to develop intuitive knowledge of local landmarks and routes; you listen for GPS directions instead of paying attention to local geography and traffic patterns. Yet the Silicon Valley culture of efficiency as the highest good prefers reliance on GPS because it avoids the errors of learning through experience.
Moreover, the danger of efficiency-centric morality grows when GPS reliance in navigation is combined with other technologies such as self-driving vehicles. You become no longer a driver, but a passenger. This can be a good thing in some circumstances: consider my own relaxing journeys by subway from my home to my workplace, where I read most of Re-Engineering Humanity.
But the morality of efficiency also pressures people to devote more and more of their lives to work for corporate employers or clients. Consider what the workaholic culture of companies like Amazon would do to a workforce commuting by self-driving car. Instead of my relaxing rides to work by public transit (where I can read the news, Re-Engineering Humanity, or more recently Toni Morrison’s Paradise), workers would be writing code, troubleshooting new app designs, or sitting in stressful Slack meetings. The morality of efficiency seeks new technologies that enable people to devote ever more hours of their lives to the service of their corporate employers instead of the cultivation of their own lives, families, and personalities.
When Agreement Sidelines Knowledge
The union of GPS navigation and the ubiquity of self-driving vehicles remains a potential future, which social and political movements may still avert. Consider now an already-ubiquitous means of corporate control over our lives and personalities: the electronic contract. This is an ordinary moment in contemporary experience. You download a new application, and as you are installing it, an end-user licence agreement appears. The only way you can continue using the application that you just downloaded (and for which you possibly even paid) is by selecting the “Agree” option.
As we all know, no one ever reads these contracts. They are long, often displayed in small fonts (an additional impediment for elderly or visually impaired users), and composed in dense legalistic language. It is such a popular joke to speculate on what ridiculous promises we have made without our knowledge, over even the most innocuous applications, that it has been a storyline on South Park.
It is now common sense among the reasonably tech-savvy public that the connection between our knowledge of our legal obligations and our actual legal obligations has grown very tenuous. Parker and Stone’s “Human CentiPad” takes that grim joke to a remarkable absurdity and grotesquerie. But the truth underlying its premise is common knowledge, which can make Frischmann and Selinger’s analyses feel redundant, at least to a reader like me.
Re-Engineering Humanity offers a brilliant phenomenological account of how the click-to-contract mechanism works. It is drawn from psychological studies of human behaviour, autonomic action, and thought during our perception of the click-to-contract event. Their analysis also draws on techniques of phenomenological imaginative description developed in the discipline of philosophy.
What Makes a Solution Fall Short?
Yet this comprehensive study of the contemporary actions and knowledge that click-to-contract encourages and suppresses does not teach us enough about it to develop antidotes. Reading their conclusion, I find their suggested solutions to the problems of click-to-contract remain inadequate.
Not that the more mainstream suggested solutions come close to adequacy themselves. The most notable is simply to find, in a person’s interactions with their applications, ways to force them to read the whole contract. The problem, of course, is that annoyed people will figure out how to skip it anyway, and that knowledge will proliferate through society quickly enough to defeat the kludge.
They discuss another mainstream solution: enshrining all the terms of standard click-to-contract forms in legislation. The problems here are numerous as well. One clear issue is that moving the terms of a contract into the law of the land also moves those terms and obligations out of the immediate consciousness of the people using the applications.
Another likely fatal hurdle is that the land on which such law would apply is limited. There would likely be well over 100 different legal standards for click-to-contract obligations, one for nearly every legislative body on Earth. The technology industry’s leaders would never allow such legal diversity to impede their business.
Frischmann and Selinger’s three suggestions for solving the problems of click-to-contract improve upon these inadequate solutions, but not enough in my view. Their first idea, adding speed bumps to the click-to-contract process, differs merely in degree from the mainstream recommendation of forcing all users to read all contracts in full. Contemporary users of technology – that is, pretty much everyone with access to computer technology of any kind – will resist any attempt to impede their speedy use of programs for their own purposes.
Recognizing the Genuinely Radical
Frischmann and Selinger’s last two proposals, I think, could be extremely effective in restoring autonomy to many of us. However, I take issue with their treatment of these suggestions: they are deeply radical, requiring a total reorientation of the architecture and economy of the internet. Yet because the book focusses largely on critiques and analyses of existing structures and tendencies of human interaction with internet technology, these remarkable ideas receive only a few pages of treatment in its conclusion. They deserve a book of their own, which would be an ambitious work of creative political economy for the internet.
Am I criticizing Re-Engineering Humanity because it falls short of the scope and ambition of Marx’s Capital? Yes, in fact, I am. But I see no reason why any thinker should not seek to write with such scope and ambition, since I can think of nothing significant one would lose in doing so.
These two proposals, enacted together, would explode the entire economic model of the internet as it exists today. Begin with their suggestion to replace our fragmentary app-by-app contracting interactions with a general agreement of reciprocal obligations between individuals and the major internet corporations. The ethics and morality of our mutual obligations with these corporate giants would then be at the forefront of our thoughts in using their products and platforms. As such, popular attitudes to the corporations that control our online spaces would be less likely to become as complacent as they are today.
I do not think Frischmann and Selinger fully understand just how radical their third proposal to replace the click-to-contract procedure is. They propose to end third-party benefits from click-to-contract agreements. This would outlaw data mining, making the business models of such epoch-defining monopoly firms as Facebook, Google, and Palantir effectively illegal. Data mining – the mass collection, analysis, and use of information regarding every online interaction we make – is the most lucrative engine of the online economy.
Working to enact such a proposal would make an enemy of the most powerful people and organizations on Earth today. Combined with the anti-trust efforts underway among activists, as well as social democratic and liberal parties in Europe and North America against the internet giants, such a proposal would constitute a revolution against global oligarchy that would rank in significance with the beginning of the trade union movement.
Missing the Conceptual History
For all the genuine philosophical illumination that Frischmann and Selinger bring to the phenomenon of click-to-contract, there remains an important aspect of this new approach to contractual obligations that their analysis misses. This is the question of why the contract itself is seen as having power despite never being read. The contract has become a means of binding through agreement alone, even though agreement has been divorced from knowledge of what is agreed upon.
In the book’s focus on how Silicon Valley’s customer consent processes have taken advantage of our presumptions about what permissions and freedoms contracts grant a company over its clients, Re-Engineering Humanity does not focus on how those presumptions arose in the first place. This requires a historical view of how the event of agreement to a contract among two or more parties became the definitive standard of economic legitimacy.
Such a historical and philosophical view would require grappling with an enormous body of contemporary work, as well as works from previous eras of the modern Western epoch whose meanings have been debated for centuries. The major Western theorists of the grounding legitimacy of our political institutions have centred that legitimacy in the contract itself. The weight of contracting as a procedure that legitimates institutions and social relations rests on the contributions of such major thinkers as John Locke, Jean-Jacques Rousseau, Edmund Burke, and John Rawls, and of all the theorists and commentators who have followed and expanded their ideas.
This intellectual tradition does not appear in the analyses of Re-Engineering Humanity, and I admit that asking a single book to include a definitive breakdown of one of the central concepts of the last four centuries of Western political philosophy is utterly ridiculous. A complete historical analysis is far from necessary for Frischmann and Selinger’s purposes. Nonetheless, their analysis falls short of understanding why the contract itself is taken to be significant when all its epistemic components have been stripped away. The click-to-contract procedure carries the force of law, but none of the knowledge of obligations that justifies enforcement through the law. A phenomenology of click-to-contract alone leaves the reasons for its absurdity solely to a reader’s intuitions.
An Entire Industry’s Foundational Absurdity
I hope this perversion of our cultural presumptions about the power of contracts strikes you, as a reader, as very strange, possibly self-contradictory, and quite definitely absurd. Understanding the absurdity of our situation is a powerful step in equipping us, as people, members of a society, and internet users, to think adequately about the data-driven technologies that engineer more and more of our daily lives and activities in the interests of corporate leaders and shareholders.
Frischmann and Selinger are very insightful in many of their analyses, but they fall short on this matter of our absurdity because they take it for granted so intuitively that they do not analyze it. At one point, for example, they describe an actual product development proposal from a Silicon Valley company to embed sensors in people’s legs so that a GPS system could steer customers’ bodies, freeing them to respond to work-related emails and phone calls while jogging or otherwise exercising outdoors. If Terry Gilliam were to read of this, he would envy that young startup executive for having thought of such a deranged idea before he did.
But what precisely is so deranged about the willing surrender of our freedom? The desire for one’s own servitude has recurred as an explicit topic of political philosophy since Spinoza. One field that has taken up Spinoza’s question is the political analysis of desire developed in the works of such writers as Gilles Deleuze, Félix Guattari, Ernesto Laclau, Chantal Mouffe, Antonio Negri, and Franco Berardi. The primary arc of their political critique analyzed the social, institutional, and psychological vectors of the development of fascistic communal violence and the subsumption of personality into a universalizing group identity.
Silicon Valley’s business models of how to condition social and psychological desire do not offer such a unifying identity of mass mobilization. Their methods to manage desire are more subtle, omnipresent, and even more effective. Their primary focus is to use tools to make the everyday tasks of life increasingly effortless, and develop as many means as possible to generate income from those tools (through, for example, app sales, advertising, and data analysis client services). The most effective Silicon Valley business models develop new paths of least resistance in the daily lives of their potential customers.
Slippery Slope: A Speechless Image
The greatest success of Re-Engineering Humanity is its detailed explanation of the data-driven business model of Silicon Valley. That business model’s process can be summarized in three steps: 1) Using social and psychological engineering to build new paths of least resistance for the daily lives of customers; 2) expanding that customer base to an entire population through corporate partnerships with governments and schools where consent to data monitoring is often ill-informed; and 3) monetizing as much as possible of those new social processes for the profit of Silicon Valley’s corporate oligarchy.
While the authors intend it to be a foundation for creative resistance to social-psychological engineering in our lives, they primarily conceive of this technological development as “this slippery slope that we are on.” Frischmann and Selinger repeat this exact phrase at least four times in the book’s conclusion, and refer frequently throughout to techno-social engineering as a “slippery slope” to the mechanization of human subjectivity and society.
The phrase “slippery slope” comes with many problems. One is that it names a logical fallacy, among the easiest to understand in any elementary textbook on philosophical logic: it conflates distinct actions into a single action. This is an inadequate way of understanding processes, as it does not account for their practically important internal distinctions.
But the most important problem with “slippery slope” as a way to understand Silicon Valley’s contemporary technological social-psychological engineering goes beyond its logical issues. The term does not inform readers about the actual process of such engineering. The image of a “slippery slope” connotes a falling-forward, a problem where the very terrain of our action is always already of a nature that leaves us stumbling toward an inevitable conclusion.
The Power of Inertia
The term “slippery slope” erases both our capacity to resist processes that could harm us and the intentions of the business oligarchs who are building those techno-social-psychological engineering processes. Frischmann and Selinger would do well to learn from a simple concept, the path of least resistance, which better describes how Silicon Valley engineers our online environment. I have been using the concept throughout this essay, in a sense described in the turn-of-the-century work of ecological and media theorist Paul Virilio.
I conclude with a brief sketch of this concept, a few sentences that can constitute a far better understanding of the creep of technological social engineering. There is a natural inertia in human psychology toward planned action: we do our best to avoid unnecessary complication or stress in the everyday activities of our lives. The major business model of Silicon Valley today is to build applications and tools that use internet connectivity, user-behaviour feedback on massive scales, and the related data analytics to simplify our activities or improve our productivity and physical or mental health.
The new path of least resistance for our daily activities opens, thanks to Silicon Valley social-psychological engineering, through applications, machines, and tools that satisfy the powerful need of our tech industry’s oligarchical class to enrich itself. What’s more, those tools may improve a person’s life in some aspects, while degrading it in others.
I will give two examples: 1) efficiency through data mining requires mass surveillance, which erodes people’s privacy while opening our lives to interventions from hostile powers such as government secret police; and 2) while an autonomous vehicle may save you the trouble of driving, it would open your commute to stressful workplace demands like web conferences or sales calls, in place of the relaxing activities, like gaming or reading, that public transit allows.
There is no slippery slope, but a business model that manipulates our most personal desires to find the least stressful route to achieve our goals. The goal of that manipulation is the further enrichment of a business elite that would control populations of ordinary working people to satisfy their bottomless greed. When we foreground who acts and who benefits in a new technological system of social relationships, we have a better foundation for resistance and victory than the pacifying imagery on which Frischmann and Selinger unfortunately rely throughout Re-Engineering Humanity.
Contact details: email@example.com
Frischmann, Brett, and Evan Selinger. Re-Engineering Humanity. New York: Cambridge University Press, 2018.
Heath, Joseph. Enlightenment 2.0. New York: Harper Collins, 2014.
Parker, Trey, and Matt Stone. “Human CentiPad.” South Park. Los Angeles: Comedy Central / Parker-Stone Studios, 2011.
Virilio, Paul. Open Sky. Translated by Julie Rose. New York: Verso, 1997.
It may seem strange to include corporate security and espionage firm Palantir among the more famously epochal companies of the internet age. But I think of the matter this way: Google and Facebook have shaped the era of the internet that we currently live in, and Palantir appears set to shape the era that is now dawning.