
Author Information: Jim Collier, Virginia Tech,

Collier, James H. “Social Epistemology for the One and the Many: An Essay Review.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 15-40.

Jim Collier’s article “Social Epistemology for the One and the Many” will be published in four parts. The pdf of the article includes all four parts as a single essay, and gives specific page references. Shortlinks:


Part One, Social Epistemology as Fullerism:

Part Two, Impoverishing Critical Engagement:

Part Three, We’re All Californians Now:

Fuller’s recent work has explored the nature of technological utopia.
Image by der bobbel via Flickr / Creative Commons


Third, Remedios and Dusek submit to a form of strict technological determinism as promulgated in the Californian ideology (Barbrook and Cameron 1996), packaged by Ray Kurzweil, and amplified by Fuller in his “trilogy on transhumanism” (vii). Such determinism leaves unexamined the questionable, if not ridiculous, claims made on behalf of transhumanism, generally, and in Fuller’s “own promethean project of transhumanism” (99).

Of Technological Ontology

Missing in the list delineating Fuller’s “extensive familiarity” (10) with an unbelievable array of academic fields and literatures are the history and the philosophy of technology. (As history, philosophy, and “many other fields” make the list, perhaps I am being nitpicky.) Still, I want to highlight, by way of contrast, what I take as a significant oversight in Remedios and Dusek’s account of Fullerism—a refined conception of technology; hence, a capitulation to technological determinism.

Remedios and Dusek do not mention technological determinism. Genetic determinism (69) and Darwinian determinism (75, 77-78) receive brief attention. A glossary entry for “determinism” (143) focuses on Pierre-Simon Laplace’s work. However, the strict technological determinism on which Fullerism stands goes unmentioned. With great assuredness, Remedios and Dusek repeat Ray Kurzweil’s Singularity mantra, with a Fullerian inflection, that: “converging technologies, such as biotechnology, nanotechnology, and computer technology, are transforming and enhancing humanity to humanity 2.0” (33).[1] Kurzweil’s proclamations, and Fuller’s conceptual piggybacking, go absent scrutiny. Unequivocally, a day will come in 2045 when humans—some humans at least—“will be transformed through technology to humanity 2.0, into beings that are Godlike” (94).

The “hard determinism” associated with Jacques Ellul in The Technological Society (1964), and, I argue, with Fuller as relayed by Remedios and Dusek, holds that technology acts as an uncontrollable force independent from social authority. Social organization and action derive from technological effects. Humans have no freedom in choosing the outcome of technological development—technology functions autonomously.

Depending on the relative “hardness” of the technological determinism on offer, we can explain social epistemology, for example, as a system of thought existing for little reason other than aiding a technological end (like achieving humanity 2.0). Specifically, Fuller’s social and academic policies exist to assure a transhuman future. A brief example:

How does the university’s interdisciplinarity linked [sic] to transhumanism? Kurzweil claims that human mind and capacities can be uploaded into computers with increase in computing power [sic]. The problem is integration of those capacities and personal identity. Kurzweil’s Singularity University has not been able to address the problem of integration. Fuller proposes transhumanities promoted by university 2.0 for integration by the transhumanist. (51)

As I understand the passage, universities should develop a new interdisciplinary curriculum (cheekily named the transhumanities), given the forthcoming technological ability to upload human minds to computers. Since the uploading process will occur, we face a problem regarding personal identity (seemingly, how we define or conceive personal identity as uploaded minds). The new curriculum, in a new university system, will speak to issues unresolved by Singularity University—a private think tank and business incubator.[2]

I am unsure how to judge adequately such reasoning, particularly in light of Remedios and Dusek’s definition of agent-oriented epistemology and suspicion of expertise. Ray Kurzweil, in the above passage and throughout the book, gets treated unreservedly as an expert. Moreover, Remedios and Dusek advertise Singularity University as a legitimate institution of higher learning—absent the requisite critical attitude toward the division of intellectual labor (48, 51).[3] Forgiving Remedios and Dusek for the all too human (1.0) sin of inconsistency, we confront the matter of how to get at their discussion of interdisciplinarity and transhumanism.

Utopia in Technology

Remedios and Dusek proceed by evaluating university curricula based on a technologically determined outcome. The problem of individual identity, given that human minds will be uploaded into computers, gets posed as a serious intellectual matter demanding a response from the contemporary academy. Moreover, the proposed transhumanities curriculum gets saddled with deploying outmoded initiatives, like interdisciplinarity, to reconcile new human capacities with customary ideas of personal identity.

University 2.0, then, imagines inquiry into human divinity within a retrograde conceptual framework. This reactive posture results from the ease of accepting what must be. A tributary that leads back to this blithe acceptance of the future comes in the techno-utopianism of the Californian ideology.

The Californian ideology (Barbrook and Cameron 1996) took shape as digital networking technologies developed in Silicon Valley spread throughout the country and the world. Put baldly, the Californian ideology held that digital technologies would be our political liberators; thus, individuals would control their destinies. The emphasis on romantic individualism, and the quest for unifying knowledge, shares great affinity with the tenor of agent-oriented epistemology.

The Californian ideology fuses together numerous elements—entrepreneurialism, libertarianism, individualism, techno-utopianism, technological determinism—into a more or less coherent belief system. The eclecticism of the ideology—the dynamic, dialectical blend of left and right politics, well-heeled supporters, triumphalism, and cultishness—conjures a siren’s call for philosophical relevance hunting, intervention, and mimicry.

I find an interesting parallel in the impulse toward disembodiment by Kurzweil and Fuller, and expressed in John Perry Barlow’s “A Declaration of the Independence of Cyberspace” (1996). Barlow waxes lyrically: “Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge.”

The demigod Prometheus makes appearances throughout Knowing Humanity in the Social World. Remedios and Dusek have Fuller play the rebel trickster and creator. Fuller’s own transhumanist project creates arguments, policies, and philosophical succor that advocate humanity’s desire to ascend to godhood (7, 67). In addition, Fuller’s Promethean task possesses affinities with Russian cosmism (97-99), a project exploring human enhancement, longevity (cryonics), and space travel.[4] Fuller’s efforts result in more or less direct, and grandiose, charges of Gnosticism. Gnosticism, a tangled doctrine, can refer to the Christian heresy of seeking secret knowledge that, in direct association with the divine, allows one to escape the fetters of our lesser material world.

Gnostic Minds

Befitting a trickster, Fuller both accepts and rejects the charge of Gnosticism (102), the adjudication of which seems particularly irrelevant in the determinist framework of transhumanism. A related and distressing sense of pretense pervades Remedios and Dusek’s summary of Gnosticism, and scholastic presentation of such charges against Fuller. Remedios and Dusek do more than hint that such disputations involving Fuller have world historic consequences.

Imitating many futurists, Fuller repeats that “we are entering a new historical phase” (xi) in which our understanding of being human, of being an embodied human particularly, shifts how we perceive protections, benefits, and harms to our existence. This common futurist refrain, wedded to a commonsense observation, becomes transmogrified by the mention of gnosis (and the use of scare quotes):

The more we relativize the material conditions under which a “human” existence can occur, the more we shall also have to relativize our sense of what counts as benefits and harms to that existence. In this respect, Gnosticism is gradually being incorporated into our natural attitude toward the secular world. (xi)

Maybe. More likely, and less heroically, humans regularly reconsider who they are and determine what helps or hurts them absent mystical knowledge in consultation with the divine. As with many of Fuller’s broader claims, and iterations of such claims presented by Remedios and Dusek, I am uncertain how to judge the contention about the rise of Gnosticism as part of being in the world. Such a claim comes across as unsupported, certainly, and self-serving given the argument at hand.

The discussion of Gnosticism raises broader issues of how to understand the place, scope and meaningfulness of the contestations and provocations in which Fuller participates. Remedios and Dusek relay a sense that Fuller’s activities shape important social debates—Kitzmiller being a central example.[5] Still, one might have difficulty locating the playing field where Gnosticism influences general attitudes to matters either profane or sacred. How, too, ought we entertain Fuller’s statements that “Darwinism erodes the motivations of science itself” or “Darwin may not be a true scientist” (71)?

At best, these statements seem merely provocative; at worst, alarmingly incoherent. At first, Remedios and Dusek adjudicate these claims by reminding the reader of Fuller’s “sweeping historical and philosophical account” and “more sophisticated and historically informed version” (71) of creationism. Even when Fuller’s wrong, he’s right.

In this case, we need only accept the ever-widening parameters of Fuller’s historical and philosophical learning, and suspend judgment given the unresolved lessons of his ceaseless dialectic. Remedios and Dusek repeatedly make an appeal to authority (argumentum ad verecundiam) and, in turn, set social epistemology on a decidedly anti-intellectual footing. In part, such footing and uncritical attitude seems necessary to entertain Fuller’s “own promethean project of transhumanism” (99).

Transhuman Dialectic

Fuller’s Promethean efforts aside, transhumanism strives to maintain the social order in the service of power and money. A guiding assumption in the desire to transcend human evolution and embodiment involves who wins, come some form of end time (or “event”), and gets to take their profits with them. Douglas Rushkoff (2018) puts the matter this way:

It’s a reduction of human evolution to a video game that someone wins by finding the escape hatch and then letting a few of his BFFs come along for the ride. Will it be Musk, Bezos, Thiel…Zuckerberg? These billionaires are the presumptive winners of the digital economy — the same survival-of-the-fittest business landscape that’s fueling most of this speculation to begin with.[6]

Fuller’s staging of endless dialectic—his ceaseless provocations (and attendant insincerity), his flamboyant exercises in rehabilitating distasteful and dangerous ideas—drives him to distraction. We need look no further than his misjudgment of transhumanism’s sociality. The contemporary origins of the desire to transcend humanity do not reside with longing to know the mind of god. Those origins reside with Silicon Valley neoliberalism and the rather more profane wish to keep power in heaven as it is on earth.

Fuller’s transhumanism resides with the same type of technological determinism as other transhumanist dialects and Kurzweil’s Singularity. A convergence, in some form, of computers, genetics, nanotechnology, robotics and artificial intelligence leads inevitably to artificial superintelligence. Transhumanism depends on this convergence. Moore’s Law, and Kurzweil’s Law of Accelerating Returns, will out.

This hard determinism renders practically meaningless—aside from fussiness, a slavish devotion to academic productivity, or perverse curiosity—the need for proactionary principles, preparations for human enhancement or alternative forms of existence, or the vindication of divine goodness. Since superintelligence lies on the horizon, what purpose can relitigating the history of eugenics, or enabling human experimentation, serve?[7] Epistemic agents can put aside their agency. Kurzweil asserts that skepticism and caution now threaten “society’s interests” (Pein 2017, 246). Remedios and Dusek portray Fuller as having the same disturbing attitude.

At the end of Knowing Humanity in the Social World, comes a flicker of challenge:

Fuller is totally uncritical about the similarly [sic] of utopian technologists’ and corporate leaders’ positions on artificial intelligence, synthetic biology, and space travel. He assumes computers can replace human investigators and allow the uploading of human thought and personality. However, he never discusses and replies to the technical and philosophical literature that claims there are limits to what is claimed can be achieved toward strong artificial intelligence, or with genetic engineering. (124)

A more well-drawn, critical epistemic agent would begin with normative ‘why’ and ‘how’ questions regarding Fuller’s blind spot and our present understanding of social epistemology. Inattention to technological utopianism and determinism does not strike me as a sufficient explanation—although the gravity of fashioning such grand futurism remains strong—for Fuller’s approach. Of course, the “blind spot” to which I point may be nothing of the sort. We should, then, move out of the way and pacify ourselves by constructing neo-Kantian worlds, while our technological and corporate betters make space for the select to occupy.

The idea of unification, of the ability of the epistemic agent to unify knowledge in terms of their “worldview and purposes,” threads throughout Remedios and Dusek’s book. Based on the book, I cannot resolve social epistemology pre- and post-2000. Agent-oriented epistemology assumes yet another form of determinism. Remedios and Dusek look more than two centuries into our past to locate a philosophical language to speak to our future. Additionally, Remedios and Dusek render social epistemology passive and reliant on the Californian political order. If epistemic unification appears only at the dawn of a technologically determined future, we are automatons—no longer human.


Allow me to return to the question that Remedios and Dusek propose as central to Fuller’s metaphysically oriented, post-2000 work: “What type of being should the knower be” (2)? Another direct (and undoubtedly simplistic) answer—enhanced. Knowers should be technologically enhanced types of beings. The kinds of enhancements on which Remedios and Dusek focus come with the convergence of biotechnology, nanotechnology, and computer technology and, so, humanity 2.0.

Humanity 2.0’s sustaining premise begins with yet another verse in the well-worn siren song of new change, of accelerating change, of inevitable change. It is the call of Silicon Valley hucksters like Ray Kurzweil.[8] One cannot deny that technological change occurs. Still, a more sophisticated theory of technological change, and of the reciprocal relation between technology and agency, seems in order. The hard technological determinism shared by Remedios and Dusek and by Fuller cries out for reductionism. If a technological convergence occurs and super-intelligent computers arise, what purpose, then, in preparing by using humanity 1.0 tools and concepts?

Why would this convergence, and our subsequent disembodied state, not also dictate, or anticipate, even revised ethical categories (ethics 2.0, 109), government programs (welfare state 2.0, 110), and academic institutions (university 2.0, 122)? Such “2.0 thinking,” captive to determinism, would be quaint if not for the very real horrors of endorsing eugenics and human experimentation. The unshakeable assuredness of the technological determinism at the heart of Fuller’s work denies the consequences, if not the risk itself, of the risks epistemic agents “must” take.

In 1988, Steve Fuller asked a different question: How should we organize and pursue knowledge collectively?[9] This question assumes that human beings have cognitive limitations, limitations that might be ameliorated by humans acting in helpful concert to change society and ourselves. As a starting point, befitting the 1980s, Fuller sought answers in “knowledge bearing texts” and an expansive notion of textual technologies and processes. This line of inquiry remains vital. But neither the question, nor social epistemology, belongs solely to Steve Fuller.

Let me return to an additional question. “Is Fuller the super-agent?” (131). In the opening of this essay, I took Remedios’s question as calling back to hyperbole about Fuller in the book’s opening. Fuller does not answer the question directly, but Knowing Humanity in the Social World does—yes, Steve Fuller is the super-agent. While Remedios and Dusek do not yet attribute godlike qualities to Fuller, agent-oriented epistemology is surely created in his image—an image formed, if not anticipated, by academic charisma and bureaucratic rationality.

As the dominant voice and vita in the branch of social epistemology of Remedios and Dusek’s concern, Fuller will likely continue to set the agenda. Still, we might harken back to the more grounded perspective of Jesse Shera (1970) who helped coin the term social epistemology. Shera defines social epistemology as:

The study of knowledge in society. It should provide a framework for the investigation of the entire complex problem of the nature of the intellectual process in society; the study of the ways in which society as a whole achieves a perceptive relation to its total environment. It should lift the study of the intellectual life from that of scrutiny of the individual to an enquiry into the means by which a society, nation, or culture achieve an understanding of stimuli which act upon it … a new synthesis of the interaction between knowledge and social activity, or, if you prefer, social dynamics. (86)

Shera asks a great deal of social epistemology. It is good work for us now. We need not await future gods.

An Editorial Note

Palgrave Macmillan do the text no favors. We too easily live with our complicity—publishing houses, editors, universities, and scholars alike—to think of scholarship only as output—the more, the faster, the better. This material and social environment influences our notions of social epistemology and epistemic agency in significant ways addressed indirectly in this essay. For Remedios and Dusek, the rush to press means that infelicitous phrasing and cosmetic errors run throughout the text. The interview between Remedios and Fuller needs another editorial pass. Finally, the book did not integrate the voices of its co-authors.

Contact details:


Barbrook, Richard and Andy Cameron. “The Californian Ideology.” Science as Culture 6, no. 1 (1996): 44-72.

Barlow, John Perry. “A Declaration of the Independence of Cyberspace.” 1996.

Barron, Colin. “A Strong Distinction Between Humans and Non-humans Is No Longer Required for Research Purposes: A Debate Between Bruno Latour and Steve Fuller.” History of the Human Sciences 16, no. 2 (2003): 77–99.

Clark, William. Academic Charisma and the Origins of the Research University. University of Chicago Press, 2007.

Ellul, Jacques. The Technological Society. Alfred A. Knopf, 1964.

Frankfurt, Harry G. On Bullshit. Princeton University Press, 2005.

Fuller, Steve. Social Epistemology. Bloomington and Indianapolis: Indiana University Press, 1988.

Fuller, Steve. Philosophy, Rhetoric, and the End of Knowledge: The Coming of Science and Technology Studies. Madison, WI: University of Wisconsin Press, 1993.

Fuller, Steve. Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press, 2001.

Fuller, Steve. “The Normative Turn: Counterfactuals and a Philosophical Historiography of Science.” Isis 99, no. 3 (September 2008): 576-584.

Fuller, Steve. “A Response to Michael Crow.” Social Epistemology Review and Reply Collective 25 November 2015.

Fuller, Steve and Luke Robert Mason. “Virtual Futures Podcast #3: Transhumanism and Risk, with Professor Steve Fuller.”  Virtual Futures 16 August 2017.

Grafton, Anthony. “The Nutty Professors: The History of Academic Charisma.” The New Yorker October 26, 2006.

Hinchman, Edward S. Review of Patrick J. Reider (ed.), Social Epistemology and Epistemic Agency: Decentralizing Epistemic Agency. Notre Dame Philosophical Reviews 2 July 2018.

Horgan, John. “Steve Fuller and the Value of Intellectual Provocation.” Scientific American, Cross-Check 27 March 2015.

Horner, Christine. “Humanity 2.0: The Unstoppability of Singularity.” Huffpost 8 June 2017.

Joosse, Paul. “Becoming a God: Max Weber and the Social Construction of Charisma.” Journal of Classical Sociology 14, no. 3 (2014): 266–283.

Kurzweil, Ray. “The Virtual Book Revisited.” The Library Journal 1 February 1993.

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Penguin Books, 2005.

Lynch, Michael. “From Ruse to Farce.” Social Studies of Science 36, no. 6 (2006): 819–826.

Lynch, William T. “Social Epistemology Transformed: Steve Fuller’s Account of Knowledge as a Divine Spark for Human Domination.” Symposion 3, no. 2 (2016): 191-205.

McShane, Sveta and Jason Dorrier. “Ray Kurzweil Predicts Three Technologies Will Define Our Future.” Singularity Hub 19 April 2016.

Pein, Corey. Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley. Henry Holt and Co. Kindle Edition, 2017.

Remedios, Francis. Legitimizing Scientific Knowledge: An Introduction to Steve Fuller’s Social Epistemology. Lexington Books, 2003.

Remedios, Francis X. and Val Dusek. Knowing Humanity in the Social World: The Path of Steve Fuller’s Social Epistemology. Palgrave Macmillan UK, 2018.

Rushkoff, Douglas. “Survival of the Richest: The wealthy are plotting to leave us behind.” Medium 5 July 2018.

Shera, J.H. Sociological Foundations of Librarianship. New York: Asia Publishing House, 1970.

Simonite, Tom. “Moore’s Law Is Dead. Now What?” MIT Technology Review 13 May 2016.

Talbot, Margaret. “Darwin in the Dock.” The New Yorker December 5, 2005. 66-77.

Uebel, Thomas. Review of Francis Remedios, Legitimizing Scientific Knowledge: An Introduction to Steve Fuller’s Social Epistemology. Notre Dame Philosophical Reviews 3 March 2005.

Weber, Max. Economy and Society, 2 vols. Edited by Guenther Roth and Claus Wittich. Berkeley, CA; London; Los Angeles, CA: University of California Press, 1922 (1978).

[1] “Ray Kurzweil, Google’s Director of Engineering, is a well-known futurist with a high-hitting track record for accurate predictions. Of his 147 predictions since the 1990s, Kurzweil claims an 86 percent accuracy rate. At the SXSW Conference in Austin, Texas, Kurzweil made yet another prediction: the technological singularity will happen sometime in the next 30 years.” I must admit to a prevailing doubt (what are the criteria?) regarding Kurzweil’s “86 percent accuracy rate.” I further admit that the specificity of the number itself—86—seems like the kind of exact detail to which liars resort.

[2] Corey Pein (2017, 260-261) notes: “It was eerie how closely the transhuman vision promoted by Singularity University resembled the eugenicist vision that had emerged from Stanford a century before. The basic arguments had scarcely changed. In The Singularity Is Near, SU chancellor Kurzweil decried the ‘fundamentalist humanism’ that informs restriction on the genetic engineering of human fetuses.”

[3] Pein (2017, 200-201) observes: “… I saw a vast parking lot ringed by concrete barriers and fencing topped with barbed wire. This was part of the federal complex that housed the NASA Ames Research Center and a strange little outfit called Singularity University, which was not really a university but more like a dweeby doomsday congregation sponsored by some of the biggest names in finance and tech, including Google. The Singularity—a theoretical point in the future when computational power will absorb all life, energy, and matter into a single, all-powerful universal consciousness—is the closest thing Silicon Valley has to an official religion, and it is embraced wholeheartedly by many leaders of the tech industry.”

[4] Remedios and Dusek claim: “Cosmist ideas, advocates, and projects have continued in contemporary Russia” (98), but do little to allay the reader’s skepticism about Cosmism’s current standing and influence.

[5] In December 2006, Michael Lynch offered this post-mortem on Fuller’s participation in Kitzmiller: “It remains to be seen how much controversy Fuller’s testimony will generate among his academic colleagues. The defendants lost their case, and gathering from the judge’s ruling, they lost resoundingly … Fuller’s testimony apparently left the plaintiff’s arguments unscathed; indeed, Judge John E. Jones III almost turned Fuller into a witness for the plaintiffs by repeatedly quoting statements from his testimony that seemed to support the adversary case … Some of the more notable press accounts of the trial also treated Fuller’s testimony as a farcical sideshow to the main event [Lynch references Talbot, see above footnote 20] … Though some of us in science studies may hope that this episode will be forgotten before it motivates our detractors to renew the hostility and ridicule directed our way during the ‘science wars’ of the 1990s … in my view it raises serious issues that are worthy of sustained attention” (820).

[6] Fuller’s bet appears to be Peter Thiel.

[7] Remedios and Dusek explain: “The provocative Fuller defends eugenics and thinks it should not be rejected though stigmatized because of its application by the Nazis” (emphasis mine, 116-117). While adding later in the paragraph “… if the [Nazi] experiments really do contribute to scientific knowledge, the ethical and utilitarian issues remain” (117), Remedios and Dusek ignore the ethical issues to which they gesture. Tellingly, Remedios and Dusek toggle back to a mitigating stance in describing “Cruel experiments that did have eventual medical payoff were those concerning the testing of artificial blood plasmas on prisoners of war during WWII …” (117).

[8] “Ray Kurzweil is a genius. One of the greatest hucksters of the age …” (PZ Myers as quoted in Pein 2017, 245). From Kurzweil (1993): “One of the advantages of being in the futurism business is that by the time your readers are able to find fault with your forecasts, it is too late for them to ask for their money back.”

[9]  I abridged Fuller’s (1988, 3) fundamental question: “How should the pursuit of knowledge be organized, given that under normal circumstances knowledge is pursued by many human beings, each working on a more or less well-defined body of knowledge and each equipped with roughly the same imperfect cognitive capacities, albeit with varying degree of access to one another’s activities?”

Author Information: Steve Fuller, University of Warwick,


Editor’s Note: Steve Fuller’s “A Man for All Seasons, Including Ours: Thomas More as the Patron Saint of Social Media” originally appeared in ABC Religion and Ethics on 23 February 2017.


Image credit: Carolien Coenen, via flickr

November 2016 marked the five hundredth anniversary of the publication of Utopia by Thomas More in Leuven through the efforts of his friend and fellow Humanist, Desiderius Erasmus.

More is primarily remembered today for this work, which sought to show how a better society might be built by learning from the experience of other societies.

It was published shortly before he entered into the service of King Henry VIII, who liked Utopia. And as the monarch notoriously struggled to assert England’s sovereignty over the Pope, More proved to be a critical supporter, eventually rising to the rank of “Lord Chancellor,” his legal advisor.

Nevertheless, within a few years More was condemned to death for refusing to acknowledge the King’s absolute authority over the Pope. According to the Oxford English Dictionary, More introduced “integrity”—in the sense of “moral integrity” or “personal integrity”—into English while awaiting execution. Specifically, he explained his refusal to sign the “Oath of Supremacy” of the King over the Pope by his desire to preserve the integrity of his reputation.

To today’s ears this justification sounds somewhat self-serving, as if More were mainly concerned with what others would think of him. However, More lived at least two centuries before the strong modern distinction between the public and the private person was in general use.

He was getting at something else, which is likely to be of increasing relevance in our “postmodern” world, which has thrown into doubt the very idea that we should think of personal identity as a matter of self-possession in the exclusionary sense which has animated the private-public distinction. It turns out that the pre-modern More is on the side of the postmodernists.

We tend to think of “modernization” as an irreversible process, and in some important respects it seems to be. Certainly our lives have come to be organized around technology and its attendant virtues: power, efficiency, speed. However, some features of modernity—partly as an unintended consequence of its technological trajectory—appear to be reversible. One such feature is any strong sense of what is private and public—something to which any avid user of social media can intuitively testify.

More proves to be an interesting witness here because while he had much to say about conscience, he did not presume the privacy of conscience. On the contrary, he judged someone to be a person of “good conscience” if he or she listened to the advice of trusted friends, as he had taken Henry VIII to have been prior to his issuing the Oath of Supremacy. This is quite different from the existentially isolated conception of conscience that comes into play during the Protestant Reformation, on which subsequent secular appeals to conscience in the modern era have been based.

For More, conscience is a publicly accessible decision-making site, the goodness of which is to be judged in terms of whether the right principles have been applied in the right way in a particular case. The platform for this activity is an individual human being who—perhaps by dint of fate—happens to be hosting the decision. However, it is presumed that the same decision would have been reached, regardless of the hosting individual. Thus, it makes sense for the host to consult trusted friends, who could easily imagine themselves as the host.

What is lacking from More’s analysis of conscience is a sense of its creative and self-authorizing character, a vulgarized version of which features in the old Frank Sinatra standard, “My Way.” This is the sense of self-legislation which Kant defined as central to the autonomous person in the modern era. It is a legacy of Protestantism, which took much more seriously than Catholicism the idea that humans are created “in the image and likeness of God.” In effect, we are created to be creators, which is just another way of saying that we are unique among the creatures in possessing “free will.”

To be sure, whether our deeds make us worthy of this freedom is for God alone to decide. Our fellows may well approve of our actions but we—and they—may be judged otherwise in light of God’s moral bookkeeping. The modern secular mind has inherited from this Protestant sensibility an anxiety—a “fear and trembling,” to recall Kierkegaard’s echo of St. Paul—about our fate once we are dead. This sense of anxiety is entirely lacking in More, who accepts his death serenely even though he has no greater insight into what lies in store for him than the Protestant Reformers or secular moderns.

Understanding the nature of More’s serenity provides a guide for coming to terms with the emerging postmodern sense of integrity in our data-intensive, computer-mediated world. More’s personal identity was strongly if not exclusively tied to his public persona—the totality of decisions and actions that he took in the presence of others, often in consultation with them. In effect, he engaged throughout his life in what we might call a “critical crowdsourcing” of his identity. The track record of this activity amounts to his reputation, which remains in open view even after his death.

The ancient Greeks and Romans would have grasped part of More’s modus operandi, which they would understand in terms of “fame” and “honour.” However, the ancients were concerned with how others would speak about them in the future, ideally to magnify their fame and honour to mythic proportions. They were not scrupulous about documenting their acts in the sense that More and we are. On the contrary, the ancients hoped that a sufficient number of word-of-mouth iterations over time might serve to launder their acts of whatever unsavoury character they may originally have had.

In contrast, More was interested in people knowing exactly what he decided on various occasions. On that basis they could pass judgement on his life, thereby—so he believed—vindicating his reputation. His “integrity” thus lay in his life being an open book that could be read by anyone as displaying some common narrative threads that add up to a conscientious person. This orientation accounts for the frequency with which More and his friends, especially Erasmus, testified to More’s standing as a man of good conscience in whatever he happened to say or do. They contributed to his desire to live “on the record.”

More’s sense of integrity survives on Facebook pages or Twitter feeds, whenever the account holders are sufficiently dedicated to constructing a coherent image of themselves, notwithstanding the intensity of their interaction with others. In this context, “privacy” is something quite different from how it has been understood in modernity. Moderns cherish privacy as an absolute right to refrain from declaration in order to protect their sphere of personal freedom, access to which no one—other than God, should he exist—is entitled. For their part, postmoderns interpret privacy more modestly as friendly counsel aimed at discouraging potentially self-harming declarations. This was also More’s world.

More believed that however God settled his fate, it would be based on his public track record. Unlike the Protestant Reformers, he also believed that this track record could be judged equally by humans and by God. Indeed, this is what made More a Humanist, notwithstanding his loyalty to the Pope unto death.

Yet More’s stance proved to be theologically controversial for four centuries, until the Catholic Church finally canonized him in 1935; in 2000 he was declared the patron saint of statesmen and politicians. Perhaps More’s spiritual patronage should be extended to cover social media users.

In this Special Issue, our contributors share their perspectives on how technology has changed what it means to be human and to be a member of a human society. These articles speak to issues raised in Frank Scalambrino’s edited book Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation.


Special Issue 4: “Social Epistemology and Technology”, edited by Frank Scalambrino

For the SERRC’s other special issues, please refer to:

Author Information: Frank Scalambrino, University of Akron,

Scalambrino, Frank. “How Technology Influences Relations to Self and Others: Changing Conceptions of Humans and Humanity.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 30-37.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:

Image credit: Rowman & Littlefield International

“Don’t be yourself, be a pizza. Everyone loves pizza.”—Pewdiepie

For the sake of easier and more efficient consumption, this article is written as a response to a series of six (6) questions. The questions are: (1) Why investigate “Changing conceptions of humans and humanity”? (2) What is “technologically-mediated identity”? (3) What are the ethical aspects involved in the technological-mediation of relations to one’s self and others? (4) What is the philosophical issue with “cybernetics”/Why is it considered bad for humanity? (5) What is the philosophical issue with “psychoanalysis” as an applied cybernetics? (6) What does it mean to say that social media eclipses reality?

§1 Why investigate “Changing conceptions of humans and humanity”?

There are two answers to this question. We’ll start with the easier one. First, the book series in which our book Social Epistemology & Technology appears avowedly takes the theme of “Changing conceptions of humans and humanity” as a concern it was established to address. Second, the Western Tradition in philosophy has concerned itself with this theme since at least the time of Plato. Briefly, recall that Plato suggested the technology known as “writing” has adversely affected memory. On the one hand, this is a clear example of a technologically-mediated relation to self and others, the effect of which alters humans and humanity. Plato thought the alteration was not for the better. Can you imagine what he’d say about “Grammarly”?

Because philosophy in the Western Tradition has concerned itself with this theme for such a long time, there are many philosophers who, with varying degrees of explicitness, address the theme. Two philosophers in particular, who were uniquely positioned in history to make predictions and observations, stand out in the Western Tradition. Those philosophers are Martin Heidegger (1889-1976) and Ernst Jünger (1895-1998). Separately, they both spoke of a world-wide change which they could philosophically see happening. And, importantly, the idea is that because we are in the midst of that change’s aftermath, many of us born into it, it may actually be more difficult for us to see than it was for them. All this goes to the second reason for embracing this important theme.

It is clear that technology has changed the way we relate to self and others. To be completely frank with you: I own and use an iPhone, I typed this document on a laptop, I own a PlayStation and have spent a good deal of energy playing video games in my time; I also listen to music on an iPod, have a LinkedIn account, FaceTime and Skype regularly, and have mindlessly watched YouTube and Netflix for more hours than I can remember. I say all of this so readers will recognize that I am neither a Luddite nor a curmudgeon. Yet, in addition to all of the technology I use, I care about humanity; I also love philosophy, love thinking along with the philosophers, and have earned a doctorate in philosophy. So it is perhaps a duty, as a responsible person with a PhD in philosophy, to enunciate the Western Tradition’s themes and concerns publicly; especially insofar as we may trace the very existence of many of the current problems in society today to the presence of various technologies.

Lastly, we should not be intimidated by the difficulty many first encounter when attempting to understand this perennial theme in philosophy. For example, despite its deep history, the winner of the “2015 World Technology Award in Ethics” characterized this theme, as treated in our book (which was published in 2015), as “obscure,” outdated, or irrelevant for the 21st century. Though the word “World” is perhaps a misnomer, since award recipients are “judged by their own peers,” Dr. Shannon Vallor is in fact the current “President of the International Society for Philosophy and Technology (SPT),” and a tenured professor at Santa Clara University “in Silicon Valley.” Importantly, then, in her Notre Dame Philosophical Reviews (2016) review of our book, Dr. Vallor openly admitted her inability to understand the theme and its importance. On the one hand, she explains away her difficulty, noting, “Part of the difficulty is rooted in the book’s structure; for reasons that are never fully made clear by the editor, the chapters are sharply divided into two sections” (Vallor 2016). Because I was surprised to read this accusation (and was almost certain I remembered explaining the reason for the book’s structure), I looked in the book. And, with all due respect to Dr. Vallor, on page one of the book she should have read:

As a volume in the Collective Studies in Knowledge and Society series, this book directly participates in three of the five activities targeted for the series. They are (I) Promoting philosophy as a vital, necessary public activity; (II) Analyzing the normative social dimensions of pursuing and organizing knowledge; and (III) Exploring changing conceptions of humans and humanity [emphasis added]. Whereas both the content and the very existence of this book participate in the first of the targeted activities, the parts of the book are divided, respectively, across the other two activities. (Scalambrino 2015a: 1).

Thus, by the time she got around to calling my contributions to the “Changing conceptions of humans and humanity” section of our book “obscure ruminations,” I realized her “hardball” rhetoric was a substitute for actually engaging the material. For the record, however, I’m not criticizing playing “hardball”; I admire her spirit and have no issue with it. Therefore, for the reasons noted above, and because even the “President of the International Society for Philosophy and Technology” found this perennial theme in the history of philosophy to be difficult, I hope this article will go toward providing clarity regarding this important theme.

§2 What is “technologically-mediated identity”?

The two most relevant ways to illustrate the notion of “technologically-mediated identity” are in terms of “existential constraints on identity” and “socially-constructed technological constraints and influences.” The basic idea here is that technology allows one to be as fully inauthentic as possible. To begin, there are clearly “existential constraints on identity.” The easiest way to understand this is to think about history. If you were born after the paperclip was invented, then it is not possible for you to invent the paperclip. The fact of your existence when and where it occurs constrains you from inventing the paperclip. And here, “inventor of the paperclip” is understood as a kind of identity. In other words, it is a statement made about someone’s identity, and it can be true.

Now, when we consider “technologically-mediated identity” the idea is twofold. First, the presence of technology makes various identities possible that would not be possible otherwise. Second, even if the presence of technology were only technologically-altering previously available identities, two issues would immediately manifest. On the one hand, technologically-mediated identities may require humans to be technologically-mediated or enhanced to sustain the identity. On the other hand, because the identities depend on technology for their presence in the world, they may, in fact, be anti-human human-identities. In other words, though they are identities which humans can pursue through the mediation of technology, the pursuit of such identities may be detrimental to the humans who pursue them.

The illusory nature of social media has already been well documented. Our book Social Epistemology & Technology has a large pool of references to peruse, for anyone who is interested. Often the content of a person’s social media is referred to as a “highlight reel” in that it misrepresents the reality of the person’s actual existence. This, in itself, is no surprise. However, the effects of social media, despite common knowledge of its illusory nature, are also well documented. These range from the depression, jealousy, and anxiety experienced by those who frequently spend time on social media to the many types of relationship infidelity now commonly associated with social media. One of the ways to characterize what is happening is in terms of “technologically-mediated identity.”

In other words, social media—as a technology that allows one to use it to mediate relations to others—motivates viewers by presenting illusions. This can be seen in the presentation of identities which are illusory by being “highlight reels” or by simply allowing for greater amounts of deception. The operative distinction here would be analogous to the one between lying and lying by omission. Most certainly some people intentionally misrepresent themselves on social media; however, insofar as social media is by nature a kind of “highlight reel,” it is like a lie by omission. This illustrates the notion of “technologically-mediated identity,” then, in that social media, as a kind of technological mediation, allows for the presentation of illusory identities. These identities, of course, motivate in multiple ways. Yet, just as they cannot portray the substance of an actual human existence, it is as if they entice viewers to adopt impossible identities.

Thus, the issue is not, and should not be presented as, between technologically-mediated identity and “natural” identity. Too many rhetorical options arise regarding the word “natural” to keep the water from muddying. Rather, the issue should be framed in terms of “inauthenticity” and the actual impossibility of recreating a “highlight reel” existence which does not include the technologically-suppressed “non-highlight reel” aspects of human life. This, of course, does not stop humans from pursuing technologically-mediated identities. What “inauthenticity” means philosophically here is that the pursuit of illusory or impossible identities is tantamount to suppressing the actual potentials (as opposed to virtual potentials) which one’s existence can actualize. This can be understood as de-personalizing and even de-humanizing individuals who insert their selves into the matrix of virtual potentialities, thereby putting their actual potentials in the service of actualizing an identity impossible to actualize. For a full philosophical discussion of the de-personalizing and de-humanizing effects of technological mediation, see our book Social Epistemology & Technology.

§3 What are the ethical aspects involved in the technological-mediation of relations to one’s self and others?

The idea here is quite straightforward. Humans form habits, and the force of habit influences the quality of human experiences and future choices. Because the use of technology is neither immediately life-sustaining nor immediately expressive of a human function, technological mediation can be a part of habits that are formed; however, technological mediation is not an original force which can be shaped by habit for the sake of human excellence. Technological mediation can shape and constitute the relation between an original force, e.g. attraction, hunger, or empathy, and that to which the original force relates, yet in doing so, its relation to the original force can only be parasitic. That is to say, it cannot uproot the original force without eradicating what the thing is to which the original force belongs. For instance, we are not talking about using a pacemaker to keep someone’s heart pumping, we are talking about making it so a human would no longer need a beating heart. Such a technological alteration would raise questions such as: At what point is this no longer a human life?

It is by concealing the fact that technological mediation is not an original force that researchers in the service of profit-driven technologies can attempt to articulate technological mediation as virtuous, i.e. capable of constituting human excellence. Thus, Dr. Vallor speaks of the “commercial potential of science and technology” (Vallor 2015). Yet those who articulate their guiding question as Dr. Vallor has (“What does it mean to have a good life online?”) clearly put the cart before the horse. Life is not “online,” and the term “life” in the phrase “life online” is necessarily a metaphor. However, Dr. Vallor, and those who follow her in committing the fallacy of “misplaced concreteness,” overlook two very important features of ethics and the “good life.” One, life is more primary than the internet, so at best the internet is in the service of life. Two, the “good life” includes criteria which the use of technology can directly undermine.

In order for an actual human to thrive in an actual human life, according to the philosophers of human excellence (e.g. the character ethics of Epicurus, Pyrrho, Aristotle, and Epictetus, to name a few), the human would need to “unplug” and excel at being human, not at clicking on a keyboard or a touch screen or at accumulating “likes,” “followers,” “friends,” or “tweets.” As Nietzsche might have put it were he here today: no matter how popular and rich Stephen Hawking (no disrespect intended) may be, his life does not exemplify human thriving. In fact, even Facebook no longer claims all those people are actually your “Friends,” as research continues to show that it is humanly impossible (cf. “Dunbar’s number”) to have as many “friends” as the illusory number social media allows users to flaunt. Thus, again, Dr. Vallor misplaces the ethical notion of “flourishing” when she speaks of “Flourishing on Facebook: virtue friendship & new social media” (Vallor 2012).

It is, of course, rather the case that the business of social media thrives by parasitically profiting from primal human forces by providing a platform for their virtual gratification. The best example that comes to mind is the manner in which Facebook initially allowed users to post pictures ostensibly as a benevolent platform for photo sharing. Eventually, however, Facebook claimed the right to use your pictures (since they are technically posted on the Facebook site) for the sake of advertising to your “friends.” The idea here is that the primal human forces of envy and jealousy are much easier to mobilize for the sake of sales if you can show a person what their friends have that they do not.

Therefore, the ethical aspects involved in the technological mediation of relations to one’s self and others indicate that human thriving belongs to human life, not the energy of life channeled through a virtual dimension for the sake of profit. To be sure, the “commercial potential of science and technology” (Vallor 2015) is immense; however, the excellent actualization of “commercial potential” is not the excellent actualization of “human potential,” which has always characterized “human thriving” according to philosophers in the Western Tradition (cf. Scalambrino 2016). Critics will be tempted to interject the idea of all the potential good money can do for human living conditions. Yet that is not the topic under discussion; rather, the topic under discussion is the ethics constituting human excellence (“thriving”) regarding self and others.

§4 What is the philosophical issue with “cybernetics”/Why is it considered bad for humanity?

For a more in-depth discussion of this issue see our Social Epistemology & Technology, especially in regard to Martin Heidegger’s discussion of cybernetics. Here are the same scholarly sources I referenced in our book for the sake of explicating the meaning of “cybernetics.”

The idea readers should have in mind is that cybernetics aims at “control.” Before corporations figured out how to use the “Cave Allegory” against academia itself, i.e. by funding and disseminating research whose results would benefit the corporations, and by using pseudo-awards and marketing tactics to drown out the research of less well-funded scholars, philosophers of technology seemed unanimous in warning that by increasingly mediating our relation to our lives and environment with technology we would be increasingly losing freedom and placing ourselves under “control.” What was their philosophical justification for such a claim? It boils down to “cybernetics.” In today’s terms we might say that as soon as everything is “connected,” you functionally become one of the things in the “internet of things.” Habitually functioning in such a way is not only inauthentic, it is also a kind of self-alienation, quite possibly to the point of de-personalization and even de-humanization (cf. Scalambrino 2015b).

Here are the quotations from our book: First,

Historically, cybernetics originated in a synthesis of control theory and statistical information theory in the aftermath of the Second World War, its primary objective being to understand intelligent behavior in both animals and machines (Johnston 2008, 25-6; cf. Dechert 1966; cf. Jonas 1953).

According to Norbert Wiener’s Cybernetics; or, Control and Communication in the Animal and Machine (1948), “the newer study of automata, whether in the metal or in the flesh, is a branch of communication engineering,” and this involves a “quantity of information and [a] coding technique” (Wiener 1948, 42). Next,

Essentially, cybernetics proposed not only a new conceptualization of the machine in terms of information theory and dynamical systems theory but also an understanding of ‘life,’ or living organisms, as a more complex instance of this conceptualization rather than as a different order of being or ontology [emphasis added] (Johnston 2008, 31).

Again, in An Introduction to Cybernetics, the “unpredictable behavior of an insect that lives in and about a shallow pond, hopping to and fro among water, bank, and pebble, can illustrate a machine in which the state transitions correspond to” a probability-based pattern that can be analyzed statistically (Johnston 2008, 31). In this way, “‘cybernetic’ may refer to a technological understanding of the mechanisms guiding machines and living organisms” (Scalambrino 2015b, 107). The philosophical issue with “cybernetics,” then, (put simply) is that, on the one hand, it seeks to reduce human life to mechanical function, and, on the other hand, it can be exploited to functionally control humans.

§5 What is the philosophical issue with “psychoanalysis” as an applied cybernetics?

Kierkegaard famously said, “Life is not a problem to be solved, but a mystery to be lived.” The basic problem with psychoanalysis as an applied cybernetics, then, is that it treats life like a problem to be solved; however, even more than that, its cybernetic view of human life makes life appear as if it is something that can be controlled. In this way, despite its belief in “the unconscious,” psychoanalysis treats human life as if it were a machine. Thus, the idea that “unconscious influences,” traceable to childhood events, determine our actions undermines our confidence in our own free will.

Adopting the cybernetic view of human nature advocated through psychoanalysis functionalizes the mystery of life into “the unconscious.” It is supposed to be the case that the mysterious, as “unconscious,” can be understood. In this way, the unconscious influences contributing to, or perhaps even constituting, one’s “problem” can be revealed, and the revelation of these unconscious influences thereby “solves” the problem. However, as multiple existentialists, including Gabriel Marcel, have pointed out, the “functionalization” of the human being de-personalizes. If the human person is constituted through its choices and its respect for itself as the one who makes those choices, then a psychoanalytic cybernetic view of the human undermines a person’s self-realization. It, of course, does this by suggesting to persons that the freedom of their choosing is a type of illusion.

Finally, psychoanalytic techniques, which Freud developed from hypnotic trance induction, exploit a cybernetic “control theory.” The person receiving psychoanalytic “treatment,” traditionally known as the “analysand,” is initially and immediately placed in what has traditionally been called a “one down” position. This means the analysand is supposed to assume that the analyst has access, whether in terms of knowledge or awareness, to the unconscious of the analysand; and since the analysand is not supposed to understand the unconscious, the analysand stands in a “lower” or “one down” position in relation to the analyst from the very inception of analysis. The induction of this “one down” position initiates the cybernetic mechanism of control over the analysand. It is as if the very belief that psychoanalysis can “solve” the problems of one’s life is itself the analysand’s transference of control over to the person of the analyst. Thus, the cybernetic view of human nature advocated through psychoanalysis functionalizes human life; by persuading the analysand to hand over his or her freedom, in the form of belief in one’s own autonomous power of choice, it enables the very control that psychoanalytic theory describes, thereby providing what seems to be evidence of its own confirmation.

§6 What does it mean to say that social media eclipses reality?

The idea at work here refers back to sections two (2) and three (3) of this article. Simply put, the idea is that technological mediation allows for humans to alter their relations to self and others. However, what researchers often call “interface” issues condition possible illusory understandings of self and others. Popularly, and in the earlier sections, this was invoked by describing the content found on social media as a “highlight reel.” Because humans can develop goals and regulate behaviors in terms of the interface issues of social media, it becomes appropriate to characterize one’s relation to reality as “eclipsed.”

Take, for example, the “highlight reel” aspect of social media discussed above and in our book Social Epistemology & Technology. Of course, this is just one of the interface issues which can “eclipse reality” for social media users. Yet, because it is perhaps the easiest to see, we will discuss it briefly here. The basic idea is that when one sets out to behave in such ways or perform such actions so as to contribute to their “highlight reel,” then one has allowed the means of technological mediation to become an end in itself.

In our book one of the ways we discussed how interface issues eclipse relations to others is in terms of procreation. Again, the existentialist Gabriel Marcel is an excellent source (cf. Marcel 1962). The idea is that one may direct the lives of their children inauthentically or even be motivated to technologically mediate various (“functionalized”) aspects of procreation itself (cf. Scalambrino 2017), being influenced by the presence and power of technological mediation. As existentialists like Marcel warn, however, there is at least a twofold trouble here. First, technologically mediating one’s relation to procreation is cybernetic insofar as it treats procreation as if it were completely functionalizable. Second, the ends toward which one may direct one’s children through technological mediation may, in fact, derive from means such as “interface issues.” In this way philosophical criticisms of technological mediation have gone so far as to suggest one’s relation to reality may be eclipsed.


Dechert, Charles R., editor. The Social Impact of Cybernetics. South Bend, IN: University of Notre Dame Press, 1966.

Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. London: The MIT Press, 2008.

Marcel, Gabriel. The Mystery of Being, Volume I: Reflection and Mystery. Translated by G. S. Fraser. South Bend: St. Augustine’s Press, 1950.

Marcel, Gabriel. “The Sacred in the Technological Age.” Theology Today 19 (1962): 27-38.

Scalambrino, Frank. “Futurology in Terms of the Bioethics of Genetic Engineering: Proactionary and Precautionary Attitudes Toward Risk with Existence in the Balance.” In Social Epistemology & Futurology: Future of Future Generations. London: Rowman & Littlefield International, 2017, in press.

Scalambrino, Frank. Introduction to Ethics: A Primer for the Western Tradition. Dubuque, IA: Kendall Hunt, 2016.

Scalambrino, Frank. “Introduction: Publicizing the Social Effects of Technological Mediation.” In Social Epistemology & Technology, edited by Frank Scalambrino, 1-12. London: Rowman & Littlefield International, 2015a.

Scalambrino, Frank. “The Vanishing Subject: Becoming Who You Cybernetically Are.” In Social Epistemology & Technology, edited by Frank Scalambrino, 197-206. London: Rowman & Littlefield International, 2015b.

Vallor, Shannon. “Flourishing on Facebook: Virtue Friendship & New Social Media.” Ethics and Information Technology, 14, no. 3 (2012): 185-199.

Vallor, Shannon. “Shannon Vallor Wins 2015 World Technology Award in Ethics.” 2015.

Vallor, Shannon. “Review of Social Epistemology and Technology.” Notre Dame Philosophical Reviews: An Electronic Journal (August 4, 2016).

Wiener, Norbert. Cybernetics: Or, Control and Communication in the Animal and the Machine. London: MIT Press, 1965.

Author Information: Jason M. Pittman, Capitol Technology University,

Pittman, Jason M. “Trust and Transhumanism: An Analysis of the Boundaries of Zero-Knowledge Proof and Technologically Mediated Authentication.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 21-29.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:


Image credit: PhOtOnQuAnTiQu, via flickr


Zero-knowledge proof serves as the fundamental basis for technological concepts of trust. The most familiar applied solution of technological trust is authentication (human-to-machine and machine-to-machine), most typically a simple password scheme. Further, by extension, much of society-generated knowledge presupposes the immutability of such a proof system when ontologically considering (a) the verification of knowing and (b) the amount of knowledge required to know. In this work, I argue that the zero-knowledge proof underlying technological trust may cease to be viable upon realization of partial transhumanism in the form of embedded nanotechnology. Consequently, existing normative social components of knowledge—chiefly, verification and transmission—may be undermined. In response, I offer recommendations on potential society-centric remedies in partial trans-humanistic technologically mediated realities with the goal of preserving technological trust.

Password-based authentication features prominently in daily life. For many of us, authentication is a ritual repeated many times on any given day as we enter a username and password into various computing systems. In fact, research (Florêncio & Herley, 2007; Sasse, Steves, Krol, & Chisnell, 2014) reveals that we enter, on average, approximately eight different username and password combinations as many as 23 times a day. Computing systems authenticate to one another even more frequently. Simply put, authentication is normative in modern, technologically mediated life.
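The everyday ritual just described reduces, on the verifier’s side, to store-and-compare: the verifier retains a salted hash of the secret and recomputes it at each login, so the secret itself must be disclosed to the verifier every time. A minimal sketch in Python of this familiar scheme (the function names and parameter choices are illustrative, not drawn from the article):

```python
import hashlib
import hmac
import os

def register(password: str):
    """Verifier stores a random salt and a salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    """Prover discloses the password; verifier recomputes and compares in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Note that in this scheme the prover transmits the secret itself at every authentication; the zero-knowledge approach at issue in this article is designed precisely to avoid that disclosure.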

Indeed, authentication has been the normal modality of establishing trust within the context of technology (and, by extension, technology mediated knowledge) for several decades. Over the course of these decades, researchers have uncovered a myriad of flaws in specific manifestations of authentication—weak algorithms, buggy software, or even psychological and cognitive limits of the human mind. Upon closer inspection, one can surmise that the philosophy associated with passwords has not changed. Authentication continues to operate on the fundamental paradigm of a secret, a knowledge-prover, and a knowledge-verifier. The epistemology related to password-based authentication—how the prover establishes possession of the secret such that the verifier can trust the prover without the prover revealing the secret—presents a future problem.
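The epistemological relation just described, in which the prover demonstrates possession of a secret without revealing it, is what zero-knowledge identification schemes such as Schnorr’s protocol formalize. A toy single-round sketch in Python (illustrative parameters, not production cryptography; in the actual protocol the verifier issues the random challenge only after seeing the commitment):

```python
import secrets

# Toy public parameters (illustrative only): a Mersenne prime and a small generator.
p = 2**127 - 1
g = 3

def keygen():
    """Prover's secret x and the public value y = g^x mod p held by the verifier."""
    x = secrets.randbelow(p - 2) + 1
    y = pow(g, x, p)
    return x, y

def prove(x, challenge):
    """One round: commit to a random r, then respond to the verifier's challenge."""
    r = secrets.randbelow(p - 2) + 1
    t = pow(g, r, p)                   # commitment (sent before the challenge in practice)
    s = (r + challenge * x) % (p - 1)  # response; the random r masks the secret x
    return t, s

def verify(y, challenge, t, s):
    """Accept iff g^s == t * y^challenge (mod p); x itself never crosses the wire."""
    return pow(g, s, p) == (t * pow(y, challenge, p)) % p
```

The verifier checks only the exponential relation g^s = t·y^c (mod p), so it becomes convinced that the prover possesses x while learning nothing beyond that fact, which is exactly the trust arrangement the author argues partial transhumanism may destabilize.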

A Partial Transhuman Reality

While some may consider transhumanism the province of science fiction, others such as Kurzweil (2005) argue that the merging of man and machine has already begun. Of particular interest in this work is partial-transhumanist nanotechnology or, in simple terms, the embedding of microscopic computing systems in our bodies. Such nanotechnology need not be fully autonomous, but it typically includes some computational sensing ability. The most advanced examples are the nanomachines used in medicine (Verma, Vijaysingh, & Kushwaha, 2016). Such nanotechnology nevertheless represents the blueprint for rapid advancement; indeed, research is well underway on using nanomachines (or nanites) for enhanced cognitive computation (Fukushima, 2016).

At the crossroads of partial transhumanism (nanotechnology) and authentication, there appears to be a deeper problem. In short, partial transhumanism may obviate the capacity of a verifier to trust that a prover does, in truth, possess a secret. Should a verifier be unable to trust a prover, the entirety of authentication may collapse.

Much research investigates the mathematical, psychological, and technological bases for authentication, but there has been little philosophical exploration of it. Qureshi, Younus, and Khan (2009) developed a general philosophical overview of password-based authentication, though they largely focused on a philosophical taxonomy overlaying modern password technology. The literature extending Qureshi et al. builds exclusively on the strictly technical side of password-based authentication, ignoring the philosophical.

Accordingly, the purpose of this work is to describe the concepts directly linked to modern technological trust in authentication and to demonstrate how, in a partially transhumanist reality, zero-knowledge proof may cease to be viable. Towards this end, I first describe the conceptual framework underlying the operational theme of this work. I then explore the abstraction of technological trust as it relates to understanding proof of knowledge. This understanding of where trust fits into normative social epistemology informs the subsequent description of the problem space. After that, I describe the conceptual architecture of the zero-knowledge proofs that serve as the pillars of modern authentication, and how transhumanism may adversely affect them. Finally, I present recommendations on possible society-centric remedies in both partially and fully transhumanist technologically mediated realities, with the goal of preserving technological trust.

Conceptual Framework

Establishing a conceptual framework before building the case for trust ceasing to be viable in a partially transhumanist reality permits a deeper understanding of the issue at hand. Such a frame of reference must include a discussion of how technology inherently mediates our relationships with other humans and with technologies. Put another way, technologies are unmistakably involved in human subjectivity, while human subjectivity forms the concept of technology (Kiran & Verbeek, 2010). This, though, presupposes a grasp of the technological abstraction itself.

Broadly, technology in the context of this work is taken to mean qualitative (abstract) applied science, as opposed to the practical or quantitative application of science. This definition follows closely recent discussions of technology by Scalambrino (2016) and the bodies of work of Heidegger and Plato. In other words, technology should be understood as those modalities that facilitate progress relative to socially beneficial objectives. Specifically, we are concerned with the knowledge modality as opposed to discrete mechanisms, objects, or devices.

What is more, the conjunction of technology, society, and knowledge is a critical element of the conceptual framework for this work. Technology is no longer a single-use, individualized object. Instead, technology is a social arbiter that has grown innate to what Ihde (1990) related as a normative human gestalt. While this view contrasts with views such as that offered by Feenberg (1999), the two are not necessarily exclusive.

Further, we must establish the component of our conceptual framework that evidences what it means to verify knowledge. One approach is a scientific model that procedurally quantifies knowledge within a predefined structure. Given the technological nature of this work, such a model may be inescapable, at least as a cognitive bias. More abstractly though, verification of knowledge is conducted by inference, whether by the individual or across social collectives. The mechanism of inference, in turn, can be expressed in proof. Relatedly, another component in our conceptual framework corresponds to the amount of knowledge necessary to demonstrate knowing. As I discuss later, the amount of knowing is either full or limited—that is, proof with knowledge or proof without knowledge.

Technological Trust

The connection between knowledge and trust has a long history of debate in the social epistemic context. This work is not intended to add directly to the debate surrounding trust. However, recognition of the debate is necessary to build the bridge connecting trust and zero-knowledge proofs before moving on to zero-knowledge proof and authentication. Further, conceptualizing technological trust permits the construction of a foundation for the central proposition of this work.

To the point, Simon (2013) argued that knowledge relies on trust. McCraw (2015) extended this claim by establishing four components of epistemic trust: belief, communication, reliance, and confidence. These components are further grouped into epistemic (belief and communication) and trust (reliance and confidence) conditionals (McCraw, 2015). Trust, in this context, exemplifies the social aspect of knowledge insofar as we do not directly experience trust but hold trust as valid because of the collective position of validity.

Furthermore, Simmel (1978) perceived trust to be integral to society. Trust as a knowledge construct exists in many disciplines and, per Origgi (2004), permeates our cognitive existence. Additionally, there is an argument to be made that, by using technology, we implicitly place trust in that technology (Kiran & Verbeek, 2010). Nonetheless, trust we do.

Certainly, part of such trust is due to the mediation provided by our ubiquitous technology. Trust in technology and trust from technology are likewise integral to modern social perspectives. On the other hand, we must be cautious in understanding the conditions that lead to technological trust. Work by Ihde (1979; 1990) and others suggests that technological trust stems from our relation to the technology. Perhaps closer to transhumanism, Lévy (1998) offered that such trust is more associated with technology that extends us.

Technology that extends human capacity is thus a principal abstraction. Concomitant with technological trust, moreover, is knowledge. While the conceptual framework for this work includes the verification of knowledge and the amount of knowledge necessary to evidence knowing, there remains a need to include knowledge proofs in the discourse.

Zero-Knowledge Proof

Proof of knowledge is a logical extension of the discussion of trust. Where trust can be thought of as the mechanism through which we allow technology to mediate reality, proof of knowledge is how we come to trust specific forms of technology. In turn, proof of knowledge—specifically, zero-knowledge proof—provides a foundation for trust in technological mediation in the general case and in technological authentication in the specific case.

The Nature of Proof

The construct of proof may take on different meanings depending upon the enveloping context. In the context of this work, we use the operational meaning provided by Pagin (1994): proof is established in the process of validating the correctness of a proposition. Furthermore, for any proof to be perceived as valid, it must demonstrate completeness and soundness (Pagin, 1994; 2009).

There is, of course, a larger discourse on the epistemic constraints of proof (Pagin, 1994; Williamson, 2002; Marton, 2006). That debate lies outside the scope of this work, however, as we are concerned not with whether proof can be offered for knowledge but with how proof occurs—that is, with the mechanism of proof. Thus, for our purposes, we presuppose that proof of knowledge is possible and that it proceeds through two possible operations: proof with knowledge and proof without knowledge.

Proof with Knowledge

A consequence of a typical proof system is that all involved parties gain knowledge. That is, if I know x exists in a specific truth condition, I must present all relevant premises so that you can reach the same conclusion. The proposition is thus not only equally true or false for us both, but the means of establishing its truth or falsehood is transparent. This is what can be referred to as proof with knowledge.

In most scenarios, proof with knowledge is a positive mechanism: the parties involved mutually benefit from the outcome. Mathematics and logic are primary examples of this proof state. However, in the case of technological trust in the form of authentication, proof with knowledge is not desirable.

Proof Without Knowledge

Imagine that you know that p is true. Further, you wish to demonstrate to me that you know this without revealing how you came to know it or what exactly it is that you know. In other words, you wish to keep some aspect of the knowledge secret, while I must validate that you know p without gaining any knowledge. This is the second state of proof, known as zero-knowledge proof, and it forms the basis for technological trust in the form of authentication.

Goldwasser, Micali, and Rackoff (1989) defined zero-knowledge proofs as a formal, systematic approach to validating the correctness of a proposition without communicating additional knowledge—that is, knowledge beyond the proposition itself. An important aspect is that the proposition originates with a verifier entity as opposed to a prover entity. In response to the proposition to be proven, the prover completes an action without revealing any knowledge to the verifier other than the knowledge that the action was completed. If the proposition is probabilistically true, the verifier is satisfied. Note that the verifier and prover can stand in machine-to-human, human-to-human, or machine-to-machine relations.
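The interactive pattern that Goldwasser, Micali, and Rackoff describe (commitment, random challenge, response) can be illustrated with a toy run of the Schnorr identification protocol, one classic member of this family rather than the authors' own construction. The sketch below uses deliberately tiny, insecure parameters and is illustrative only:

```python
import secrets

# Toy Schnorr identification protocol. The prover convinces the verifier
# that it knows x with y = g^x (mod p), while x itself is never revealed.
p, q, g = 23, 11, 2                 # g generates a subgroup of prime order q mod p

x = secrets.randbelow(q - 1) + 1    # prover's secret
y = pow(g, x, p)                    # public: the proposition "I know x such that g^x = y"

# Round 1 (prover): commit to a random nonce.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Round 2 (verifier): issue a random challenge.
c = secrets.randbelow(q)

# Round 3 (prover): respond; s reveals nothing about x because r is random.
s = (r + c * x) % q

# Verification: g^s == t * y^c (mod p) holds exactly when the prover knows x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verifier learns only that the check passed; repeating the exchange with fresh challenges drives the probability of a cheating prover succeeding toward zero, which matches the "probabilistically true" satisfaction described above.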

Zero-knowledge proofs are the core of technological trust and, accordingly, of authentication. While discrete instances of authentication exist practically outside of the social epistemic purview, the broader theory of authentication is, in fact, a socially collective phenomenon. That is, even in the abstract, authentication is a specific case of technologically mediated trust.


Authentication

The zero-knowledge proof abstraction translates directly into modern authentication modalities. In general, authentication involves a verifier issuing a request to prove knowledge and a prover establishing knowledge to the verifier by means of a secret. The ability to provide such proof in a manner consistent with the verifier's request is thus technologically sufficient to authenticate (Syverson & Cervesato, 2000). However, there are subtleties within the authentication zero-knowledge proof that warrant discussion.
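The request/response exchange just described can be sketched as a simple shared-key challenge-response protocol. This is a hedged illustration rather than any particular deployed scheme, and an HMAC exchange is not, strictly speaking, a formal zero-knowledge proof; it nonetheless shows the essential property that the prover demonstrates possession of the secret while the secret itself never crosses the channel.

```python
import hashlib
import hmac
import os

shared_secret = os.urandom(32)        # provisioned to both parties out of band

def issue_challenge() -> bytes:       # verifier side
    return os.urandom(16)             # a fresh nonce prevents replaying old responses

def respond(challenge: bytes, secret: bytes) -> bytes:   # prover side
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def check(challenge: bytes, response: bytes, secret: bytes) -> bool:  # verifier side
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = issue_challenge()
assert check(nonce, respond(nonce, shared_secret), shared_secret)
assert not check(nonce, respond(nonce, os.urandom(32)), shared_secret)
```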

Authentication, or being authenticated, implies two technologically mediated realities. First, the authentication process relies upon the authenticating entity (i.e., the prover) exclusively possessing a secret. The mediated reality for both the verifier and the prover is that being authenticated implies an identity. In simple terms, I am who I claim to be based on (a) exclusive possession of the secret and (b) the ability to demonstrate that possession sufficiently through the zero-knowledge proof. Likewise, the verifier is identified to the prover.

Secondly, authentication establishes a general right of access for the prover based on, again, possession of an exclusive secret. Consequently, there is a technological mediation of which objects are available to the prover once authenticated (i.e., all authorized objects) or not authenticated (i.e., no objects). Thus, the zero-knowledge proof is a mechanism for associating the prover's identity with a set of objects in the world and facilitating access to those objects. That is to say, once authenticated, the identity has operational control over linked objects within the corresponding space.

Normatively, authentication is a socially collective phenomenon, despite individual authentication relying upon an exclusive zero-knowledge proof (Van Der Meyden & Wilke, 2007). Principally, authentication is a means of interacting with other humans, with technology, and with society at large while maintaining trust. If authentication is a manifestation of technological trust, however, one must wonder whether transhumanism may affect the zero-knowledge proof abstraction.


Transhumanism

More (1990) described transhumanism as a philosophy that embraces the profound changes to society and the individual brought about by science and technology. There is strong debate as to when such change will occur, although most futurists argue that technology has already begun to cross the threshold of explosive growth. Technology in this context aligns with the conceptual framework of this work, and there is agreement in the philosophical literature on the prospect of such technological expansion (Bostrom, 1998; More, 2013).

Transhumanism exists in two forms: partial and full (Kurzweil, 2005). This work is concerned exclusively with partial transhumanism, which comprises three modalities. According to Kurzweil (2005), these are (a) technology sufficient to manipulate human life genetically; (b) nanotechnology; and (c) robotics. In the context of this work, I am interested in the potentiality of nanotechnology.

Briefly, nanotechnology exists in several forms. The form central to this work involves embedding microscopic machines within human biology. These machines can perform any number of operations, including augmenting existing bodily systems. Along these lines, Vinge (1993) argued that a by-product of technological expansion will be a monumental increase in human intelligence. Although there are a variety of mechanisms by which technology might amplify raw brainpower, nanotechnology is a forerunner in the minds of Kurzweil and others.

What is more, the computational power of nanites is measurable and predictable (Chau et al., 2005; Bhore, 2016). The human intellectual capacity projected to result from nanotechnology may be sufficient to impart hyper-cognitive or even extrasensory abilities. With such augmentation, the human mind will be capable of computational decision-making well beyond existing technology.

While the notion of nanites embedded in our bodies, augmenting various biomechanical systems to the point of precognitive awareness of zero-knowledge proof verification, may strike some as science fiction, there is growing precedent. Existing research in the field of medicine demonstrates that at least partially autonomous nanites have a grounding in reality (Huilgol & Hede, 2006; Das et al., 2007; Murday, Siegel, Stein, & Wright, 2009). Thus, it is not difficult to envision a near future in which more powerful and autonomous nanites are available.

Technological Trust in Authentication

The purpose of this work has been to describe technological trust in authentication and to demonstrate how, in a future partially transhumanist reality, zero-knowledge proof may cease to be viable. Towards that end, I examined technological trust in the context of how and why such trust is established. Knowledge proofs were then discussed, with an emphasis on proofs without knowledge, which led to an overview of authentication and, subsequently, transhumanism.

Based on the foregoing analysis, the technological trust afforded by zero-knowledge proof appears to be no longer feasible once embedded nanotechnology is introduced into humans. Nanite-augmented cognition will give a knowledge-prover the capability to compute, on demand, knowledge sufficient to convince a knowledge-verifier. Outright, such a reality breaks the latent assumptions that operationalize the conceptual framework into related technology. That is, once the knowledge-verifier cannot trust that the knowledge is known by the prover, a significant future problem arises.

Unfortunately, the fields of computer science and computer engineering have not historically planned well for paradigm-shifting innovations. This is exacerbated when the paradigm shift has a rapid onset after a long ramp-up, as is the case with the technological singularity. More specifically, partial transhumanism as considered in this work may have unforeseen effects beyond the scope of the fields that created the technology in the first place. The inability to handle rapid shifts is largely related to these fields posing "what is" type questions.

Similarly, the Collingridge dilemma tells us that "the social consequences of a technology cannot be predicted early in the life of the technology" (1980, p. 11). Thus, adequate preparation for the eventual collapse of zero-knowledge proof requires asking what ought to be, and that is a philosophical question. As it stands, social epistemology is already recognized as an interdisciplinary field (Froehlich, 1989; Fuller, 2005; Zins, 2006). More still, there is precedent for philosophy informing the science of technology (Scalambrino, 2016) and assembling the foundation of future-looking paradigm shifts.

Accordingly, one recommendation is for social epistemologists and technologists to jointly examine modifications to the abstract zero-knowledge proof such that the proof is resilient to nanite-powered knowledge computation. In conjunction, there may be benefit in conceiving a replacement proof system that also harnesses partial transhumanism for the knowledge-verifier, in a manner commensurate with any increase in the knowledge-prover's capacity. Lastly, a joint effort may be able to envision a technologically mediated construct that does not require proof without knowledge at all.


References

Bhore, Pratik Rajan. “A Survey of Nanorobotics Technology.” International Journal of Computer Science & Engineering Technology 7, no. 9 (2016): 415-422.

Bostrom, Nick. Predictions from Philosophy? How Philosophers Could Make Themselves Useful. (1998).

Chau, Robert, Suman Datta, Mark Doczy, Brian Doyle, Ben Jin, Jack Kavalieros, Amlan Majumdar, Matthew Metz and Marko Radosavljevic. “Benchmarking Nanotechnology for High-Performance and Low-Power Logic Transistor Applications.” IEEE Transactions on Nanotechnology 4, no. 2 (2005): 153-158.

Collingridge, David. The Social Control of Technology. New York: St. Martin’s Press, 1980.

Das, Shamik, Alexander J. Gates, Hassen A. Abdu, Garrett S. Rose, Carl A. Picconatto, and James C. Ellenbogen. “Designs for Ultra-Tiny, Special-Purpose Nanoelectronic Circuits.” IEEE Transactions on Circuits and Systems I: Regular Papers 54, no. 11 (2007): 2528–2540.

Feenberg, Andrew. Questioning Technology. London: Routledge, 1999.

Florêncio, Dinei and Cormac Herley. “A Large-Scale Study of Web Password Habits.” In WWW ’07: Proceedings of the 16th International Conference on World Wide Web (2007): 657-666.

Froehlich, Thomas J. “The Foundations of Information Science in Social Epistemology.”  In System Sciences, 1989. Vol. IV: Emerging Technologies and Applications Track, Proceedings of the Twenty-Second Annual Hawaii International Conference, 4 (1989): 306-314.

Fukushima, Masato. “Blade Runner and Memory Devices: Reconsidering the Interrelations between the Body, Technology, and Enhancement.” East Asian Science, Technology and Society 10, no. 1 (2016): 73-91.

Fuller, Steve. “Social Epistemology: Preserving the Integrity of Knowledge About Knowledge.” In Handbook on the Knowledge Economy, edited by David Rooney, Greg Hearn and Abraham Ninan, 67-79. Cheltenham, UK: Edward Elgar, 2005.

Goldwasser, Shafi, Silvio M. Micali and Charles Rackoff. “The Knowledge Complexity of Interactive Proof Systems.” SIAM Journal on Computing 18, no. 1 (1989): 186-208.

Huilgol, Nagraj and Shantesh Hede. “ ‘Nano’: The New Nemesis of Cancer.” Journal of Cancer Research and Therapeutics 2, no. 4 (2006): 186–95.

Ihde, Don. Technics and Praxis. Dordrecht: Reidel, 1979.

Ihde, Don. Technology and the Lifeworld. From Garden to Earth. Bloomington: Indiana University Press, 1990.

Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York: Penguin Books. 2005.

Lévy, Pierre. Becoming Virtual. Reality in the Digital Age. New York: Plenum Trade, 1998.

Marton, Pierre. “Verificationists Versus Realists: The Battle Over Knowability.” Synthese 151, no. 1 (2006): 81-98.

More, Max. “Transhumanism: Towards a Futurist Philosophy.” Extropy, 6 (1990): 6-12.

More, Max. “The Philosophy of Transhumanism.” In The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, edited by Max More and Natasha Vita-More. Oxford: John Wiley & Sons, 2013. doi: 10.1002/9781118555927.ch1.

Murday, J. S., R. W. Siegel, J. Stein, and J. F. Wright. “Translational Nanomedicine: Status Assessment and Opportunities.” Nanomedicine: Nanotechnology, Biology and Medicine 5, no. 3 (2009): 251-273. doi: 10.1016/j.nano.2009.06.001.

Origgi, Gloria. “Is Trust an Epistemological Notion?” Episteme 1, no. 1 (2004): 61-72.

Pagin, Peter. “Knowledge of Proofs.” Topoi 13, no. 2 (1994): 93-100.

Pagin, Peter. “Compositionality, Understanding, and Proofs.” Mind 118, no. 471 (2009): 713-737.

Qureshi, M. Atif, Arjumand Younus and Arslan Ahmed Khan. “Philosophical Survey of Passwords.” International Journal of Computer Science Issues 1 (2009): 8-12.

Sasse, M. Angela, Michelle Steves, Kat Krol, and Dana Chisnell. “The Great Authentication Fatigue – And How to Overcome It.” In Cross-Cultural Design: 6th International Conference, CCD 2014, Held as Part of HCI International 2014, Heraklion, Crete, Greece, June 22-27, 2014, Proceedings, edited by P. L. P. Rau, 228-239. Cham, Switzerland: Springer International Publishing, 2014.

Scalambrino, Frank. Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation. London New York: Rowman & Littlefield International, 2016.

Simmel, Georg.  The Philosophy of Money. London: Routledge and Kegan Paul, 1978.

Simon, Judith. “Trust, Knowledge and Responsibility in Socio-Technical Systems.” University of Vienna and Karlsruhe Institute of Technology, 2013.

Syverson, Paul and Iliano Cervesato. “The Logic of Authentication Protocols.” In Proceeding FOSAD ’00 Revised versions of lectures given during the IFIP WG 1.7 International School on Foundations of Security Analysis and Design on Foundations of Security Analysis and Design: Tutorial Lectures, 63-136. London: Springer-Verlag, 2001.

Williamson, Timothy. Knowledge and its Limits. Oxford University Press on Demand, 2002.

Van Der Meyden, Ron and Thomas Wilke. “Preservation of Epistemic Properties in Security Protocol Implementations.” In Proceedings of the 11th Conference on Theoretical Aspects of Rationality and Knowledge (2007): 212-221.

Vinge, Vernor. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, Proceedings of a Symposium Cosponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute, Westlake, Ohio, March 30-31, 1993. NASA Conference Publication 10129 (1993): 11-22.

Verma, S., K. Vijaysingh and R. Kushwaha. “Nanotechnology: A Review.” In Proceedings of the Emerging Trends in Engineering & Management for Sustainable Development, Jaipur, India, 19–20 February 2016.

Zins, Chaim. “Redefining Information Science: From ‘Information Science’ to ‘Knowledge Science’.” Journal of Documentation 62, no. 4 (2006): 447-461.

Author Information: Joshua Hackett, Purdue University,

Hackett, Joshua. “Funes, Digitized: Borges as a Guide to Fractured Digital Identities.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 1-10.

The PDF of the article gives specific page numbers. Shortlink:


Image credit: Peter Kaminski, via flickr

Thanks to the near-constant writing and publishing of scattered thoughts online, our culture is becoming incapable of forgetting. Now that we are fully ensconced in the digital era, a reexamination of memory and identity is required. What you say is permanent, and this fragments identity rather than solidifying it. Despite his death in 1986, Argentine author Jorge Luis Borges can shed light on the restructuring of identity that comes about through writing and the permanence of memory that writing engenders. Borges’s literary works regularly call our collective understandings of these topics into crisis. Through a reading of his short stories “Borges and I” and “Funes, His Memory,” I will discuss how identity formation operates in a contemporary context, and the ethical implications of these changes.

Writing and Selfhood

Borges laments the written duplication of his identity in the story “Borges and I,” in which he writes in the first person about “the other man.” “The other man” is Borges the writer. The story opens with a confession: “It’s Borges, the other one, that things happen to.”[1] The wording the narrator uses in describing the other man makes two things clear. First, he and the Borges known throughout the literary universe are two separate people. Second, “the other man” has his presence bestowed upon him by others. He cannot talk about Borges the writer’s actual existence. The other man doesn’t really live. He exists in the external, social world as a product of conversations about him. The narrating Borges tells us that he receives news of his counterpart by mail, while he worries that his own life is becoming more and more “mechanical.” He still walks through Buenos Aires, pausing to admire the architecture, but his literary counterpart’s presence slowly drains his life of its vibrant nature.[2] Due to the narrator’s alienation from the writer, he feels removed from his social life, and his lack of social presence creates the sense that he is no longer fully human. The personality that he once claimed as his own has been estranged from him because others now associate him with the literary Frankenstein he has brought into being.

Writing has significantly impacted the ways in which selfhood is conceived. In Orality and Literacy, Walter Ong argues that writing is the tool which makes the dichotomy between a subject’s sense of interiority and exteriority possible. Ong claims that the move from oral culture to literate culture changes our locus of knowledge from sound to vision, and this movement makes interior selfhood possible. This is because sound, Ong argues, is a unifying sense that allows us to be more fully immersed in the world, while vision dissects the world, dividing it into objects of perception and separating them from the perceiving subject.[3] Ong talks about vision as that which demands distinctness (and thus, separation), but makes it clear that auditory phenomena are of great importance even for the subjects of a literate culture. “Knowledge,” he writes, “is ultimately not a fractioning but a unifying phenomenon, a striving for harmony. Without harmony, an interior condition, the psyche is in bad health.”[4] Even in cultures dominated by writing, sound’s invasion of interiority helps create a harmony of self. Often, this need can be attended to by direct conversations with others, bringing their spoken words into the interior consciousness of the literate person. Through interactions with others, we are expected to tie our disparate senses of self into something more cohesive. The relationships which help us define ourselves also help us find unity in our lives. When Borges writes a second Borges into existence, there is no amount of conversation which can fully harmonize the two. The distinction between them is only found on paper. The narrator and his written self have a symbiotic relationship, through which both can develop identities, but their separation cannot be reconciled in conversation. Borges the writer is an exclusively visual figure.

This is only further complicated as others, in conversation with the flesh and blood Borges (who must be considered separate from the Borges who narrates “Borges and I,” since once it is written, that voice belongs to the writer, too), fail to treat the two as separate. For them, there is no conflict between what Borges writes and who Borges is. As more and more of our interactions are written rather than spoken, and we speak our identities less than we write them, reconciliation of these selves becomes a serious challenge. Externalized actions belong to a part of the self which cannot be retrieved by the subject.

Identity and Memory

For Ong, when non-literate cultures think of the self, they think of an entity totally bound up in its surroundings. Questions of identity are best answered by other people. Only other people should care about whether or not you are a good person—if you aren’t, it affects them most directly. In Orality and Literacy, Ong demonstrates this self/world collapse through a discussion of A.R. Luria’s interviews with illiterate workers in the Soviet Union. When asked about themselves, the interviewees primarily focused on their status in the world—how much wheat they were growing, whether they were married or had children, etc.—their character held little interest. “What can I say about my own heart? How can I talk about my character?” one of the men interviewed asked Luria.[5] The written word shows subjects themselves for the first time, turning the self into an object for exploration. When memory is put to paper, the subject cannot think of the world solely as a unity of perceptions and phenomenal experience, because the self is made distinct from the world. The subject is allowed to read its own memories and see the self as another for the first time. Paul Ricoeur analyzes this phenomenon in depth in his aptly titled treatise on identity, Oneself as Another.

Ricoeur talks about identity creation in terms of arranging and rearranging memories into stories. I learn who I am by turning myself into the protagonist in my life’s novel. Given Ong’s understanding of identity, Ricoeur’s reliance on literature makes perfect sense. People have access to literature, which itself grants access to the inner-most thoughts of a narrator who arranges events into a story for others. It is no coincidence that Ricoeur refers to the process of creating an identity through narrative as “emplotment.” This is similar to how oral storytelling functioned, but with the added benefit of being able to tell more and more complex stories with a huge number of original narrators as examples. Literary narrators provide an intensely personal, in-depth model for understanding the self. Novels present these narratives visually. The written word creates the conditions under which the literary form can shape the way in which one thinks of the self.

Furthermore, due to the permanence of the published written word, the plots formed by narrators seem to be static. Once we are able to identify certain patterns and make relevant aesthetic judgments, it begins to appear as though a well-written story could not have unfolded in any other way. The same holds true when one thinks of his or her own life:

The paradox of emplotment is that it inverts the effect of contingency, in the sense of that which could have happened differently or which might not have happened at all, by incorporating it in some way into the effect of necessity or probability exerted by the configuring act. The inversion of the effect of contingency into an effect of necessity is produced at the very core of the event … It only becomes an integral part of the story when understood after the fact, once it is transfigured by the so-to-speak retrograde necessity which proceeds from the temporal totality carried to its term.[6]

When put this way, emplotment sounds more like entrapment, snaring the events of a narrative into a plot which could have only unfolded as it did. Despite our ability to see randomness and contingency in many aspects of our lives, for Ricoeur, the moments which define one’s character for oneself possess a clear fatalism. These notions can change over time, but in the moment of remembering, narrative necessity is dominant.

Ricoeur is not thinking of writers when formulating his thoughts on identity. He has the reader in mind—the person who learns from novels and uses those structures to assemble an identity. The feeling of narrative necessity described above shifts along with one’s own changing sense of identity in time. This is complicated by the act of writing, and further complicated by the publication of those writings. Ricoeur believes that his project helps the subject understand the changes in his or her identity over time, but thanks to our personal memory’s selectivity, we are rarely forced to confront the changes in our self-understanding over time in a visceral manner. That is, until we write them down and are allowed to stare at thoughts that can seem so absurd years after their escape from the mind to publication. Borges’s difficulties with written fragmentation are beyond the scope of Ricoeur’s project. Yet, given the amount of writing being done in the digital sphere, it is time to take Borges’s concerns about identity more seriously.

Internet Selves

The number of written selves is growing daily. Borges’s writerly problem, the paralysis brought on by the identity schism he has produced, is a reality for more people than ever. No longer is it just the Borgeses of the world who feel this disconnect between their written selves and the sort of identity Ricoeur describes. The massive variety of social networking platforms—from Facebook to Twitter to blogs—only compounds Borges’s problem for many of today’s life writers. Each of these platforms serves at least a slightly different purpose for its users. A contemporary Borges might have to think of not only Borges the literary writer, but Borges the Facebook user, Borges the blogger, and so on.

Today’s internet-bound selves do not enter into these modes of identity creation against their wills, and Borges didn’t write books because he wanted to feel like two (or more) different people. When describing Borges’s writing, Borges’s narrator declares, “that literature is my justification.”[7] Likely, before he became a writer, Borges would have dreamed of becoming one. That literature is his justification, because it is the culmination of a story of identity started long ago. To quote Ricoeur: “Identification with heroic figures clearly displays this otherness assumed as one’s own, but this is already latent in the identification of values which make us place a “cause” above our own survival. An element of loyalty is thus incorporated into character and makes it turn toward fidelity, hence toward maintaining the self.”[8] In writing, this maintenance often results in an unexpected fracturing. Borges feels it when he says “I recognize myself less in [Borges’s] books than in many others, or in the tedious strumming of a guitar.”[9] He goes so far as to say that Borges’s writing no longer even belongs to him, but now must be placed in the realm of language itself, or to the literary tradition which now claims it. His loyalty and fidelity to himself splinters off into a difficult, alienating loyalty to Borges the writer, dooming him “utterly and inevitably to oblivion.”[10]

This, on the surface, does not seem to be radically different from the various personae one must create at work, with family, and with friends, but those masks are left in their settings and need not follow the wearer around the immortalizing internet. In our everyday lives, we are allowed to forget the personae that fracture our identities and create a sense of wholeness in each moment of narration. Through the joint processes of remembering and forgetting, the subject creates an unstable unity within the present moment. Writing, on the other hand, makes an object of memory and denies us the ability to completely disregard past thoughts and actions.

When one goes back and re-reads previous narratives of self, those memories are made present in an often jarring way. People typically have a vague understanding of what they thought about themselves and the world at the age of sixteen, but they need not be visually confronted by those thoughts unless the thoughts were put to writing and published. The process of narrating identity turns our more regrettable youthful narratives into footnotes to be incorporated into the newer, supposedly better narratives of self formed in the present. When one is able to actually see these thoughts online, they are not easily reincorporated into a self. This happens partly as a result of vision’s demand for distinctions, but the issues run deeper.

In the case of online writing, these attempts at forming an identity are visible to large numbers of people and lack the intimacy of remembering. Anyone with access to these micro-publications can interpret them in any number of ways, thus separating these thoughts from our ability to recollect them as exclusively our own. Their public nature makes them a part of public, not personal, memory. Much like Borges’s work, each Twitter post enters the digital sphere and becomes a part of the broader internet tradition. Whether I like it or not, my Facebook page belongs to a digital culture more than it belongs to me. The words on the screen are separated from me the instant they are published. At least Borges’s contribution to the literary tradition was groundbreaking; most people writing their identities in the digital sphere are left detached from a banal online presence that becomes permanently associated with their name, if not their sense of self.

Perfect Memory

In another of Borges’s short stories, “Funes, His Memory,” we see what happens when memory becomes perfect. While the link to internet culture here seems a bit tenuous—not least because Funes’s perfect memory is necessarily coupled with perfect perception—the apparent permanence of all externalized memories has a weaker but comparable effect. Unlike blog posts, which place the writer’s thoughts outside of himself or herself, Funes’s memory does not require externalization. However, our only access to Funes’s memory is through writing, and the narrator routinely uses external allusions to describe Funes’s memory. The narrator writes Funes’s story as a part of an anthology of scientific essays on Funes,[11] bringing his memory thoroughly into a space meant to be reserved for objectivity.

The narrator expresses confusion that no experiments were done on Funes, even though neither the cinematograph nor the phonograph had yet been invented in Funes’s lifetime. The narrator offers no explanation of why those particular items would have needed to exist to make scientific research on Funes a possibility, but still cannot resist comparing Funes to these technological memory aids. Mediation is necessary because Funes’s subjectivity is incomprehensible to those of us with imperfect memories. For the narrator, to think about Funes is to think in simile and metaphor, relating him to the objects that were designed to capture the world in ways that are not otherwise possible for human subjects. He is the embodiment of the technological advances yet to come. The absolute completeness of Funes’s memory makes the idea of narratives of identity absurd. He doesn’t need shortcuts to figure out who he is; he is the totality of his experiences, which are always available to him.

While Funes saw a world so rich, clear, and distinct that it was “unbearably precise,”[13] he possessed an “extraordinary remoteness”[14] toward others. For Funes, the world appeared with such exquisite detail that he often sat in his room with the lights out in an effort to dull his perceptions. Any exposure to detail that would occupy his mind in the present would also leave each and every speck of perception permanently lodged in his brain. He has too many specifics constantly at his disposal. Darkening his room was the closest thing he could get to forgetting, abstracting, and generalizing.

At one point, Funes becomes frustrated with how many words are required to express certain numbers, making some numeric phrases inefficient. He tries to make a new system of numbers for himself, but his way of assigning names to numbers could scarcely be considered a system at all. He gives new names to each number to eliminate the need for multiple digits. When the narrator attempts to explain to him that this is not a systematic way of doing things, Funes does not understand or care—he has no need for systems because the totality of his existence is constantly at his fingertips.[15] Each moment is so vivid and distinct that everything requires a unique name to describe it. The perpetual specificity of his world makes it impossible to imagine Funes narrating his own existence, or treating others as anything more than incredibly specific objects of perception. His ability to recall every last detail of his experience makes for an impenetrably different experience from our own. To quote the narrator, “I have no idea how many stars he saw in the sky.”[16]

The Death of Creativity

Here we can form the link to the digital realm more clearly. The internet acts as an immediate access point for a huge volume of thoughts, the vast majority of which subjects cannot legitimately claim as their own. Though people may claim to have thought up the ideas they stole from a website, this false ownership is of a different type than the (also potentially false) sense of ownership which comes with an active narrative structuring of thought and identity. While one’s sense of identity may be a mere fictional construct, creating that identity and holding it as one’s own is an integral part of the human experience in a literate culture. Our need to narrate through creative remembering and forgetting allows for the opportunity to say “this is me.”

Funes does not require creativity, because his memory renders it unnecessary. He learns Latin, English, French, and Portuguese from dictionaries, but cannot understand why each language has only a single word for various perceptions. He uses his experiences of dogs as an example. Every particular iteration of a dog that he has ever seen is immediately accessible to Funes, as is every angle from which those dogs were seen, and they all seemed like different animals.[17] At the end of the story, the narrator realizes exactly what that means for his own interactions with Funes. “I was struck by the thought that every word I spoke, every expression of my face or motion of my hand would endure in his implacable memory,” the narrator writes, “I was rendered clumsy by the fear of making pointless gestures.” Then, suddenly, the plot comes to a halt with a final terse statement: “Ireneo Funes died in 1889 of pulmonary congestion.”[18]

In shifting abruptly from his apprehension about being lodged firmly in Funes’s memory to Funes’s death, the narrator expresses the terror that comes from thorough objectification. Not just any objectification, but objectification which one does not control. As stated above, narrating and writing the self are acts of self-objectification, but both are explicitly for our own benefit. Be it for the sake of understanding ourselves or simply trying to appear more important in social interactions, self-objectification takes up a large portion of our time and mental energy. In each of those cases, we are aware that we are the ones expending that time and energy. When it becomes clear that someone else is doing the exact same thing to us, the objectification is petrifying. We become aware that we have little control over the identity which is typically thought of as our own.

This consequence is rarely considered when the subject first attempts to write the self. In fact, the desire to externalize the self is often a primary motivating factor for starting to write. A constantly growing portion of the population uses the internet as a tool to write their own identities, either implicitly or explicitly. However, the looming threat that one will realize the distance between one’s personal and public selves is always present. Digital objects possess both an extreme remoteness and an incredible precision that at once hide and sharpen this threat. Our ability to immerse ourselves in that which is so physically removed from us makes intensely particular objects of everything, replacing much of the narrative process involved in one’s own being with something more like a chain of impulses and perceptions. Just as it does for Funes.

However, it is necessary to point out that Funes’s perfect perception eliminates any need for him to narrate his own identity, while contemporary culture still demands this narration. The closest Funes comes to personal narration is when he attempts to catalog his past experiences, with each of his days reduced to seventy thousand experiences which he would define by numbers.[19] The perfection of his memory is both the condition of possibility for starting the project and the condition of impossibility for its completion. Every last one of his memories is “more detailed, more vivid than our own perception of a physical pleasure or physical torment.”[20] Each moment is so clear and intense that it is impossible to evaluate which of them might be more important than the rest.

The technological mediation of the internet allows us our own (significantly dulled) version of this as we travel to places we never thought we’d go and experience things digitally that our social standing, physical bodies, or morally coded psyches might not allow in a non-digital space. Further, we make ourselves accessible on a grand scale. When we explore digital space, we are encouraged to share our findings for the world to read. Friends and strangers alike can read and critique our thoughts on these previously impossible discoveries. The varied perceptions that the internet allows us to experience all come through the eyes of others who act as our guides and judges.

Ricoeur sees the alienation of the written self clearly. “Just where the work is separated off from the author, its entire being is gathered up in the signification that the other grants to it.”[21] These externalized selves become works, rather than identities, despite the fact that their authors are very likely to identify them as an expression of selfhood. Each incarnation of digital identity that one comes across online remains nearly immediately accessible at all times to anyone with an internet connection, giving them the right to grant being to our written selves. As one grows and ages, the gap between personal narrative identity and public written identity widens. I become estranged from the thoughts I once had, despite their perpetual availability to me and to others.

This is not a problem unique to any one person. The written selves of others are also always potentially present. I see the past of other people as present when I read their old tweets. There is voyeuristic pleasure in making another person present in this reduced state. These thoughts are offered up for interpretation, not of the ideas expressed, but of the person who has expressed them. Instead of encountering others, the constant representation of selfhood by all parties causes me to encounter already objectified characters with whom I interact in impersonal public forums. We become the masters of the identities of others. The written selves that we bring into being remain distinct from the people who wrote them, but the link between these selves remains. If written selves are treated as works of art, then their original authors are as well.

The problem here is in the objectification found in memory, or more accurately, in a lack of forgetting. As more and more of our identities are found exclusively online, we must understand that to forget in the digital sphere is impossible. Being forgotten online is simply to have the self fall into oblivion.[22] Our written selves can only be fully forgotten when they cease to exist; so long as they persist, they allow the past to exist perpetually as an unlived object for examination. When our narrator in “Funes, His Memory” realizes his status as an atemporal object for Funes, he ceases to narrate, allowing that period of time and his presence in it to be forgotten. He chooses oblivion over objectification, but only does so after coming to understand his status as object to Funes. His refusal to write is a refusal to allow the power that Funes had over his identity to be passed along to his future readers.

The Need to Forget

Forgetting can be embraced. A rethinking of the forgotten oblivion in terms of the unthought, the unthinkable, the radically other is required. When our narrator in “Borges and I” tells us that “My life is a point-counterpoint, a kind of fugue, and a falling away—and everything winds up being lost to me, and everything falls into oblivion, or into the hands of the other man,”[23] we cannot read this statement as a declaration of the collapse into non-existence of the narrator or the objects of his perception. We must think of the narrator’s life not in biological terms, but in narrative terms. The forgetting and oblivion found in Borges’s work must be looked at as the falling away of self-knowledge to become a sort of social nullity. When our narrator’s unwritten identity veers toward the void, he maintains his ontological status. Forgetting and oblivion must be linked to the phenomenal appearance of non-objectivity, the alterity that escapes all narratives, but is always hinted at beneath the logos which makes each story possible.

By pushing more of the processes of identity creation into the permanent, digital realm, we are presented with the opportunity to step away and live in a world that routinely forgets who we are and allows us to forget our objectifications of others. The people who are seen as social objects in our everyday lives can simply be seen as wholly other, outside of my world, non-objects. Those people do not matter to my written self, and can thus be seen outside of the constraints of social expectations. For this to be possible, we must learn to disentangle our written selves from our narrative selves, and to allow for others to do the same. Allowing the personal, Ricoeurean sense of self to plunge into obscurity in the moments when I choose not to read or write presents us with the opportunity to eschew selfish action in non-digital spaces. There is less reason than ever for me to present an idealized version of myself in face-to-face interactions with others. This certainly could result in a devolution of moral thought and action in traditional social spaces. In particular, dulling our sense of shame could result in a great deal of honest, unburdened, awful interactions with others in non-digital spaces.

Our sense of respect for the very existence of others would need to be strengthened in order to combat the de-emphasizing of personal identity-based mechanisms for cultivating ethics in a newly digital world. This would require a rethinking of ethical development, one that our culture may be unprepared to undertake at this moment. However, that danger is accompanied by the possibility to accept others as they come to us, rather than project social expectations on to them. The moments in which I break fidelity to my narrative sense of self are the moments in which I am most free to respond to others as they are, not how I believe I ought to respond to them to be in a particular social context.

Instead of battling against the permanence of objectification in the digital world, we can turn into the spin. The digital world can be where identity is petrified in writing, while selfhood is forgotten elsewhere. When our sense of self is permanently present online, taking leave of that rigid structuring of self is more necessary than ever. It is impossible to maintain a fidelity to written selfhood, simply because the way we see ourselves is so easily altered in day-to-day experience. Those day-to-day experiences, hidden away from the public projection of identity, can form the basis of a personal, narrative identity to which we owe no fidelity. A more auditory mode of being in these situations can create a more harmonious existence with others that denies the need for the interiority created by writing, all while holding on to the social gains won by the written word. Therefore, away from my selves written through technological mediation, I am free to find relationships beyond social abstraction. I can find myself in a physical space designated for forgetting. For encountering others not as objects, but as others.

References

Borges, Jorge Luis. The Aleph and Other Stories. Translated by Andrew Hurley. New York: Penguin, 2000.

Borges, Jorge Luis. Fictions. Translated by Andrew Hurley. New York: Penguin, 2000.

Ong, Walter. Orality and Literacy. New York: Routledge, 1982.

Ricoeur, Paul. Oneself as Another. Translated by Kathleen Blamey. Chicago: University of Chicago Press, 1992.

[1] Borges, Jorge Luis. “Borges and I.” The Aleph and Other Stories, 177.

[2] Borges, Jorge Luis. “Borges and I.” The Aleph and Other Stories, 177.

[3] Ong, Walter. Orality and Literacy, 71.

[4] Ibid., 71-72.

[5] Ong, Walter. Orality and Literacy, 54.

[6] Ricoeur, Paul. Oneself as Another, 142.

[7] Borges, Jorge Luis. “Borges and I.” The Aleph and Other Stories, 177.

[8] Ricoeur, Paul. Oneself as Another, 121.

[9] Borges, Jorge Luis. “Borges and I.” The Aleph and Other Stories, 177.

[10] Ibid.

[11] Borges, Jorge Luis. “Funes, His Memory.” Fictions, 91.

[12] Borges, Jorge Luis. “Funes, His Memory.” Fictions, 97.

[13] Ibid., 98.

[14] Ibid., 91.

[15] Borges, Jorge Luis. “Funes, His Memory.” Fictions, 97.

[16] Ibid., 96.

[17] Borges, Jorge Luis. “Funes, His Memory.” Fictions, 98-99.

[18] Borges, Jorge Luis. “Funes, His Memory.” Fictions, 99.

[19] Borges, Jorge Luis. “Funes, His Memory.” Fictions, 98.

[20] Ibid., 99.

[21] Ricoeur, Paul. Oneself as Another, 156.

[22] In Borges’s native Spanish, as well as most Romance languages, the words for forgetting and oblivion are etymologically linked.

[23] Borges, Jorge Luis. “Borges and I.” The Aleph and Other Stories, 178.

Author Information: Robyn Toler, University of Dallas

Toler, Robyn. “The Progress and Technology of City Life.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 78-85.

The PDF of the article gives specific page numbers. Shortlink:



Image credit: Simon & His Camera, via flickr

The Progress and Technology of City Life

The products of today’s technology deserve scrutiny. The mass media and pop culture exert a powerful influence on Americans, young and old alike. Opportunities for quiet reflection are few and far between despite our much-trumpeted age of convenience. Scientific advancement is not necessarily accompanied by wisdom. In fact, in embracing the new and shiny it is easy to cast aside the tested and proven. Sometimes we tear down fences before finding out why they were built; likewise, the alluring products of technology deserve examination before being accepted. The cool, rational consideration this requires is all the harder to muster amid the onslaught of the sensational. Still, prudence suggests that the best course is to take a step back and ponder the choices before us.

Under the influence of the “culture industry,” as described by Theodor Adorno and others, perpetual distraction and artificial consensus crowd out of people’s lives the solitude and individuality required for cultivating critical, independent thought, and the courage to follow their own reasoned convictions. Sensitization to the mechanisms used by the culture industry can help audiences resist them more effectively, and preserve or regain an authentic experience and view of life. Some view technological advancements as unqualified goods by virtue of their nature as modern and scientific; however, the gains produced by these technologies bring their own attendant complications, such as compromised privacy, continuous availability to the workplace, and the stress of an externally imposed life rhythm over a natural, personal ebb and flow of work and leisure.

This article challenges the argument that technological advances have made work easier, created more time for leisure, decreased stress, increased satisfaction in relationships, simplified tasks, and made jobs less time consuming, resulting in a net benefit to lived experience.

While people in rural as well as urban locations can easily be involved with technology in many parts of the world, including being connected to the internet, city life has some clear contrasts to country life. Population density is higher in the city. The environment is noisier. Traffic, construction equipment, and the many people in close proximity all contribute to the volume. Limited green spaces reduce exposure to a variety of natural features like plants, birds, and bodies of water. The city is also filled with opportunities to interface with technology. Subway tickets, toll tags, video displays, elevators and escalators, point of sale terminals, smart phones, identification badges for areas with controlled access, passports, and games are just a few of the high-tech items an average person deals with in a normal day in the city.

Examining the Western philosophical tradition, especially from Kant onward, Adorno wrote on a wide range of topics including music and literary criticism, aesthetics, mass culture, Hegel, existentialism, sociology, epistemology, and metaphysics; however, his work on what he termed “The Culture Industry” is especially pertinent to understanding the dynamics of life in the city. In his article “How to Look at Television,” from The Culture Industry, Adorno reveals his thoughts on the influence of popular culture. He warned in 1957, soon after the advent of television, that it can produce intellectual passivity and gullibility (166). Current consumers of internet entertainment should heed his admonition, guard their powers of reason, and be wary of technology’s ability to hypnotize and immobilize (cf. Reider 2015).

Boredom, unlike the hypnotic effect Adorno warned against, is not only an unavoidable part of life but also the wellspring of creativity. Overscheduling, the avoidance of monotony at all costs, robs potential artists, poets, scientists, and inventors of their motivation to generate plans and projects. Boredom has developed a bad reputation as a companion to depression and vice and a precursor to mischief; but “research suggests that falling into a numbed trance allows the brain to recast the outside world in ways that can be productive and creative at least as often as they are disruptive” (Carey 2008). As another researcher indicated,

When children have nothing to do now, they immediately switch on the TV, the computer, the phone or some kind of screen. The time they spend on these things has increased. But children need to have stand-and-stare time, time imagining and pursuing their own [emphasis added] thinking processes or assimilating their experiences through play or just observing the world around them. [It is this sort of thing that stimulates the imagination while the screen] tends to short circuit that process and the development of creative capacity (Richardson 2013, 1013).

Some would argue that “switching on the TV” is “doing something”; however, Richardson asserts that imagining, observing, and mentally processing experiences are more valuable.

Leisure and the Workweek

Max Gunther’s The Weekenders takes an amusing yet probing look at the leisure time of Americans. It is particularly interesting to note that this book was published in 1964. While some of the pastimes available have changed, human beings are mostly the same. One of the most distinctive features of city life is scheduling. Buses run on a schedule. School bells, business meetings, and garden clubs stay on schedule so their participants can meet their next obligations. The use of leisure time, and how it is incorporated into schedules, is especially revealing. Insights can be gained by studying the movement from an organic life rhythm to an arbitrarily imposed “five days on, two off” schedule, the perceived pressure to be productive during hours away from one’s paid employment, and the tendency to be connected continuously to one’s work through the technological mediation of devices such as smart phones (cf. Drain and Strong 2015).

People in the pre-internet years tended to look at their leisure time, primarily the weekend, as wholly separated and different from the workweek. Of course, there were always the workaholics, but as a national trend, the weekend seemed different. They wore different clothing and participated in different activities, all with a different attitude. Sixty-two hours were partitioned off from “work” to be spent in “leisure.” Divisions during the week between different professions were blurred on the weekend, and everyone, except that unfortunate segment whose businesses hummed on throughout the weekend, took up similar pursuits. “[City dwellers] can no longer work and play according to the rhythms of personal mood or need but are all bound to the same gigantic rhythm: five days on, two off” (Gunther, 10-12; cf. Ellul 1964; cf. Kok 2015). While one would expect that all this “leisure time” provided by the efficiency of industrialization would lead to a slower pace conducive to relaxation, quite the contrary seems to be the case.

Families plunged into furious activity on those days ostensibly set aside for leisure. The weekend was by and for the middle class. Ads were aimed almost exclusively at them. Students were weekenders in training. Sports, play, eating and drinking, cultural arts, church, and civic volunteering all took their share of available time. Although these activities sound pleasant, the real result was a vague insecurity and bewildering Monday fatigue. As Gunther appropriately pondered, it is not clear whether the fatigue was generated by the energy expended in reaching goals or by pent-up, unrelieved tension (Gunther 13-15).

Travel also occupied the weekenders of the 60s. Weekend trips, day trips, outings to events and places of interest, and visits with friends vied for attention. This may have been genuine curiosity about the world and fellowship with neighbors and loved ones, or something else. All that travel and dining out was expensive, even back then. Aggressive driving increased on the weekends, too. It is unclear what drove this restlessness, what inner devil goaded those mid-century weekenders, what they were so desperately seeking. Yet, it is clear that there were high expectations for leisure time, and somehow despite all the recreation, those two days off frequently disappointed (Gunther 16, 21). Technological advances have continued, but the expectations and restlessness do not seem to have abated.

Close and So Far Away

Even though residents in the city are in close proximity to one another, the trend toward social media and away from direct personal interaction has grown. Relationships in the city are heavily influenced by technological mediation. Perhaps limited access to natural settings pushes city dwellers indoors, and into virtual spaces. Sites designed to facilitate dating, networking, creative pursuits, and games, among other activities, have sprung up. Online, the "surf" is always good because someone else is constantly adding new, tantalizing information. It is the epitome of content "crowdsourcing." Social networking sites provide crowds of people who create content, usually out of their own experiences, for the entertainment of others as they browse. Potential romantic interests, job openings, and decorating ideas are perpetually at the ready, with new ones popping up moment by moment. This makes it difficult to break away. Suspense and expectation create enticement. Every genre of social media has its niche and its devotees, but perhaps the most pervasive and invasive of them all is Facebook, with its plethora of "friends." It is ironic that in cities, with their high population density, online, virtual "meetings" are so popular.

Begun as a forum for college students, this social media giant has grown to include anyone who wants to join, with a non-stop, real-time feed of "Status Updates." Founded at Harvard by Mark Zuckerberg and some college classmates, the site had over 1,200 registrants within 24 hours of its launch. Private investors became involved and the company expanded. Facebook acquired a feed aggregator and then the photo-sharing site Instagram. The company made its initial public offering (IPO) in 2012 at a valuation of $104 billion. A new search feature was rolled out in 2013, and changes continue, including an opt-out feature that makes it the user's responsibility to raise security settings from their lower, default positions. Today one in seven people is a member (Zeevi 2013). The desire to know instantly about the next update to appear in the feed—a great picture, word of something earthshaking in a "friend's" life, a joke, a political rallying cry—can be addictive. In cities large and small, people often observe each other online in addition to, or instead of, from their front porches.

The moment-to-moment observation of others' activities through monitoring their posts is not the only aspect of social media that makes it enticing, though. The ability to stay connected with all the people you have ever known—provided they are on Facebook—is a big draw. Consider the evolution of the address book. Years ago a small booklet next to the telephone held all the names, addresses, and phone numbers of the people one most frequently called or corresponded with. As social circles expanded and families became more mobile, address books expanded as well. The inconvenience of constantly marking out and erasing the information of friends and relatives who moved led to loose-leaf notebooks and index card files. The Rolodex system with its easily interchangeable cards was born, facilitating an ever-growing collection of constantly changing contact information.

Now leap ahead to the electronic version of the address book, the Palm Pilot. It was a utilitarian miracle and a status symbol in one! Then, just as carrying an address book gadget plus a cellular phone became tiresome, the technology merged to produce one convenient device to do both jobs—the smart phone. Cloud data storage debuted to protect data from hardware problems and to make information accessible anywhere with connectivity, cellular or Wi-Fi. Mail underwent a similar metamorphosis, from postal mail ("snail mail," referencing its comparatively slow delivery time) to electronically delivered "email," to web-based systems like Gmail. Now networking platforms like Facebook, and LinkedIn for professionals, are widening the messaging options further. Contacts are accumulated over time, surviving any number of physical moves by users, and stored remotely for ubiquitous access. For better or worse, the days of hunting for a scrap of paper with someone's number on it are over.

Even though city dwellers have all those connections with all those people, and they could be interacting face to face with those nearby, they all too often choose online forums over personal meetings. A large segment of their connectivity is online instead of in person, and it has a negative side. Virtual personalities allow a spectrum of falsity ranging from simply curating one's image to advantage, to manufacturing a fully fake identity. The self-absorbed use Facebook to promote themselves, not to connect with others. Furthermore, instead of enhancing the ability to read social cues and body language, excessive time online erodes these crucial social skills (Kiesbye 55, 58-9). Facebook actually interferes with friendships rather than strengthening them. It seems that social needs would be more effectively met by simply arranging to meet in person, in the city environment with its physical proximity and variety of venues, instead of retreating behind a computerized mediator.

Some cite city crime statistics as a reason to retreat from malls, parks, and other public places. But new categories of crime and vice have arisen or proliferated on the internet. Somewhat, though not altogether, different from face-to-face encounters on sidewalks and in elevators, it is difficult to know with whom you are dealing on social media. Despite assurances by site administrators, malevolent users can easily misrepresent themselves, luring the young and naïve into dangerous, sometimes fatal, encounters. Teens' desire for premature autonomy and willingness to lie to their parents in order to sneak off and meet someone surreptitiously complete the potentially tragic scenario (Luna 196-8). "Sexting" over cell phones, and now "sextortion," have entered the picture. Teenagers are particularly vulnerable to this kind of deception. They are notoriously "easy to intimidate, and embarrassed to tell their parents" when their judgment proves poor and plans go awry (Luna 196-7). A compromising photo a young person carelessly snaps of himself (or that a companion captures digitally) can be parlayed by an online predator into a self-incriminating file of pornography.

Privacy and anonymity can be viewed in two ways in the city. There can be anonymity in a crowd, yet we are captured on camera throughout the day at businesses, traffic lights, and elsewhere. With the exception of satellite surveillance, that type of tracking is rare outside the city. It is difficult to estimate how much we modify our behavior because of this "watching." City life also presents an opportunity for deception and abuse in privacy breaches. Privacy issues in public, in private, and on social networking sites concern politicians and culture critics. The sheer quantity of pictures posted on Facebook is a valuable source of data for anyone trying to match faces with identities.

In a study led by Alessandro Acquisti of Carnegie Mellon University, information from social media sites including Facebook was combined easily with cloud computing and facial recognition software to identify students on a campus (Luna 121). Whether or not students object to this, their parents may find it disconcerting that the children they have just released into the next phase of their growing independence can be surveilled in this way. Citizens who value their privacy will have a difficult time maintaining it in the age of Facebook, whether or not they are or ever have been subscribers. Friend lists yield copious amounts of information, and trails remain to anyone mentioned or pictured. Even non-subscribers can gain access through search engines (Luna 204-5).

Those who think they are too old or too cautious to become crime victims should consider how their online personas could still have negative repercussions for them. Potential employers and college admissions personnel routinely check their applicants' presence on social networking sites. Students are careless about their passwords, allowing "friends" to make embarrassing posts in their names. Employers and administrators do not know or care who created the posts; when they see information that makes a user look bad, they are likely to move on to more appealing candidates to fill their available positions (Luna 199). Facebook does not cause people to lose opportunities, but it guarantees that any mistake a user makes will be seen by many people.

If predators, lowered productivity, narcissism, and shortened attention spans are not enough incentive to reconsider one's entanglement with social media, here is a puzzle to ponder: anyone actively attempting to conceal his identity or whereabouts will have a difficult time in the age of social media. This is a coin with two sides. While it seems appealing for local law enforcement and federal Homeland Security to be able to track and locate a suspect, honest citizens who just want to remain anonymous may rightly feel violated knowing that their every traffic decision, subway stop, casual comment, and convenience store errand is at least captured, and possibly monitored in real time. Movies and television shows like Fox's popular series 24 demonstrate the use of this technology and promote its acceptance, even demand for it. Before capitulating to the easy solution of simply watching everybody all the time, think about whether that kind of scrutiny is really desirable or acceptable.

A Perfect Day

To compare today's city life, full of electronic gadgets and software, with a time before computers, or even electricity, had reached much of rural America, consider the following poem, which depicts a different way of life. Neither electronic entertainment nor boredom would have intruded on the grandmother it portrays. While physically busy, she would have had more opportunity for contemplation than most modern city dwellers.

Perfect Day

Grandmother, on a winter’s day,
Milked the cows and fed them hay;
Slopped the hogs, saddled the mule,
And got the children off to school.
Did a washing, mopped the floors,
Washed the windows and did some chores,
Cooked a dish of home-dried fruit,
Pressed her husband’s Sunday suit.
Swept the parlor, made the bed,
Baked a dozen loaves of bread,
Split some firewood and lugged it in
Enough to fill the kitchen bin.
Cleaned the lamps and put in oil,
Stewed some apples she thought might spoil,
Churned the butter, baked a cake,
Then exclaimed, "For mercy sake,
The calves have got out of the pen!"
Went out, and chased them in again.
Gathered the eggs and locked the stable,
Back to the house and set the table,
Cooked a supper that was delicious,
And afterward washed all the dishes.
Fed the cat, and sprinkled the clothes,
Mended a basket full of hose,
Then opened the organ and began to play:
“When you come to the end of a perfect day!”—Author Unknown (Kaetler 54-5).

Cooking, cleaning, farm chores, organization, time management, nurturing behaviors, and aesthetics are all on display in this narration. Facebook would have been a shallow substitute for the creative work accomplished on this day, leaving the industrious grandmother with the same vague dissatisfaction as Gunther’s “Weekenders,” mentioned earlier.

Technological progress is here to stay, with or without a given individual's active participation; but users can take steps to stay in control of their data and their minds. The advantages of dialing back technology are delightfully and creatively narrated in the book Better Off. In it, author Eric Brende chronicles the lifestyle journey he and his wife made in search of the minimal amount of electronics and machinery necessary to optimize life for them. After spending an extended time living in a rural community that rejected almost all labor-saving devices, they concluded that they were happier and "better off" without most of the expensive, encumbering accouterments of 21st-century life most of us take for granted. He ends his book by saying:

… in all cases [technology] must serve our needs, not the reverse, and we must determine these needs before considering the needs for technology. The willingness and the wisdom to do so may be the hardest ingredients to come by in this frenetic age. Perhaps what is needed most of all, then, are conditions favorable to them: quiet around us, quiet inside us, quiet born of sustained meditation and introspection. We must set aside time for it, in our churches, in our studies, in our hearts. Only when we have met this last requisite, I suspect, will technology yield its power and become a helpful handservant (Brende 232-3).

Brende and his wife found the life balance that suited them away from the city before rejoining it. His focus on quiet and control is apt.

Adorno stated that modern mass culture has been transformed "into a medium of undreamed of psychological control. The repetitiveness, the selfsameness, and the ubiquity of modern mass culture tend to make for automatized reactions and to weaken the forces of individual resistance" (Adorno 2006, 160; cf. Guizzo 2015; Scalambrino 2015). Preserving solitude, concentration, independent thought, and courage is worth the effort it takes to resist the popular culture. The culture industry will continue to usurp the territory of life wherever it is allowed to, within or outside of the city; but vigilance can give it boundaries.


Adorno, Theodor W. The Culture Industry: Selected Essays on Mass Culture. London: Routledge, 2006.

Brende, Eric. Better Off: Flipping the Switch on Technology. New York: Harpercollins Publishers, 2004.

Carey, Benedict. “You’re Bored But Your Brain is Tuned In.” New York Times August 5, 2008. (accessed May 13, 2014).

Drain, Chris, and Richard Charles Strong. “Situated Mediation and Technological Reflexivity: Smartphones, Extended Memory, and Limits of Cognitive Enhancement.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 187-197. London: Rowman & Littlefield International, 2015.

Ellul, Jacques. The Technological Society. Translated by J. Wilkinson. New York: Vintage Books, 1964.

Guizzo, Danielle. “The Biopolitics of the Female: Constituting Gendered Subjects through Technology.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 145-155. London: Rowman & Littlefield International, 2015.

Gunther, Max. The Weekenders. Philadelphia, Pennsylvania: J. B. Lippincott Company, 1964.

Kiesbye, Stefan, editor. Are Social Networking Sites Harmful? Detroit, Michigan: Greenhaven Press, 2011.

Kok, Arthur. “Labor and Technology: Kant, Marx, and the Critique of Instrumental Reason Vanishing Subject. Becoming Who You Cybernetically Are.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 137-144. London: Rowman & Littlefield International, 2015.

Firestone, Lisa. “Are You Present for Your Children?” Sussex Publishers, LLC. May 5, 2014. (accessed May 10, 2014).

Luna, J. J. How to Be Invisible. New York: Thomas Dunne Books. 2012.

Reider, Patrick. “The Internet and Existentialism: Kierkegaardian and Hegelian Insights.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 59-69. London: Rowman & Littlefield International, 2015.

Richardson, Hannah. “Children Should Be Allowed to Get Bored, Expert Says.” BBC. March 22, 2013. (accessed May 14, 2014).

Scalambrino, Frank. “What Control? Life at the Limits of Power Expression.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 101-111. London: Rowman & Littlefield International, 2015.

“Erma Bombeck’s Regrets: A Dying Erma Bombeck Penned a List of Misprioritizations She’d Come to Regret.” September 29, 2009. (accessed May 13, 2014).

“Grandma’s Wash Day: Description of How Laundry Was Done in Bygone Days.” August 23, 2008. (accessed May 13, 2014).

Unknown. Australian Media Pty. Ltd. 2000. (accessed May 13, 2014).

Zeevi, Daniel. “The Ultimate History of Facebook [INFOGRAPHIC].” SocialMediaToday. February 21, 2013. (accessed May 13, 2014).

Zendaya, Sheryl Burk. Between U and Me. New York, New York: Disney-Hyperion Books, 2013.

Zuidervaart, Lambert, “Theodor W. Adorno.” The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), edited by Edward N. Zalta. (accessed May 13, 2014).

Author Information: Francesca Malloggi, University of Amsterdam

Malloggi, Francesca. “The Value of Privacy for Social Relationships.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 68-77.

The PDF of the article gives specific page numbers. Shortlink:


Image credit: Daniel R. Blume, via flickr

This article discusses the relations between privacy, the public interest, and democratic ideals. Specifically, in regard to privacy, I discuss the value of privacy for social relationships. This extends beyond the privacy of the individual and includes keeping social relationships themselves private, beyond the more individualistic understanding of privacy as secluding one’s self and activities from public awareness. In regard to the public interest, we live in a world where information is gathered about individuals through their technologically-mediated relation to the environment. The relation to democratic ideals is twofold. First, it seems to follow from a lack of privacy that citizens may lose the capacity for civil disobedience and possibly the freedom to pursue happiness. Second, though groups constituted by social relations may require privacy, such privacy should not be extended to surveillance activities of a political group, for example those accomplished through acquiring data resulting from technological mediation.

I first present the positions of James Rachels and Charles Fried to illustrate the value of privacy in establishing intimate relationships. Combining these positions with the perspective on privacy for which Roessler and Mokrosinska argue allows us to examine both the value of privacy for social relationships and democratic ideals, since they clearly illustrate why privacy should be defended for the sake of social relations. The possibility of defending privacy as a civil liberty, I suggest, is of fundamental importance for the opportunity to exercise our rights as citizens in a democratic state. Yet, though I argue for the privacy of groups, I conclude by indicating the danger of extending privacy to the State. This is counter to the position for which privacy theorist Alan Westin advocates. For example, I invoke instances regarding technological mediation in which the State seems already to have invaded the privacy of citizens, at both the individual and the social level. Such invasions lead to the minimization of trust and the constraint of identity.

Controlling Access

Since at least the establishment of social media, use of the internet has moved from a situation of anonymous persons to one in which users are specifiable and univocally identifiable. Though this may result in a more “personalized” experience of the internet, it has also had a significant impact on privacy debates. Early authors on privacy issues were mainly concerned with the importance of privacy on the individual level, that is, in its significance for the person. Increasingly, however, and as is my main concern in this article, authors recognize the fundamental role of privacy for the social dimension. For example, as Roessler and Mokrosinska note,

In contemporary privacy scholarship, the importance of privacy has mostly been justified by the individual interests and rights it protects, the most important of which are individual freedom and autonomy in liberal-democratic societies. From this perspective it is the autonomy of individuals that is at stake in protecting the privacy of personal data and communication in the digital era. This perspective, however, seems insufficient to account for many other concerns raised in the debates on privacy-invasive technologies. With ever greater frequency, privacy-invasive technologies have been argued to endanger not only individual interests but also to affect society and social life more generally (Roessler and Mokrosinska 2015, 2).

In this way, we may come to see that a violation of privacy on a social level may undermine the trust among citizens and the possibility of a democratic society.[1] Accordingly, I aim to show that a failure to protect the sphere of social relationships may be a failure to defend a democratic state.

Because privacy may allow an individual to develop and flourish, privacy may be seen as related to autonomy, dignity, and individual integrity. Privacy creates the space for private thoughts so that an individual may be free to engage in self-exploration in a process of discovering and determining identity. I suggest autonomy is not automatic; rather, it requires exercise through social relationships in order to shape one’s personality and opinions as a citizen. Taking Roessler’s broad definition of privacy, we read that “something counts as private if one can oneself control the access to this something”.[2] Of course this includes control over emotional, in addition to intellectual, states. Therefore, I need to have the possibility to exercise control and to be protected from unwanted access[3] not only in the context of the individual sphere but also in the context of social relationships. As Charles Fried reminds us, “of the various thoughts that appear in one’s mind, discretion in selecting which of these to present, and in which contexts, is central to an individual’s ability to be a certain kind of person” (Schoeman 1984, 22).

Similarly, Robert Gerstein’s work shows that “intimate communication, and intimate relationships generally, involves the parties as participants and not as observers. However, involvement as a participant can be transformed by becoming aware that one is being observed and judged” (Schoeman 1984, 23). Finally, the above insights from Gerstein and Fried may be seen extended into the political dimension by the work of James Rachels. Rachels’ writings about privacy[4] brought attention to aspects of the discussion which may have been previously underestimated. He focused on privacy as the institution that enables an individual to carry on his personal interests, protecting people from embarrassment and harassment. This includes the privacy we rely on to carry out “our business,” and thereby the privacy we rely on to protect us from harmful interference with our professional life.

Privacy and Social Norms

Though it may be possible to publicly justify one’s actions and identity in terms of social norms, I take the position, on the one hand, that a person’s actions also involve the expression of emotions in addition to adherence to social norms, and, on the other hand, that identity is not univocal and may relate more to multi-dimensional aspects of the self than to social norms across diverse contexts. For example, Rachels defines his idea of privacy as “the ability to control who has access to us and to information about us, and our ability to create and maintain different sorts of social relationships with different people”.[5] Though this definition is a helpful way to focus autonomy, dignity, and integrity in regard to privacy onto the social dimension, it may overestimate the role of social norms. That is to say, Rachels stresses how, according to each different social relationship, there are “fairly definite patterns of behavior that we associate to them”.[6] The example of the father and businessman, supposedly joyful and thoughtful with his child, respectful with his mother, playful with his friends,[7] and a good leader for his workers, thus sheds some light on at least two points. First, Rachels says we should respect different patterns of behavior in order to satisfy our roles in different kinds of relationships; second, he observes, such different patterns are neither symptoms of inauthenticity nor inconsistencies of the person.[8] In other words, behaving in many different ways does not mean wearing ‘different masks.’[9] Rather, we naturally create some room to show the appropriate aspects of our personality in accord with the context. This means our actions and identity constitute a selective disclosure of information about ourselves.

In this way, it seems the concept of norms is not enough to grasp what ‘privacy’ is, since privacy is a fundamental need for every individual, one which allows the person to create meaningful and important relationships. Moreover, a multi-dimensional personality seems more primary in regard to autonomy and dignity than conformity to social norms. That is to say, persons are multi-dimensional despite the existence of social norms. Privacy allows individuals to maintain different social roles in different social settings; if it were as simple as participating in one norm or another, then perhaps there would be less need for privacy. Though privacy allows people to conform to different social norms, and the identity of a person may be understood through the person’s participation in social norms, were this simply the case there would be no instances in which a person finds social norms oppressive.

Thus, it seems we can still have a level of privacy despite the presence of social norms and understand an oppressive social context as an example of people acting in accord with the more private aspects of a multi-dimensional personality without social acceptance.

Privacy, Surveillance, Technological Mediation and Trust

A radical analysis of the right of privacy, exploring the moral foundation of this concept, has been advanced by Charles Fried. He takes as enlightening the example of applying personal monitoring to probation and parole.[10] Asking whether a person would choose to remain in prison or to live in the world under constant surveillance can help us to see the fundamental role that privacy plays in our lives and social relationships. Since the use of monitoring seems to be justified by the release of the person, the question remains whether such surveillance is compatible with personal dignity.

Through the violation of privacy brought on by such surveillance, Fried wants to bring to our attention the notion that privacy has more than instrumental value for pursuing one’s interests; rather, privacy has an intrinsic significance for us. He argues that privacy is more than a simple means, but “it is necessarily related to ends and relations of the most fundamental sort: respect, love, friendships and trust”.[11] It may be the case, then, according to Fried, that without privacy social relationships would not be possible, since we would not consider ourselves free to love, free to be a friend, as well as free to be the object of love and friendship. “To make clear the necessity of privacy as a context for respect, love, friendship and trust is to bring out also why a threat to privacy seems to threaten our very integrity as persons.”[12]

Fried notes, “trust is an attitude of expectation about another person.” [13] He examines the example of love. In the case of love, there is a “spontaneous relinquishment”[14] of constraints on the rights of Others. Yet, on the one hand, Fried depicts trust and surveillance as incompatible. “Trust, like love and friendship, is in its central sense a relation: it is reciprocal.” Therefore, he characterizes the relation in which individuals are under surveillance noting, “We do not trust them, and they have no reason to trust us in the full sense of a relationship of mutual expectation, for our posture towards them is not one of cooperative mutual forbearance but of defensive watchfulness.” [15] On the other hand, according to Fried, it is the relinquishment of each other’s rights which conditions the intimacy of social relations.

Extending Fried’s argument, it appears that privacy is not only important for the development of our personality; impinging on privacy is also an attack on trust and, consequently, an attack on autonomy, which is why the state of increased public surveillance should be resisted (cf. Scalambrino 2015). While Fried takes as an example the case of personal monitoring being applied to probation and parole, I maintain that such an example should not be taken as metaphorically as it initially appears. In the post-Snowden era we know that the privacy of citizens has been disregarded, for example, by the US government and by the European Union as well. Also, some people are heading in the direction of voluntarily monitoring themselves. The difference between these two cases seems to be that there is public self-awareness regarding the lack of privacy resulting from the technological mediation of monitored people on probation. They know they are watched and judged. Yet, in the case of “free” people, there is less public self-awareness.

Looking at the Microsoft project called MyLifeBits, we can see that researcher Gordon Bell found a way to put every moment of human life under surveillance and give people a lifetime store from which to ostensibly retrieve a life in one technologically-mediated dimension. Using a small bracelet device, the program can store every picture we take, every web page or article we read, letters, cards, books, movies, and so on. It can record and store every phone call as well as every conversation we have. This is the power of technological mediation. Similarly, a project called Lifelog can track our social, entertainment, and physical activities, spanning our social relationships. Indeed, you can ask your partner to connect the bracelets through which the project technologically mediates your relations so that you will always have access to their information as well. People can have access to all the places in which their partners have been, listen to the conversations their partners have had, and even monitor their meals, how much time their partners have spent sleeping, and so on.

Such projects are problematic not only for individuals but for social relationships as well, since in order to thrive relationships need to be based on trust and personal freedom, not on control. Even at just the most practical level, as I have shown above, participants in social relationships surveilled through their technologically-mediated relation to others will know that everything they do and say is being recorded. This would affect every kind of relationship, from the most intimate to the most professional. It thus shows that privacy is an integral part of every relationship, and trust depends on it. Moreover, even if the data collected are supposed to be protected, there are numerous examples today in which participants of various social media sites technologically mediating their social relationships have essentially been “blackmailed” by “hackers” (e.g. “Ashley Madison”). As it currently stands, you can check out the statistics of other Lifelog users and see how much people sleep, walk, eat, and talk.

Privacy as Moral Capital

Personal security of privacy should be extended further, beyond the individual, to social relationships. Social relationships are, of course, the foundation of society. The trust with which we engage in social relationships is important not only for the single individual but also for the relations themselves. Thus, it is important, if not crucial, to recognize how privacy is a “moral capital”[16] not only for the individual but also for relationships and society as a whole. On the one hand, relationships are at the core of society; on the other hand, their variable nature shapes the world in which the subject lives.

The authors discussed herein have pointed out how privacy is important for human beings in a context that is predominantly private, within the sphere of the individual. Such views, I think, leave the discussion of the role of privacy incomplete, because recognizing the importance of privacy for relationships themselves allows us to see privacy rights on a broader scale: to see the role of privacy in the public sphere. Social relationships determine many contexts in our daily lives, from the professional sphere to the health care system on which we all rely, as well as the places where we do mundane things but which are still public spaces; privacy is the substratum of every one of them.

The general framework of this section’s discussion is set within the context of informational privacy, specifically as discussed by Roessler and Mokrosinska, and in favor of their view of the private and public spheres as intercommunicative. In the information society, the data we share with a friend, with the health care system, with banks, with websites, and with programs like Lifelog occur in a relational social dynamic whose privacy should be defended. On this account, the social value of privacy has both an intrinsic and an extrinsic meaning, for the individual and for society altogether. The present post-Snowden era is bringing to light the imminent need to protect personal data.

Roessler and Mokrosinska have identified three types of relationships in different spheres: they discuss privacy within intimate relations, professional relationships, and interactions between strangers. Rather than address each of these, I argue here that the importance of privacy can be seen in the context of a group. Specifically, the kind of group I discuss is a “political” group, in which citizens are involved in order to participate in current debates with social consequences. It is precisely at this level that surveillance is dangerous for the maintenance of a democratic society and for the freedom of expression which conditions it. In fact, surveillance in a political situation leads to a kind of “psycho-political metamorphosis,” described, for example, by Reiman,[17] in which the individual only feels free to share in a group as long as he knows that his privacy is protected. Thus, defending privacy for a group means that we preserve an individual’s autonomy.[18]

Now, a potential objection may be that groups are not autonomous, because autonomy accounts for intentional actions, beliefs, and desires, and we can speak of those only in the individual dimension. However, group autonomy can be defended against this objection, because groups create their own reasons, which are not reducible to those of individual members, since social choices are involved.[19] But not every institution has the right to privacy. The idea here is that the right to privacy allows individuals and groups to choose and act in accordance with their own beliefs without being completely accountable. Since privacy in general is the right not to be accountable for personal beliefs, it allows individuals and groups to pursue their interests. Yet an institution should not have the right of privacy, since it must be accountable for the decisions it takes, for the sake of the public interest.[20] A democratic state should allow people not to be ostracized for having certain political inclinations, whereas a public institution should be held accountable.

To illustrate my argument regarding institutions, consider how Snowden has been accused of violating the Espionage Act for disclosing US government secrets. I argue that we should question the relationship between privacy and secrecy and evaluate what kinds of secrets institutions can be allowed to keep from social groups. For example, Snowden’s revelations exposed the secret and undemocratic program of global surveillance called the Five Eyes alliance, established between the USA, Canada, Australia, New Zealand, and the United Kingdom. Furthermore, he disclosed the PRISM surveillance program, through which the US government collects the information of internet users from Google, Yahoo, Apple, Facebook, and Microsoft. PRISM is a global program of surveillance, since every internet user in Europe and the US can be eavesdropped on.

Snowden pointed out that the NSA is hacking civilian infrastructures such as universities, hospitals, and private businesses, as well as private phone conversations, the TVs in our houses, and the cameras of our laptops. PRISM was created as a military program of defense; however, as it stands now, it is inefficient eavesdropping at the expense of innocent people, since it has not prevented any terrorist attacks. Moreover, as of this writing, the NSA has not been able to provide an example of its surveillance dragnet preventing any domestic attack.[21] According to my argument, institutions such as governments should not be allowed to keep such clandestine surveillance secret.

Technological mediation, of course, makes such surveillance easier for institutions. For example, the presence of programs such as Lifelog makes it easier for governments to collect otherwise private data, and without disclosure this may actually be understood as “tricking” people into what amounts to voluntarily buying the very technological devices which will be used to surveil them. For instance, the privacy policy[22] of the new Samsung Smart TV says that it is capable of collecting data about us, such as the TV programs we have watched, purchased, downloaded, or streamed. It is able to connect to Facebook, Twitter, and LinkedIn accounts and to all the applications you have accessed through the SmartTV Panels. It records and stores your clicks on the “Like,” “Dislike,” and “Watch now” buttons. In addition, the Samsung Smart TV has facial recognition and fitness services. So, if you take a picture of yourself, they will know, ironically, who owns the TV for “security purposes.” However, because the fitness service asks you for information about your height, weight, and date of birth, so that it can track your physical exercise, the reality is that they will have a wide spectrum of personal and private surveillance data at their disposal. What is more, Samsung’s disclaimer warns consumers to “be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.”[23] Voice Recognition is found not only on the Samsung Smart TV but also on other devices such as the Moto X, Nexus, Amazon Echo, Microsoft Kinect, and iPhone. The fear is, of course, that we are being surveilled, since these technological devices can record our private conversations and send them to third parties.

Protecting Personal and Collective Decisions

In this section I have shown why privacy is not only important for the individual, as a moral right, but is also socially important. We need the right to privacy because we want to protect our personal as well as our collective decisions. Without privacy protection, individuals and groups may think that their decisions are subject to external pressure, and may therefore feel their autonomy impinged upon. Though the State is a social group, rather than an individual, State privacy cannot be treated the same as that of other groups. We cannot accept Alan Westin’s claim that the State has a right of privacy, because to do so would make the State no longer accountable to its citizens. Individuals have the right of privacy, that is, the right to pursue their own business. People have the right to claim a defense of privacy in all their private and social contexts in order to pursue their interests and social identities in a democratic society.

In other words, a lack of State transparency regarding surveillance practices—such as those exposed by Snowden—seems undemocratic. There can be no civil disobedience or lobbying against State practices unless we know they exist. Thus we may now see how the right to contest the Intelligence Programs and the NSA surveillance program has its basis in the right of privacy of the individual for the sake of private and public relationships, as well as for the sake of the protection of our democratic values. Hence, bringing awareness to the public regarding the presence of such devices technologically-mediating our free time and our social relationships and the surveillance activities associated with them may strengthen our democracy and allow us to criticize practices inconsistent with democratic values.


Allen, Anita L. Uneasy Access: Privacy for Women in a Free Society. Totowa, NJ: Rowman and Littlefield, 1988.

Brunson, Daniel J. “The End of Trust in the Age of Big Data?” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 37-47. London: Rowman & Littlefield International, 2015.

Fried, Charles. An Anatomy of Values: Problems of Personal and Social Choice. Cambridge, MA: Harvard University Press, 1970.

Gavison, Ruth. “Privacy and the Limits of Law.” Yale Law Journal 89 (1980): 421–71.

Inness, Julie. Privacy, Intimacy and Isolation. Oxford: Oxford University Press, 1992.

Kirkpatrick, David. The Facebook Effect: The Real Inside Story of Mark Zuckerberg and the World’s Fastest Growing Company. Virgin Books, 2011.

Rachels, James. “Why Privacy is Important.” Philosophy and Public Affairs 4, no. 4 (1975): 323–33.

Radder, Hans. “Technological Systems and Genuine Public Interests.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 27-37. London: Rowman & Littlefield International, 2015.

Reiman, Jeffrey H. “Driving to the Panopticon: A Philosophical Exploration of the Risks to Privacy Posed by the Information Technology of the Future.” In Privacies: Philosophical Evaluations, edited by Beate Roessler, 194-214. Stanford: Stanford University Press, 2004.

Reiman, Jeffrey H.  “Privacy, Intimacy, and Personhood.” Philosophy & Public Affairs 6, no. 1 (1976): 26–44.

Roessler, Beate. The Value of Privacy. Cambridge, MA: Polity Press, 2005.

Roessler, Beate and Dorota Mokrosinska. Privacy and Social Interaction. Amsterdam, 2013.

Scalambrino, Frank. “The Vanishing Subject. Becoming Who You Cybernetically Are.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 187-197. London: Rowman & Littlefield International, 2015.

Scalambrino, Frank. “From a Statement of Its Vision Toward Thinking into the Desire of a Corporate Daimon.” Social Epistemology Review and Reply Collective 3, no. 10 (2014): 34-39.

Schoeman, Ferdinand D., editor. Philosophical Dimensions of Privacy: An Anthology. Cambridge: Cambridge University Press, 1984.

Schoeman, Ferdinand D. Privacy and Social Freedom. Cambridge: Cambridge University Press, 1992.

Warren, Samuel and Louis Brandeis. “The Right to Privacy.” Harvard Law Review 4 (1890): 193–220.

Westin, Alan F. Privacy and Freedom. New York: Atheneum, 1967.

[1] Cf. Brunson, 2015.

[2] Roessler, B., 2005, 8.

[3] For an extension of this discussion see Gavison, R., 1980: “An individual enjoys perfect privacy when he is completely inaccessible to others.” See also Anita Allen’s 1988 definition: “Personal privacy is a condition of inaccessibility of the person, his or her mental […]”

[4] Rachels, J., 1975.

[5] Ibid, 192.

[6] Ibid, 293.

[7] Rachels, J., 1975, 293.

[8] Ibid, 293.

[9] Ibid, 293.

[10] Fried, C., 1984, 204.

[11] Ibid, 205.

[12] Ibid, 205.

[13] Ibid, 208.

[14] Ibid, 208.

[15] Ibid, 208.

[16] Fried, C., 1984, 208.

[17] Reiman, J., 2004.

[18] Cf. Scalambrino, F., 2015.

[19] Cf. Scalambrino, F., 2014.

[20] Cf. Radder, H., 2015.




Author Information: Frank Scalambrino, University of Akron,

Scalambrino, Frank. “Employees as Sims? The Conflict Between Dignity and Efficiency.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 35-47.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:


Image credit: Aaron Parecki, via flickr

“… that which mediates my life for me, also mediates the existence of other people for me.” —Karl Marx[1]

Today’s technological mediation allows for unprecedented amounts and depths of surveillance. Those who advocate for such surveillance tend to invoke a notion of public safety as justification. On the one hand, if acceptance of being surveilled follows a philosophy, it would seem to be a kind of “greatest good for the greatest number” philosophy. However, it may be the case that the philosophy functions as an after-the-fact excuse, and people are simply willing to accept surveillance so long as they are able to use their technological devices. On the other hand, it is interesting to note that with context shifts in which such a philosophy could no longer justify surveillance, a philosophy of ownership may be the only viable justification for such surveillance. Yet, insofar as we are discussing the freedom of individuals, e.g. “employees,” we should be critical regarding surveillance justified by a philosophy of ownership.

This article seeks to provide a critique of surveillance in situations where surveillance thrives despite the tension between freedom and ownership. Specifically, this article examines the development of workplace surveillance—through technological mediation—from “loss prevention” to “profit protection.” The tension between freedom and ownership in this context may be philosophically characterized as the tension between dignity and efficiency. After describing an actual workplace situation in which a retailer uses technological mediation to surveil employees for the sake of “profit protection,” a critique of surveillance will emerge from a discussion of the notions of efficiency and dignity in relation to freedom. Rather than determine the justification of surveillance through technological mediation in terms of the “justified true belief” of “profit protection,” this article—from the perspective of social epistemology—takes for its point of departure a conception of knowledge in terms of the “social justification of belief” (Rorty, 1979: 170). Hence, the policy recommendations regarding technological mediation with which this article concludes may be understood as developed through social epistemology and a concern for freedom most often associated with existential philosophy.

Employees as Sims?

It is already the case that business owners may use their smartphones to access “real time” audio and video surveillance of their employees. This article considers a retail business with stores in more than one U.S. state; speaking with individuals who have worked under such profit-driven surveillance is illuminating. The retail space in question was small enough to have audio and video surveillance covering the entire premises where employees and customers could interact. One employee described how “the boss” was “on a beach somewhere having a drink” while watching the employee in question work. The “boss” would then periodically call the business to have “middle management” ask this employee why he was doing whatever it was he was doing. The employee described the experience as “stressful.” Further, he described feeling “paranoid,” at times, not knowing for certain how closely he was being surveilled from moment to moment.

The idea of using technology to surveil a workplace is not new. However, the kinds of technology available today allow for unprecedented levels of surveillance. Whereas less technologically-mediated work environments could have justified surveillance in terms of employee safety and loss prevention, e.g. theft and accidental destruction, today’s technologically-mediated workplace allows for greater depths of “micro-managing” through surveillance. What we will see is that despite any negative connotation associated with the notion of “micro-managing,” when understood along a spectrum of “loss prevention” and in conjunction with the technological mediation which allows for it, the use of surveillance for the purpose of micro-managing employees can seem as justifiable as locking the door when you close shop for the night.

Originally, the idea of “loss prevention” centered on monitoring for theft. If setting up video surveillance will deter theft or help you recover lost property after theft, then the calculation seems straightforward enough that the video surveillance of your business is a good investment. Further, if video surveillance helps defend business owners against unwarranted worker compensation claims by employees who were hurt on the job through no fault of the business, then again the calculation seems straightforward enough that the video surveillance of your business is a good investment. In fact, retail businesses often employ an entire “loss prevention” department tasked not only with monitoring video surveillance of the business’s premises but also, often, with appearing as customers among the customers to ensure shoplifters are quickly captured and restrained. From the perspective of a philosophy of ownership, the idea is that you own property which you are offering to sell to others, and if others attempt to take your property without compensating you as you deem appropriate, then it seems straightforward enough that your rights regarding your property have been violated.

Now, the idea of “profit protection” may be understood as an extension of “loss prevention.” Moreover, it should be kept in mind that such “profit protection” would not be possible without today’s technological mediation. “Profit protection” is supposed to refer to the reduction of the preventable loss of profit, and “the preventable loss of profit” refers to actions performed inadvertently or deliberately. Thus, notice how surveillance for the sake of “profit protection” may technically extend beyond theft and accidental destruction of property. In other words, if employees are not performing their job duties in a way that allows for the sale of your property, then the profit which you could have reasonably earned through their labor is lost.

There are a number of ways technological mediation allows for “profit protecting” surveillance. First, just like the popular smartphone applications which allow individuals to monitor their property while away from their homes or apartments, business owners may monitor not only their property but also the individuals tasked with facilitating the sale of their property. Second, a business owner could easily isolate which employees are not performing as efficiently as they should by simply tracking sales. Given a reasonable expectation of sales, whether determined by season and time of day or by the ratio of sales to customer traffic, business owners can determine when their property is not being sold as efficiently as it should be. Lastly, then, business owners may use technology to surveil those particular employees who are working during the times when business operations are not as efficient as they should be. In doing so, business owners could learn what these employees are doing “wrong.”

Notice, if such surveillance is framed as a “teaching opportunity,” then an employer could construe the whole surveillance operation as benevolent and caring, without even needing to mention “profit protection.” However, to whatever extent there would be a calculation involved to justify the use of management time to surveil such employees, then the notion of “profit protection” could be easily revealed as operable, despite denial on the part of the business. In either case, notice how the surveillance of such employees seems to justify such “micro-managing” as questioning sales techniques, and such a technologically-mediated relation to the employee would extend all the way to monitoring what employees say and how they say it. After all, even an employee’s relation to customers, if understood in terms of cybernetics[2] (cf. Scalambrino, 2014 & 2015b) may be quantified in terms of variables which correlate with successful sales. Thus, a business owner may be seen protecting profit by micro-managing the facial expressions, tone of voice, and suggestions made by their employees.

On the one hand, if all this is beginning to sound as if technologically-mediated business may make employee management and relations into a kind of video game (such as, for example, “the Sims”), then you are following the argument of this article.[3] On the other hand, there are three points to keep in mind. First, it would be too cumbersome to conduct such management and relations to employees, as if they were Sims, without technological mediation. Second, notice how framing the micro-management associated with such surveillance in terms of “profit protection” makes the enterprise sound like good (cybernetic) science and a wise business investment. Third, we will consider the question: How does such surveillance and micro-managing affect employees and relate to the constitution of their employee-identity? As we will see, whereas the second point may be rightfully characterized in terms of the efficiency of an employee in regard to the performance of assigned tasks, the third, which we will characterize in terms of the “dignity of the person” who is the employee, is not a simple question to answer. Moreover, as we shall see, the efficiency made possible by technological mediation seems to have tipped the balance in favor of efficiency over dignity.

The Conflict Between Efficiency and Dignity

There are a number of ways to articulate the conflict between efficiency[4] and dignity, and in doing so a distinction may be made between the rationale and the value[5] of such micro-managing and surveillance of employees through technological mediation. Privileging efficiency, it may be argued that the feelings and self-identity of an employee need not be included in the concerns of a reasonable business owner. In this way, it may be said that business owners need not include concerns for employee feelings and self-identity in their rationale for implementing various surveillance and management practices. Yet, insofar as employee feelings and self-identity have value which can be correlated with profit, then it becomes an issue of efficiency to control these variables as much as possible. That is to say, a cost/benefit analysis may be called for in which the impact of such variables on profit could be determined.

Considering profit necessary to sustain a business, a cost/benefit analysis of the appropriate relation to employee dignity can be quite complicated. For the purposes of this article, consider the following possibilities. The value of privileging dignity may run directly counter to “profit protection.” That is to say, venturing into the dimension of surveilling employees to promote various dignity-related psychological features may seem counter-intuitive, not only because a certain amount of disgruntlement may be constitutionally the norm for some individuals but also because it may be difficult to control the cost of sustaining such a workplace environment. Further, it is not immediately clear that surveilling, micro-managing, and subsequently firing an employee for an inability to sustain a profit margin is contrary to the best interest of the employee’s dignity. Whereas it may be more consistent with “profit protection” to screen potential employees for job aptitude, rather than hire individuals and subsequently surveil them for aptitude, determining for individuals that they are not good at performing a task may be seen as providing helpful guidance consistent with respecting their dignity.

The “helpful guidance” framing of firing an employee is reminiscent of the “teaching opportunity” framing of surveillance and micro-management. In other words, though it may seem intuitively beneficial for an employer to appear to its employees as concerned with employee dignity in its various rationales for investing in surveillance and micro-managing, again it seems concern for profit would be the ultimate determining factor in whether the costs associated with maintaining such an appearance to its employees constitutes a good investment for the business. Moreover, on the one hand, it could be construed as a kind of alternative compensation, so business owners could justify keeping larger amounts of profit, e.g. “At our workplace managers will work with you to ensure you love your job.” On the other hand, establishing a workplace in which it is a requirement of employment that employees appear happy at all times may be considered unreasonably oppressive.

Hence, it seems that even if a business were to remain neutral in expressing a rationale for its actions regarding dignity, there may be a spectrum along which businesses cannot help but be placed regarding how they value employee dignity. At the end of the spectrum privileging efficiency would be located automatons, resulting from analyses and established through an investment in future profit; at the end privileging dignity would be autonomous persons, perhaps involved in a “profit-sharing” business.

Autonomy and Self-Awareness: The Scope of Simulation

There are three distinctions, now classic in the history of Western philosophy, which will help articulate the conflict between efficiency and dignity. These distinctions come from the ethics of Immanuel Kant (1724-1804). They are: the “three natural pre-dispositions to the good”; the “principle of ends” (the second formulation of Kant’s famous Categorical Imperative); and the difference between “a person of good morals” and “a morally good person.”[6]

Building on Aristotle’s divisions of the soul, Kant distinguishes between the “animal,” “human,” and “personal” dimensions. Each of these dimensions has a corresponding type of “self-love,” which individuals use to determine self-worth. At the level of animality, self-love is “mechanical” and determined by physical pleasure. Individuals centered on this level determine the value of their existence by how much physical pleasure they experience in life. At the level of humanity, self-love is “comparative.” This is due to the fact that rationality cannot help but determine ratios. Individuals centered on this level determine the value of their existence by comparing aspects of their lives to the lives of others.

Finally, at the level of personality, according to Kant, the “predisposition to personality is the capacity for respect for the moral law as in itself a sufficient incentive of the will.” (Kant, 1960: 34). Thus fully actualized individuals determine their self-worth as “a rational and at the same time an accountable being” (Ibid), and the difference most relevant for our discussion is the sense in which a person has self-respect beyond the natural human tendency to compare oneself with others. In other words, though someone has more money or better possessions than you (cf. Epictetus, 1998: §6), you may value yourself in terms of your disciplined harmony with right living. Insofar as “right living” is meaningful, then its truth and reality precedes an individual’s acceptance of it. That is to say, it is true that touching the hot stovetop will hurt you, prior to your touching it and independent of your beliefs regarding it.

Hence, there are two conclusions to be drawn here. First, “dignity of the person” is meaningful, whether the self-respect associated with it is actualized by individuals or not. Second, “dignity” refers to the self-actualization which corresponds (as we will see more completely in a moment) with the highest natural capacity for living in humans. That is to say, individuals who have not actualized the personal dimension, and thereby self-respect, are individuals who are not living the most excellent life available to humans.

Two brief references to other philosophers may be helpful here for clarification. In regard to the second point, Friedrich Nietzsche’s (1844-1900) statement, “the seal of liberty” is “no longer being ashamed in front of yourself” (1974: 220) need not be understood as a philosophy of “anything goes,” but rather may be understood as indicating liberation from a life of self-shaming in regard to a comparison with the rest of humanity. Further, the first point, above, invokes a classic passage in Plato’s Republic where Socrates notes that rulers (i.e. employers and bosses) “in the precise sense” are people who “care for others” (Plato, 1997: 340d). This is, of course, juxtaposed with the definition of justice offered by Thrasymachus, namely, that “Rulers make laws to their own advantage.” (Ibid: 338c).

The next distinction from Kant is his “principle of ends.” This is the second formulation of his famous “Categorical Imperative,” and it suggests you should act in such a way “that you use humanity, whether in your own person or in the person of another, always at the same time as an end, never merely as a means.” (Kant, 2002: 38). On the one hand, notice how this suggests we should not use others as a means to determine our own self-worth.  On the other hand, it also points to the dignity of persons as ends in themselves. That is to say, the principle of ends suggests a person should not use others in such a way that it is merely for utility. As we will see, for Kant this goes beyond J.S. Mill’s “principle of liberty”[7] in that to treat another person—even a consenting person—merely as a means, and thereby not as a self-respecting person, may be construed as a kind of harm to their person insofar as their ability to self-actualize their personhood is conditioned by their capacity for self-respect.

The final distinction from Kant, then, is the one between “a person of good morals” and “a morally good person” (cf. Scalambrino, 2016c). What is fascinating about this distinction is that it is not in terms of the actual action that the different types of individuals perform. Both persons may perform the same action; however, the latter type of person is motivated in terms of the self-respect of personhood, and the former is motivated in terms of a different pre-disposition to goodness. Notice that because all of the pre-dispositions are “to the good,” it is not in terms of the goodness of the action that its performance should be evaluated. Rather, it is the motivation that determines which performance of the action is better. This will be important for the thesis of this article, as there is no attempt being made to suggest that profit is “not good.”

To synthesize these distinctions from Kant, notice he believes the “morally good person” is freer and is existentially-situated better than the “person of good morals.” Further, he thinks the “morally good person” is living a more excellent life than the “person of good morals,” and all of this is despite the fact that both individuals may be performing the same actions. How is this the case?

Because the three pre-dispositions to the good constitute a hierarchy, in order for an individual to actualize the highest capacity, i.e. for personhood, the existentially-prior capacities must first be actualized.[8] This means “personhood” is a higher excellence than mere “humanity,” and personhood is existentially-situated in a better way, therefore, since the person has a wider horizon of evaluation available to it than in terms of mere humanity. For example, even if someone merely at the level of humanity were hoping for the best means to manipulate others, having a wider horizon of evaluation would provide a wider range of potential justifications, i.e. this may be seen in the attempt to suggest that profit-driven surveillance is somehow for the benefit of the surveilled—when the motivation determining the performance of the action is clearly “profit protection.”

In order to understand how the “morally good person” also lives the better life, a brief reference to Aristotle’s Nicomachean Ethics may be helpful. As Aristotle goes through the various types of life in his search to discover the best life for humans, he notes, “The life of money-making is one undertaken under compulsion, and wealth is evidently not the good we are seeking; for it is merely useful and for the sake of something else” (2009: 1096a5). The idea here is that to ask regarding the natural purpose of human life is to ask what human life is in itself, i.e. as an end for itself and not as a means to be expended for something else. This points directly to the synthesis of Kant’s distinctions as justifying how the “morally good person” lives the better, i.e. the most excellent, life available to humans, in that the natural presence and hierarchical order of the dispositions suggests that life was made to fully actualize itself.[9] To be fully-actualized means to actualize the highest pre-disposition, which is the pre-disposition in which life treats itself as an end in itself, whether in its own person or in that of another, and thereby constitutes the dignity of personhood through its self-respect.[10]

Lastly, notice how the above explication of Kant’s ethics regarding the dignity of personhood may be characterized in terms of “self-awareness” and “autonomy.” The individual who has actualized the capacity for personhood may relate to itself in terms of a greater number of dimensions than the “person of good morals,” who is not performing actions with the full[11] actualization of their self. In this way, the “morally good person,” in expressing the self-respect associated with the dignity of personhood, is more self-aware. Were this a matter of content, then it would be as if age should determine the greatest amount of self-awareness; however, this is a matter of capacity, not content. In a similar way, Kant characterizes the autonomy of an individual not in terms of content but rather in terms of relation (cf. Scalambrino, 2016b).

Thus, it is the “autonomy” of the fully actualized person which makes them freer. According to Kant, the “principle of autonomy” is “The principle of every human will as a will giving universal law through all its maxims [i.e. its code of conduct]” (Kant, 2002: 40). Notice, because both the “person of good morals” and the “morally good person” perform the same action, it may be said that they are following the same “law.” However, it is not the following of the law but the relation to the law when following it that differentiates these two types of individuals. In other words, because the “morally good person” understands its self-worth in terms of its accountability to the Natural Moral Law, it is motivated in terms of self-respect exemplary of the dignity of personhood. In this way, this type of person is freely choosing to follow the law. Because other types of individuals have motivations other than the accountability determining personal dignity, their decisions to follow the law are compelled by other motivations. The motivation to follow the law for its own sake is not an additional motive beyond the motive made possible through the actualization of personhood.

Efficiency and Dignity

In what way does the above section illustrate “the limits of simulation,” and how do the limits of simulation relate to the conflict between efficiency and dignity? Again, it is, of course, technological mediation that conditions the whole problem under discussion. In other words, it is the amount and depth of surveillance made possible today by technological mediation which has allowed for the shift from “loss prevention” to “profit protection.”

On the one hand, the above section helps illustrate that though loss prevention and profit protection may be good, the surveillance of employees for their sake is founded upon a relation in terms of “humanity,” at best, and not “persons.” In other words, it seems to neither treat employees with dignity nor to provide an environment which may help them fully actualize self-respect as an employee. Like “persons of good morals” in Kant, employees under surveillance may perform the right action and the same action that an employee with dignity and self-respect may perform; however, also like “persons of good morals,” employees under surveillance may lack the best motivation to perform their work “duties.”

On the other hand, it is autonomy and self-awareness that limit the scope of possible simulation. What this ultimately means is that if the goal is efficiency, then approaching it through technological mediation, as if to make employees simulations of the desires and knowledge of their employers, may only lead to short-term, capped amounts of efficiency. In other words, it seems consistent with the above Kantian discussion of self-actualization to note that employees who respect themselves as persons who do the kind of work they are employed to do should make for the best employees. That is, long-term efficiency seems predicated upon autonomous employees who are self-aware for their own sake. Simulation is ultimately limited by the lack of autonomy and self-awareness associated with employees motivated at Kant’s level of “humanity”; even when performing the correct actions, it is as if they do so like “persons of good morals,” not “morally good persons.”

For those who advocate for efficiency, even at the cost of dignity, the above discussion suggests promoting dignity might be a better way to promote efficiency. One, it is inefficient to “micro-manage” employees. Two, even with the use of cybernetics and technological mediation to help indicate where such “micro-management” may increase efficiency, such practices may work against efficiency to the extent that they undermine employee dignity. As the above discussion suggests, employee dignity indicates more self-actualization, i.e. a freer and better existentially-situated employee. In this way, it may be true that if an employee will not be subjected to conditions of technological mediation, a replacement who would be may be easy to find. However, the ease with which individuals with less self-respect and dignity, or with greater compelling conditions, may be found neither resolves the conflict between efficiency and dignity nor ensures efficiency.

Excursus: Control & Inauthenticity: Simulation, “Legacy Protection,” and Despair

Some readers of our edited volume Social Epistemology & Technology: Toward Public Self-Awareness Regarding Technological Mediation have recognized, at least, an analogy between society and families in regard to the control for which technological mediation allows. Though we cannot work out every detail here, we can provide a sufficient sketch of the analogy to, if nothing else, provoke deeper thinking and self-awareness regarding the potential effects of technological mediation. In general, this question relates to the chapters located in the second half of Social Epistemology & Technology, and specifically in regard to my chapter “The Vanishing Subject: Becoming Who You Cybernetically Are.” Of particular interest regarding this topic may be the section of that chapter titled “Pro-Techno-Creation: Stepford Children of a Brave New Society (?),” though if read in isolation from the rest of the chapter, that section may seem obscure. Since my second article in this SERRC Special Issue will be devoted to discussing the theme to which the second part of Social Epistemology & Technology was devoted, i.e. the theme of “changing conceptions of humans and humanity,” we will not engage such a discussion in this excursus (cf. Scalambrino, 2015b & 2015c).

In regard to the analogy, “profit protection” is to the use of technological mediation in business as “legacy protection” is to the use of technological mediation in the family. The basic idea is that: just as technological mediation may be used to control employee actions, technological mediation may be used to constitute select attributes of a child (e.g. IVF, PGD, CRISPR-Cas9, etc.) and to promote and sustain a select identity for the child. The motivation may be characterized as “legacy protection,” since the ends afforded by technological mediation constitute a kind of investment made by parents. In this way, the dynamics of the problem we uncovered above concerning employees, employer desires, and technological mediation manifest analogously in regard to the family. That is to say, the question of the employee’s existential-freedom becomes the question of the child’s existential-freedom, and the dilemma regarding whether to risk losing profit to allow for the individual’s autonomy and increased self-awareness becomes the risk of losing one’s legacy and “investment” in one’s children.

Given the large cost associated with what amounts to genetically engineering one’s children, it is clear that parents have some goal(s) in mind when selecting various attributes for a child (cf. Marcel, 1962). Whether this initial investment is made or not, some see it as the technologically-mediated equivalent of mate selection; however, notice, whether equivalent or not, the level of control increases significantly through technological mediation. Beyond the birth of the child, then, there is the question of how to sustain the initial investment made—whether through mate selection or genetic engineering—to ensure “legacy protection.” The idea here is that whatever goal(s) parents have in mind when selecting, perhaps as best they can, various attributes for a child, those goals point to the legacy the parents are attempting to protect.

As the technological mediation of a child’s life increases, so too does the potential to surveil and control the child. Since the idea of increasing surveillance should be obvious (e.g. checking to see what websites they view, what they text to friends, GPS tracking of where they go, and so on), we will focus only on the control piece here. Control is understood here in the sense of limiting the full self-actualization associated with personhood above and discussed through the philosophy of Immanuel Kant. That is to say, if you are able to limit an individual’s self-actualization to the level of “humanity,” then they will continually constitute their identity through comparison with others. Just as I indicated in my second chapter of Social Epistemology & Technology, the way to “lock down” such self-awareness is by “misunderstanding nothing.” What this means is that if you can provide an individual with a worldview that seems to provide an account for everything in terms of that individual’s comparative self-worth to others, then you control that individual’s ability to interpret their own existence.

When this can be anchored through a talent in which the individual excels, then the comparative model may be all the more effective, since the individual sees themselves as “winning” or a “winner” based on an identity which takes itself as able to account for whatever happens in life. The problem, Kant would say, is that the individual is not fully autonomous. The “law” given to them is not of their own choosing. There are a number of ways to use technological mediation to control individuals, and thereby to ensure “legacy protection.” On the one hand, a discussion of inauthenticity and memes would be appropriate here, since it becomes possible to understand the whole enterprise of “legacy protection” as founded upon the comparative understanding; thus, the agency more commonly attributed to the parental desire to ensure legacy protection may be attributed to the transmission of the comparative worldview itself from generation to generation—like the transmission of thought memes—in that the parent evidently operates with the same worldview which, once successfully engineered into the child, should likewise promote that child’s desire to pass on the same worldview valuing “legacy protection” to their own children, and so on.

In this way, cybernetic theories of human existence function as a kind of support for holding individuals at the human level, in which self-worth is determined through comparison and self-awareness and autonomy are thereby diminished. What the phrase “cybernetic theories of human existence” refers to is precisely any theory of existence which believes all of existence can be explained. The sense in which such “epistemic closure” misunderstands nothing suggests to the individuals inhabited by it that it is a worldview that can provide them with the truth in regard to everything (cf. Scalambrino, 2012). “Existentialists” resist such systematization because it treats life like “a problem to be solved,” rather than (as Kierkegaard phrased it) “a mystery to be lived.” It is worth noting that Kierkegaard characterized such an inauthentic relation to life as “despair” (cf. Scalambrino, 2016b).

Some of the memes that are easy to notice are phrases such as “a gap year.” When an individual looks at the time of existence as though it is merely fulfilling a pre-established form, like a “cookie cutter,” then we should ask: How did that form get there? Notice how the perfect example here would be to invoke the self-understanding of individuals in “third world” locations, and ask what a “gap year” is for them. The idea is not that “gap year” has no reference. Rather, the idea is that individuals who truly believe that their lives are, and should be, following a pre-established pattern are individuals who are neither fully autonomous nor fully self-aware (cf. Marcuse, 1991). Of course, proponents of “legacy protection” may suggest that insofar as the individual in question is not from a “third world” location, then understanding the time of one’s existence in terms of “gap years, etc.” is a privilege to be coveted. Why is it a privilege to be coveted? Perhaps because such a self-understanding is more efficient for the individual to live (and pass on) the privileged existence which is their legacy.

Beyond any technological mediation used to genetically engineer a child, technological mediation helps hold individuals at the human level, in which self-worth is determined through comparison, by helping to sustain an identity, however explicit it may be to the individual, anchored in a cybernetic worldview. Technological mediation does this in all the ways philosophers have been saying it does since at least Plato’s discussion of the technē of “writing” and its effects on human self-understanding. Yet, more to the point, when Heidegger and Jünger discuss the “form” in which humans understand themselves as “standing reserve” or as “workers,” we can see the insidious influence of technological mediation as twofold. First, the efficiency allowed for by technology becomes an expectation. For example, the expectation is common today that we should have all our email accounts consolidated in an app on a smartphone, so that we can receive emails with a level of efficiency as if they were all text messages. Second, the idea that one may have some self-understanding other than legacy “protector” or germ-line “curator” is really just the folly of an inefficient employee or the noise of malfunction in a cybernetic human machine.

References
Aristotle. Nicomachean Ethics. Translated by Roger Crisp. Oxford: Oxford University Press, 2009.

Ashby, William Ross. An Introduction to Cybernetics. London: Filiquarian Legacy Publishing, 2012.

Ellul, Jacques. The Technological Society. Translated by J. Wilkinson. New York: Vintage Books, 1964.

Epictetus. Encheiridion. Translated by Wallace I. Matson. In Classics of Philosophy, Vol I, edited by L. P. Pojman. Oxford: Oxford University Press, 1988.

Fuller, Steve. “The Place of Value in a World of Information: Prolegomena to Any Marx 2.0.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 15-26. London: Rowman & Littlefield International, 2015.

Heidegger, Martin. Being and Time. Translated by John Macquarrie and Edward Robinson. New York: Harper & Row, 1962.

Heidegger, Martin. “The Question Concerning Technology.” In Basic Writings, edited by David F. Krell, 307-343. London: Harper & Row Perennials, 2008.

Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. London: The MIT Press. 2008.

Jünger, Ernst. “Technology as the Mobilization of the World Through the Gestalt of the Worker.” Translated by J. M. Vincent, revised by R. J. Kundell. In Philosophy and Technology: Readings in the Philosophical Problems of Technology, edited by Carl Mitcham and Robert Mackey, 269-89. New York: The Free Press, 1963/1983.

Kant, Immanuel. Groundwork of the Metaphysics of Morals. Translated by Mary J. Gregor and Jens Timmermann. Cambridge: Cambridge University Press, 2002.

Kant, Immanuel. Religion Within the Limits of Reason Alone. Translated by T.M. Greene and H.H. Hudson. New York: Harper & Row, 1960.

Lyotard, Jean-Francois. The Postmodern Condition: A Report on Knowledge. Translated by Brian Massumi. Minneapolis, MN: The University of Minnesota, 1984.

Marcel, Gabriel. “The Sacred in the Technological Age.” Theology Today 19 (1962): 27-38.

Marcuse, Herbert. One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. Boston: Beacon Press, 1991.

Marx, Karl. “The Power of Money.” In Economic and Philosophic Manuscripts of 1844. Translated by M. Milligan, 136-141. New York: Dover Publications, 2007.

Nietzsche, Friedrich. The Gay Science. Translated by Walter Kaufmann. New York: Vintage Books, 1974.

Plato. Republic. Translated by G. M. A. Grube, revised by C. D. C. Reeve. In Plato: Complete Works, edited by John M. Cooper. Indianapolis, IN: Hackett Publishing, 1997.

Rorty, Richard. Philosophy and the Mirror of Nature. Princeton, NJ: Princeton University Press, 1979.

Scalambrino, Frank. Full Throttle Heart: Nietzsche, Beyond Either/Or. New Philadelphia, OH: The Eleusinian Press, 2015a.

Scalambrino, Frank. Introduction to Ethics: A Primer for the Western Tradition. Dubuque, IA: Kendall Hunt Publishing Company, 2016a.

Scalambrino, Frank. “The Shadow of the Sickness Unto Death.” In Breaking Bad and Philosophy, edited by Kevin S. Decker, David R. Koepsell and Robert Arp, 47-62. New York: Palgrave, 2016b.

Scalambrino, Frank. “Social Media and the Cybernetic Mediation of Interpersonal Relations.” In Philosophy of Technology: A Reader, edited by Frank Scalambrino, 123-133. San Diego, CA: Cognella, 2014.

Scalambrino, Frank. “Tales of the Mighty Tautologists?” Social Epistemology Review and Reply Collective 2, no. 1 (2012): 83-97.

Scalambrino, Frank. “Toward Fluid Epistemic Agency: Differentiating the Terms ‘Being,’ ‘Subject,’ ‘Agent,’ ‘Person,’ and ‘Self’.” In Social Epistemology and Epistemic Agency, edited by Patrick Reider, 127-144. London: Rowman & Littlefield International, 2016c.

Scalambrino, Frank. “The Vanishing Subject: Becoming Who You Cybernetically Are.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 197-206. London: Rowman & Littlefield International, 2015b.

Scalambrino, Frank. “What Control? Life at the Limits of Power Expression.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 101-111. London: Rowman & Littlefield International, 2015c.

Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. London: MIT Press, 1965.

[1] From Economic and Philosophical Manuscripts of 1844, translated by M. Milligan (1964).

[2] Cybernetics may be understood as a kind of science of life. For our purposes, it refers to a relation to life such that events in life are understood as capable of being fully quantified and subjected to calculations which would render the eventual outcomes predictable. Thus, proponents of such a relation to life tend to hold that the only limitation on the total cybernetic revelation of life is processing power in regard to the requisite quantification and calculation. Its continued relevance for conversations regarding technology and freedom is that if cybernetics is correct, then human freedom is a kind of illusion which results from the inability to calculate (what cybernetics considers to be) the fully deterministic nature of events. In short, according to cybernetics, it would be as if life were a machine with completely calculable motions (cf. Ashby, 2012; cf. Johnston, 2008; cf. Heidegger, 2008; cf. Wiener, 1965).

[3] For those unaware of the “Sims” reference, “The Sims” is a video game series in which players “simulate life” by controlling various features of automatons and surveilling their activity. The video game was developed by EA Maxis and published by Electronic Arts.

[4] For a discussion of “efficiency” as indicative of the “Postmodern Condition,” see Lyotard, 1984.

[5] Cf. Fuller, 2015.

[6] I present the distinctions in this way for the sake of brevity and clarity; however, it should not escape Kant scholars that these three distinctions in essence represent a movement along Kant’s three different formulations of the Categorical Imperative, respectively, i.e. the principle of the law of nature, the principle of ends, and the principle of autonomy.

[7] Mill’s “Liberty Principle” suggests you are at liberty to act as you please so long as you are not harming others, i.e. so long as others consent to the treatment to which your actions subject them.

[8] Before even considering other reasons to justify this claim, notice the word “rational” in Kant’s articulation of the pre-disposition to personality.

[9] In Nietzsche’s language it is “to overcome itself.”

[10] This is, of course, why Kant thinks we naturally have a “duty” to be excellent.

[11] Cf. Scalambrino, 2015a.