Author Information: Frank Scalambrino, University of Akron, email@example.com
Scalambrino, Frank. “How Technology Influences Relations to Self and Others: Changing Conceptions of Humans and Humanity.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 30-37.
The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3uf
Please refer to:
- Special Issue 4: “Social Epistemology and Technology”, edited by Frank Scalambrino.
Image credit: Rowman & Littlefield International
“Don’t be yourself, be a pizza. Everyone loves pizza.”—PewDiePie
For the sake of easier and more efficient consumption, this article is written as a response to a series of six (6) questions. The questions are: (1) Why investigate “Changing conceptions of humans and humanity”? (2) What is “technologically-mediated identity”? (3) What are the ethical aspects involved in the technological-mediation of relations to one’s self and others? (4) What is the philosophical issue with “cybernetics”/Why is it considered bad for humanity? (5) What is the philosophical issue with “psychoanalysis” as an applied cybernetics? (6) What does it mean to say that social media eclipses reality?
§1 Why investigate “Changing conceptions of humans and humanity”?
There are two answers to this question. We’ll start with the easier one. First, the book series in which our book Social Epistemology & Technology appears avowedly takes the theme of “Changing conceptions of humans and humanity” as a concern it was established to address. Second, the Western Tradition in philosophy has concerned itself with this theme since at least the time of Plato. Briefly, recall Plato suggested the technology known as “writing” has adversely affected memory. On the one hand, this is a clear example of a technologically-mediated relation to self and others the effect of which alters humans and humanity. Plato thought the alteration was not for the better. Can you imagine what he’d say about “Grammarly”?
Because philosophy in the Western Tradition has concerned itself with this theme for such a long time, there are many philosophers who, with varying degrees of explicitness, address the theme. Two philosophers in particular, who were uniquely positioned in history to make predictions and observations, stand out in the Western Tradition. Those philosophers are Martin Heidegger (1889-1976) and Ernst Jünger (1895-1998). Separately, they both spoke of a world-wide change which they could philosophically see happening. And, importantly, because we are in the midst of that change’s aftermath, many of us having been born into it, it may actually be more difficult for us to see than it was for them. All this goes to the second reason for embracing this important theme.
It is clear that technology has changed the way we relate to self and others. To be completely frank with you: I own and use an iPhone, I typed this document on a laptop, I own a PlayStation and have spent a good deal of energy playing video games in my time; I also listen to music on an iPod, have a LinkedIn account, FaceTime and Skype regularly, and have mindlessly watched YouTube and Netflix for more hours than I can remember. I say all of this so readers will recognize that I am neither a Luddite nor a curmudgeon. Yet, in addition to all of the technology I use, I care about humanity; I also love philosophy, love thinking along with the philosophers, and have earned a Doctorate degree in philosophy. So, as a responsible person with a PhD in philosophy, it is perhaps my duty to enunciate these Western Tradition themes and concerns publicly; especially insofar as we may trace the very existence of many of the current problems in society to the presence of various technologies.
Lastly, we should not be intimidated by the difficulty many first encounter when attempting to understand this perennial theme in philosophy. For example, despite its deep history, the winner of the “2015 World Technology Award in Ethics” explicitly characterized this theme in our book (which was published in 2015) as “obscure,” outdated, or irrelevant for the 21st century. Though the word “World” is perhaps a misnomer, since award recipients are “judged by their own peers,” Dr. Shannon Vallor is in fact the current “President of the International Society for Philosophy and Technology (SPT),” and a tenured professor at Santa Clara University “in Silicon Valley.” Importantly, then, in her Notre Dame Philosophical Reviews (2016) review of our book, Dr. Vallor openly admitted her inability to understand the theme and its importance. On the one hand, she explains away her difficulty by noting, “Part of the difficulty is rooted in the book’s structure; for reasons that are never fully made clear by the editor, the chapters are sharply divided into two sections” (Vallor 2016). Because I was surprised to read this accusation (and was almost certain I remembered explaining the reason for the book’s structure), I looked in the book. And, with all due respect to Dr. Vallor, on page one of the book she should have read:
As a volume in the Collective Studies in Knowledge and Society series, this book directly participates in three of the five activities targeted for the series. They are (I) Promoting philosophy as a vital, necessary public activity; (II) Analyzing the normative social dimensions of pursuing and organizing knowledge; and (III) Exploring changing conceptions of humans and humanity [emphasis added]. Whereas both the content and the very existence of this book participate in the first of the targeted activities, the parts of the book are divided, respectively, across the other two activities. (Scalambrino 2015a: 1).
Thus, by the time she got around to calling my contributions to the “Changing conceptions of humans and humanity” section of our book “obscure ruminations,” I realized her “hardball” rhetoric was a substitute for actually engaging the material. For the record, however, I am not criticizing her for playing “hardball”; I admire her spirit. Therefore, for the reasons noted above, and because even the “President of the International Society for Philosophy and Technology” found this perennial theme in the history of philosophy to be difficult, I hope this article will go toward providing clarity regarding this important theme.
§2 What is “technologically-mediated identity”?
The two most relevant ways to illustrate the notion of “technologically-mediated identity” are in terms of “existential constraints on identity” and “socially-constructed technological constraints and influences.” The basic idea here is that technology allows one to be as fully inauthentic as possible. To begin, there are clearly “existential constraints on identity.” The easiest way to understand this is to think about history. If you were born after the paperclip was invented, then it is not possible for you to invent the paperclip. The fact of your existence when and where it occurs constrains you from inventing the paperclip. And here, “inventor of the paperclip” is understood as a kind of identity. In other words, it is a statement made about someone’s identity, and it can be true.
Now, when we consider “technologically-mediated identity” the idea is twofold. First, the presence of technology makes various identities possible that would not be possible otherwise. Second, even if the presence of technology were only technologically-altering previously available identities, two issues would immediately manifest. On the one hand, technologically-mediated identities may require humans to be technologically-mediated or enhanced to sustain the identity. On the other hand, because the identities depend on technology for their presence in the world, they may, in fact, be anti-human human-identities. In other words, though they are identities which humans can pursue through the mediation of technology, the pursuit of such identities may be detrimental to the humans who pursue them.
The illusory nature of social media has already been well documented. Our book Social Epistemology & Technology has a large pool of references to peruse, for anyone who is interested. Often the content of a person’s social media is referred to as a “highlight reel” in that it misrepresents the reality of the person’s actual existence. This, in itself, is no surprise. However, the effects of social media, despite common knowledge of its illusory nature, are also well documented. These range from the depression, jealousy, and anxiety experienced by those who frequently spend time on social media to the many types of relationship infidelity now commonly associated with it. One of the ways to characterize what is happening is in terms of “technologically-mediated identity.”
In other words, social media—as a technology that allows one to use it to mediate relations to others—motivates viewers by presenting illusions. This can be seen in the presentation of identities which are illusory by being “highlight reels” or by simply allowing for greater amounts of deception. The operative distinction here would be analogous to the one between lying and lying by omission. Most certainly some people intentionally misrepresent themselves on social media; however, insofar as social media is by nature a kind of “highlight reel,” then it is like a lie by omission. This illustrates the notion of “technologically-mediated identity,” then, in that social media, as a kind of technological mediation, allows for the presentation of illusory identities. These identities, of course, motivate in multiple ways. Yet, just as they cannot portray the substance of an actual human existence, it is as if they entice viewers to adopt impossible identities.
Thus, the issue is not, and should not be presented as, between technologically-mediated identity and “natural” identity. Too many rhetorical options arise regarding the word “natural” to keep the water from muddying. Rather, the issue should be framed in terms of “inauthenticity” and the actual impossibility of recreating a “highlight reel” existence which does not include the technologically-suppressed “non-highlight reel” aspects of human life. This, of course, does not stop humans from pursuing technologically-mediated identities. What “inauthenticity” means philosophically here is that the pursuit of illusory or impossible identities is tantamount to suppressing the actual potentials (as opposed to virtual potentials) which one’s existence can actualize. This can be understood as de-personalizing and even de-humanizing individuals who insert their selves into the matrix of virtual potentialities, thereby putting their actual potentials in the service of actualizing an identity impossible to actualize. For a full philosophical discussion of the de-personalizing and de-humanizing effects of technological mediation, see our book Social Epistemology & Technology.
§3 What are the ethical aspects involved in the technological-mediation of relations to one’s self and others?
The idea here is quite straightforward. Humans form habits, and the force of habit influences the quality of human experiences and future choices. Because the use of technology is neither immediately life-sustaining nor immediately expressive of a human function, technological mediation can be a part of habits that are formed; however, technological mediation is not an original force which can be shaped by habit for the sake of human excellence. Technological mediation can shape and constitute the relation between an original force, e.g. attraction, hunger, or empathy, and that to which the original force relates, yet in doing so, its relation to the original force can only be parasitic. That is to say, it cannot uproot the original force without eradicating what the thing is to which the original force belongs. For instance, we are not talking about using a pacemaker to keep someone’s heart pumping; we are talking about making it so a human would no longer need a beating heart. Such a technological alteration would raise questions such as: At what point is this no longer a human life?
It is by concealing the fact that technological mediation is not an original force that researchers in the service of profit-driven technologies can attempt to articulate technological mediation as virtuous, i.e. capable of constituting human excellence. Thus, Dr. Vallor speaks of the “commercial potential of science and technology” (Vallor 2015). Yet, those who articulate their guiding question as, for example, Dr. Vallor has, “What does it mean to have a good life online?”, clearly put the cart before the horse. Life is not “online,” and the term “life” in the phrase “life online” is necessarily a metaphor. However, Dr. Vallor, and those who follow her in committing the fallacy of “misplaced concreteness,” overlook two very important features of ethics and the “good life.” One, life is more primary than the internet, so at best the internet is in the service of life. Two, the “good life” includes criteria which the use of technology can directly undermine.
In order for an actual human to thrive in an actual human life, according to the philosophers of human excellence (e.g. the character ethics of Epicurus, Pyrrho, Aristotle, and Epictetus, to name a few), the human would need to “unplug” and excel at being human, not at clicking on a keyboard or a touch screen or at accumulating “likes,” “followers,” “friends,” or “tweets.” As Nietzsche might have put it, were he here today: no matter how popular and rich Stephen Hawking (no disrespect intended) may be, his life does not exemplify human thriving. In fact, even Facebook no longer claims all those people are actually your “Friends,” as research continues to show that it is humanly impossible (cf. “Dunbar’s number”) to have as many “friends” as the illusory number social media allows users to flaunt. Thus, again, Dr. Vallor misplaces the ethical notion of “flourishing” when she speaks of “Flourishing on Facebook: virtue friendship & new social media” (Vallor 2012).
It is, of course, rather the case that the business of social media thrives by parasitically profiting from primal human forces by providing a platform for their virtual gratification. The best example that comes to mind is the manner in which Facebook initially allowed users to post pictures ostensibly as a benevolent platform for photo sharing. Eventually, however, Facebook claimed the right to use your pictures (since they are technically posted on the Facebook site) for the sake of advertising to your “friends.” The idea here is that the primal human forces of envy and jealousy are much easier to mobilize for the sake of sales if you can show a person what their friends have that they do not.
Therefore, the ethical aspects involved in the technological mediation of relations to one’s self and others indicate that human thriving belongs to human life, not the energy of life channeled through a virtual dimension for the sake of profit. To be sure, the “commercial potential of science and technology” (Vallor 2015) is immense; however, the excellent actualization of “commercial potential” is not the excellent actualization of “human potential,” which has always characterized “human thriving” according to philosophers in the Western Tradition (cf. Scalambrino 2016). Critics will be tempted to interject the idea of all the potential good money can do for human living conditions. Yet, that is not the topic under discussion; rather, the topic under discussion is the ethics constituting human excellence (“thriving”) regarding self and others.
§4 What is the philosophical issue with “cybernetics”/Why is it considered bad for humanity?
For a more in-depth discussion of this issue see our Social Epistemology & Technology, especially in regard to Martin Heidegger’s discussion of cybernetics. Here are the same scholarly sources I referenced in our book for the sake of explicating the meaning of “cybernetics.”
The idea readers should have in mind is that cybernetics aims at “control.” Before corporations figured out how to use the “Cave Allegory” against academia itself, i.e. by funding and disseminating research whose results would benefit the corporations, and by using pseudo-awards and marketing tactics to drown out the research of less well-funded scholars, philosophers of technology seemed unanimous in warning that, by increasingly mediating our relation to our lives and environment with technology, we would increasingly lose freedom and place ourselves under “control.” What was their philosophical justification for such a claim? It boils down to “cybernetics.” In today’s terms we might say: as soon as everything is “connected,” you functionally become one of the things in the “internet of things.” Habitually functioning in such a way is not only inauthentic, it is also a kind of self-alienation, quite possibly to the point of de-personalization and even de-humanization (cf. Scalambrino 2015b).
Here are the quotations from our book: First,
Historically, cybernetics originated in a synthesis of control theory and statistical information theory in the aftermath of the Second World War, its primary objective being to understand intelligent behavior in both animals and machines (Johnston 2008, 25-6; cf. Dechert 1966; cf. Jonas 1953).
According to Norbert Wiener’s Cybernetics; or, Control and Communication in the Animal and Machine (1948), “the newer study of automata, whether in the metal or in the flesh, is a branch of communication engineering,” and this involves a “quantity of information and [a] coding technique” (Wiener 1948, 42). Next,
Essentially, cybernetics proposed not only a new conceptualization of the machine in terms of information theory and dynamical systems theory but also an understanding of ‘life,’ or living organisms, as a more complex instance of this conceptualization rather than as a different order of being or ontology [emphasis added] (Johnston 2008, 31).
Again, in An Introduction to Cybernetics, the “unpredictable behavior of an insect that lives in and about a shallow pond, hopping to and fro among water, bank, and pebble, can illustrate a machine in which the state transitions correspond to” a probability-based pattern that can be analyzed statistically (Johnston 2008, 31). In this way, “‘cybernetic’ may refer to a technological understanding of the mechanisms guiding machines and living organisms” (Scalambrino 2015b, 107). The philosophical issue with “cybernetics,” then, (put simply) is that, on the one hand, it seeks to reduce human life to mechanical function, and, on the other hand, it can be exploited to functionally control humans.
§5 What is the philosophical issue with “psychoanalysis” as an applied cybernetics?
Kierkegaard famously said, “Life is not a problem to be solved, but a mystery to be lived.” The basic problem with psychoanalysis as an applied cybernetics, then, is that it treats life like a problem to be solved; however, even more than that, its cybernetic view of human life makes life appear as if it is something that can be controlled. In this way, despite its belief in “the unconscious,” psychoanalysis treats human life as if it were a machine. Thus, the idea that “unconscious influences,” traceable to childhood events, determine our actions undermines our confidence in our own free will.
Adopting the cybernetic view of human nature advocated through psychoanalysis functionalizes the mystery of life into “the unconscious.” It is supposed to be the case that the mysterious, as “unconscious,” can be understood. In this way, the unconscious influences contributing to, or perhaps even constituting, one’s “problem” can be revealed, and the revelation of these unconscious influences thereby “solves” the problem. However, as multiple existentialists, including Gabriel Marcel, have pointed out, the “functionalization” of the human being de-personalizes. If the human person is constituted through its choices and its respect for itself as the one who makes those choices, then a psychoanalytic cybernetic view of the human undermines a person’s self-realization. It, of course, does this by suggesting to persons that the freedom of their choosing is a type of illusion.
Finally, psychoanalytic techniques, which Freud developed from hypnotic trance induction, exploit a cybernetic “control theory.” The person receiving psychoanalytic “treatment,” traditionally known as the “analysand,” is initially and immediately placed in what has traditionally been called a “one down” position. That is, the analysand is supposed to assume that the analyst has access, whether in terms of knowledge or awareness, to the analysand’s unconscious; and since the analysand is not supposed to understand the unconscious, the analysand stands in a “lower” or “one down” position in relation to the analyst from the very inception of analysis. The induction of this “one down” position initiates the cybernetic mechanism of control over the analysand. It is as if the very belief that psychoanalysis can “solve” the problems of one’s life is itself the analysand’s transference of control over to the person of the analyst. Thus, the cybernetic view of human nature advocated through psychoanalysis functionalizes human life; by persuading the analysand to hand over their freedom, in the form of the belief in their own autonomous power of choice, it allows for the very control that psychoanalytic theory posits, and thereby provides what seems to be evidence of its own confirmation.
§6 What does it mean to say that social media eclipses reality?
The idea at work here refers back to sections two (2) and three (3) of this article. Simply put, the idea is that technological mediation allows for humans to alter their relations to self and others. However, what researchers often call “interface” issues condition possible illusory understandings of self and others. Popularly, and in the earlier sections, this was invoked by describing the content found on social media as a “highlight reel.” Because humans can develop goals and regulate behaviors in terms of the interface issues of social media, it becomes appropriate to characterize one’s relation to reality as “eclipsed.”
Take, for example, the “highlight reel” aspect of social media discussed above and in our book Social Epistemology & Technology. Of course, this is just one of the interface issues which can “eclipse reality” for social media users. Yet, because it is perhaps the easiest to see, we will discuss it briefly here. The basic idea is that when one sets out to behave in such ways or perform such actions so as to contribute to their “highlight reel,” then one has allowed the means of technological mediation to become an end in itself.
In our book one of the ways we discussed how interface issues eclipse relations to others is in terms of procreation. Again, the existentialist Gabriel Marcel is an excellent source (cf. Marcel 1962). The idea is that one may direct the lives of their children inauthentically or even be motivated to technologically mediate various (“functionalized”) aspects of procreation itself (cf. Scalambrino 2017), being influenced by the presence and power of technological mediation. As existentialists like Marcel warn, however, there is at least a twofold trouble here. First, technologically mediating one’s relation to procreation is cybernetic insofar as it treats procreation as if it were completely functionalizable. Second, the ends toward which one may direct one’s children through technological mediation may, in fact, derive from means such as “interface issues.” In this way philosophical criticisms of technological mediation have gone so far as to suggest one’s relation to reality may be eclipsed.
Dechert, Charles R., editor. The Social Impact of Cybernetics. South Bend, IN: University of Notre Dame Press, 1966.
Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. London: The MIT Press, 2008.
Marcel, Gabriel. The Mystery of Being, Volume I: Reflection and Mystery. Translated by G. S. Fraser. South Bend: St. Augustine’s Press, 1950.
Marcel, Gabriel. “The Sacred in the Technological Age.” Theology Today 19 (1962): 27-38.
Scalambrino, Frank. “Futurology in Terms of the Bioethics of Genetic Engineering: Proactionary and Precautionary Attitudes Toward Risk with Existence in the Balance.” In Social Epistemology & Futurology: Future of Future Generations. London: Rowman & Littlefield International, 2017, in press.
Scalambrino, Frank. Introduction to Ethics: A Primer for the Western Tradition. Dubuque, IA: Kendall Hunt, 2016.
Scalambrino, Frank. “Introduction: Publicizing the Social Effects of Technological Mediation.” In Social Epistemology & Technology, edited by Frank Scalambrino, 1-12. London: Rowman & Littlefield International, 2015a.
Scalambrino, Frank. “The Vanishing Subject: Becoming Who You Cybernetically Are.” In Social Epistemology & Technology, edited by Frank Scalambrino, 197-206. London: Rowman & Littlefield International, 2015b.
Vallor, Shannon. “Flourishing on Facebook: Virtue Friendship & New Social Media.” Ethics and Information Technology, 14, no. 3 (2012): 185-199.
Vallor, Shannon. “Shannon Vallor Wins 2015 World Technology Award in Ethics.” 2015. https://www.scu.edu/news-and-events/press-releases/2016/january-2016/shannon-vallor-wins-2015-world-technology-award-in-ethics.html.
Vallor, Shannon. “Review of Social Epistemology and Technology.” Notre Dame Philosophical Reviews: An Electronic Journal, August 4, 2016.
Wiener, Norbert. Cybernetics, Or, the Control and Communication in the Animal and the Machine. London: MIT Press, 1965.