
Author Information: Jim Collier, Virginia Tech, jim.collier@vt.edu.

Collier, James H. “Social Epistemology for the One and the Many: An Essay Review.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 15-40.

Jim Collier’s article “Social Epistemology for the One and the Many” will be published in four parts. The pdf of the article includes all four parts as a single essay, and gives specific page references. Shortlinks:

Introduction: https://wp.me/p1Bfg0-3ZN

Part One, Social Epistemology as Fullerism: https://wp.me/p1Bfg0-3ZY

Part Two, Impoverishing Critical Engagement: https://wp.me/p1Bfg0-402

Part Three, We’re All Californians Now: https://wp.me/p1Bfg0-3ZR

Fuller’s recent work has explored the nature of technological utopia.
Image by der bobbel via Flickr / Creative Commons

 

Third, Remedios and Dusek submit to a form of strict technological determinism as promulgated in the Californian ideology (Barbrook and Cameron 1996), packaged by Ray Kurzweil, and amplified by Fuller in his “trilogy on transhumanism” (vii). Such determinism leaves unexamined the questionable, if not ridiculous, claims made on behalf of transhumanism, generally, and in Fuller’s “own promethean project of transhumanism” (99).

Of Technological Ontology

Missing from the list delineating Fuller’s “extensive familiarity” (10) with an unbelievable array of academic fields and literatures are the history and the philosophy of technology. (As history, philosophy, and “many other fields” make the list, perhaps I am being nitpicky.) Still, I want to highlight, by way of contrast, what I take as a significant oversight in Remedios and Dusek’s account of Fullerism—the lack of a refined conception of technology and, hence, a capitulation to technological determinism.

Remedios and Dusek do not mention technological determinism. Genetic determinism (69) and Darwinian determinism (75, 77-78) receive brief attention. A glossary entry for “determinism” (143) focuses on Pierre-Simon Laplace’s work. However, the strict technological determinism on which Fullerism stands goes unmentioned. With great assuredness, Remedios and Dusek repeat Ray Kurzweil’s Singularity mantra, with a Fullerian inflection, that: “converging technologies, such as biotechnology, nanotechnology, and computer technology, are transforming and enhancing humanity to humanity 2.0” (33).[1] Kurzweil’s proclamations, and Fuller’s conceptual piggybacking, escape scrutiny. Unequivocally, a day will come in 2045 when humans—some humans at least—“will be transformed through technology to humanity 2.0, into beings that are Godlike” (94).

The “hard determinism” associated with Jacques Ellul in The Technological Society (1964), and, I argue, with Fuller as relayed by Remedios and Dusek, holds that technology acts as an uncontrollable force independent from social authority. Social organization and action derive from technological effects. Humans have no freedom in choosing the outcome of technological development—technology functions autonomously.

Depending on the relative “hardness” of the technological determinism on offer, we can explain social epistemology, for example, as a system of thought existing for little reason other than aiding a technological end (like achieving humanity 2.0). Specifically, Fuller’s social and academic policies exist to assure a transhuman future. A brief example:

How does the university’s interdisciplinarity linked [sic] to transhumanism? Kurzweil claims that human mind and capacities can be uploaded into computers with increase in computing power [sic]. The problem is integration of those capacities and personal identity. Kurzweil’s Singularity University has not been able to address the problem of integration. Fuller proposes transhumanities promoted by university 2.0 for integration by the transhumanist. (51)

As I understand the passage, universities should develop a new interdisciplinary curriculum (cheekily named the transhumanities), given the forthcoming technological ability to upload human minds to computers. Since the uploading process will occur, we face a problem regarding personal identity (seemingly, how we define or conceive personal identity as uploaded minds). The new curriculum, in a new university system, will speak to issues unresolved by Singularity University—a private think tank and business incubator.[2]

I am unsure how to judge adequately such reasoning, particularly in light of Remedios and Dusek’s definition of agent-oriented epistemology and suspicion of expertise. Ray Kurzweil, in the above passage and throughout the book, gets treated unreservedly as an expert. Moreover, Remedios and Dusek advertise Singularity University as a legitimate institution of higher learning—absent the requisite critical attitude toward the division of intellectual labor (48, 51).[3] Forgiving Remedios and Dusek for the all too human (1.0) sin of inconsistency, we confront the matter of how to get at their discussion of interdisciplinarity and transhumanism.

Utopia in Technology

Remedios and Dusek proceed by evaluating university curricula based on a technologically determined outcome. The problem of individual identity, given that human minds will be uploaded into computers, gets posed as a serious intellectual matter demanding a response from the contemporary academy. Moreover, the proposed transhumanities curriculum gets saddled with deploying outmoded initiatives, like interdisciplinarity, to reconcile new human capacities with customary ideas of personal identity.

University 2.0, then, imagines inquiry into human divinity within a retrograde conceptual framework. This reactive posture results from the ease of accepting what must be. A tributary that leads back to this blithe acceptance of the future comes in the techno-utopianism of the Californian ideology.

The Californian ideology (Barbrook and Cameron 1996) took shape as digital networking technologies developed in Silicon Valley spread throughout the country and the world. Put baldly, the Californian ideology held that digital technologies would be our political liberators; thus, individuals would control their destinies. The emphasis on romantic individualism, and the quest for unifying knowledge, shares great affinity with the tenor of agent-oriented epistemology.

The Californian ideology fuses together numerous elements—entrepreneurialism, libertarianism, individualism, techno-utopianism, technological determinism—into a more or less coherent belief system. The eclecticism of the ideology—the dynamic, dialectical blend of left and right politics, well-heeled supporters, triumphalism, and cultishness—conjures a siren’s call for philosophical relevance hunting, intervention, and mimicry.

I find an interesting parallel in the impulse toward disembodiment by Kurzweil and Fuller, and expressed in John Perry Barlow’s “A Declaration of the Independence of Cyberspace” (1996). Barlow waxes lyrical: “Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge.”

The demigod Prometheus makes appearances throughout Knowing Humanity in the Social World. Remedios and Dusek have Fuller play the rebel trickster and creator. Fuller’s own transhumanist project creates arguments, policies, and philosophical succor that advocate humanity’s desire to ascend to godhood (7, 67). In addition, Fuller’s Promethean task possesses affinities with Russian cosmism (97-99), a project exploring human enhancement, longevity (cryonics), and space travel.[4] Fuller’s efforts result in more or less direct, and grandiose, charges of Gnosticism. Gnosticism, a tangled doctrine, can refer to the Christian heresy of seeking secret knowledge that, in direct association with the divine, allows one to escape the fetters of our lesser material world.

Gnostic Minds

Befitting a trickster, Fuller both accepts and rejects the charge of Gnosticism (102), the adjudication of which seems particularly irrelevant in the determinist framework of transhumanism. A related and distressing sense of pretense pervades Remedios and Dusek’s summary of Gnosticism, and scholastic presentation of such charges against Fuller. Remedios and Dusek do more than hint that such disputations involving Fuller have world historic consequences.

Imitating many futurists, Fuller repeats that “we are entering a new historical phase” (xi) in which our understanding of being human, of being an embodied human particularly, shifts how we perceive protections, benefits, and harms to our existence. This common futurist refrain, wedded to a commonsense observation, becomes transmogrified by the mention of gnosis (and the use of scare quotes):

The more we relativize the material conditions under which a “human” existence can occur, the more we shall also have to relativize our sense of what counts as benefits and harms to that existence. In this respect, Gnosticism is gradually being incorporated into our natural attitude toward the secular world. (xi)

Maybe. More likely, and less heroically, humans regularly reconsider who they are and determine what helps or hurts them absent mystical knowledge in consultation with the divine. As with many of Fuller’s broader claims, and iterations of such claims presented by Remedios and Dusek, I am uncertain how to judge the contention about the rise of Gnosticism as part of being in the world. Such a claim comes across as unsupported, certainly, and self-serving given the argument at hand.

The discussion of Gnosticism raises broader issues of how to understand the place, scope, and meaningfulness of the contestations and provocations in which Fuller participates. Remedios and Dusek relay a sense that Fuller’s activities shape important social debates—Kitzmiller being a central example.[5] Still, one might have difficulty locating the playing field where Gnosticism influences general attitudes to matters either profane or sacred. How, too, ought we to entertain Fuller’s statements that “Darwinism erodes the motivations of science itself” or “Darwin may not be a true scientist” (71)?

At best, these statements seem merely provocative; at worst, alarmingly incoherent. At first, Remedios and Dusek adjudicate these claims by reminding the reader of Fuller’s “sweeping historical and philosophical account” and “more sophisticated and historically informed version” (71) of creationism. Even when Fuller’s wrong, he’s right.

In this case, we need only accept the ever-widening parameters of Fuller’s historical and philosophical learning, and suspend judgment given the unresolved lessons of his ceaseless dialectic. Remedios and Dusek repeatedly make an appeal to authority (argumentum ad verecundiam) and, in turn, set social epistemology on a decidedly anti-intellectual footing. In part, such footing and uncritical attitude seems necessary to entertain Fuller’s “own promethean project of transhumanism” (99).

Transhuman Dialectic

Fuller’s Promethean efforts aside, transhumanism strives to maintain the social order in the service of power and money. A guiding assumption in the desire to transcend human evolution and embodiment involves who wins, come some form of end time (or “event”), and gets to take their profits with them. Douglas Rushkoff (2018) puts the matter this way:

It’s a reduction of human evolution to a video game that someone wins by finding the escape hatch and then letting a few of his BFFs come along for the ride. Will it be Musk, Bezos, Thiel…Zuckerberg? These billionaires are the presumptive winners of the digital economy — the same survival-of-the-fittest business landscape that’s fueling most of this speculation to begin with.[6] (https://bit.ly/2MRgeIw)

Fuller’s staging of endless dialectic—his ceaseless provocations (and attendant insincerity), his flamboyant exercises in rehabilitating distasteful and dangerous ideas—drives him to distraction. We need look no further than his misjudgment of transhumanism’s sociality. The contemporary origins of the desire to transcend humanity do not reside with longing to know the mind of god. Those origins reside with Silicon Valley neoliberalism and the rather more profane wish to keep power in heaven as it is on earth.

Fuller’s transhumanism resides with the same type of technological determinism as other transhumanist dialects and Kurzweil’s Singularity. A convergence, in some form, of computers, genetics, nanotechnology, robotics and artificial intelligence leads inevitably to artificial superintelligence. Transhumanism depends on this convergence. Moore’s Law, and Kurzweil’s Law of Accelerating Returns, will out.

This hard determinism renders practically meaningless—aside from fussiness, a slavish devotion to academic productivity, or perverse curiosity—the need for proactionary principles, preparations for human enhancement or alternative forms of existence, or the vindication of divine goodness. Since superintelligence lies on the horizon, what purpose can relitigating the history of eugenics, or enabling human experimentation, serve?[7] Epistemic agents can put aside their agency. Kurzweil asserts that skepticism and caution now threaten “society’s interests” (Pein 2017, 246). Remedios and Dusek portray Fuller as having the same disturbing attitude.

At the end of Knowing Humanity in the Social World comes a flicker of challenge:

Fuller is totally uncritical about the similarly [sic] of utopian technologists’ and corporate leaders’ positions on artificial intelligence, synthetic biology, and space travel. He assumes computers can replace human investigators and allow the uploading of human thought and personality. However, he never discusses and replies to the technical and philosophical literature that claims there are limits to what is claimed can be achieved toward strong artificial intelligence, or with genetic engineering. (124)

A better-drawn, critical epistemic agent would begin with normative ‘why’ and ‘how’ questions regarding Fuller’s blind spot and our present understanding of social epistemology. Inattention to technological utopianism and determinism does not strike me as a sufficient explanation—although the gravity of fashioning such grand futurism remains strong—for Fuller’s approach. Of course, the “blind spot” to which I point may be nothing of the sort. We should, then, move out of the way and pacify ourselves by constructing neo-Kantian worlds, while our technological and corporate betters make space for the select to occupy.

The idea of unification, of the ability of the epistemic agent to unify knowledge in terms of their “worldview and purposes,” threads throughout Remedios and Dusek’s book. Based on the book, I cannot resolve social epistemology pre- and post-2000. Agent-oriented epistemology assumes yet another form of determinism. Remedios and Dusek look more than two centuries into our past to locate a philosophical language to speak to our future. Additionally, Remedios and Dusek render social epistemology passive and reliant on the Californian political order. If epistemic unification appears only at the dawn of a technologically determined future, we are automatons—no longer human.

Conclusion

Allow me to return to the question that Remedios and Dusek propose as central to Fuller’s metaphysically oriented, post-2000 work: “What type of being should the knower be?” (2). Another direct (and undoubtedly simplistic) answer—enhanced. Knowers should be technologically enhanced types of beings. The kinds of enhancements on which Remedios and Dusek focus come with the convergence of biotechnology, nanotechnology, and computer technology and, so, humanity 2.0.

Humanity 2.0’s sustaining premise begins with yet another verse in the well-worn siren song of new change, of accelerating change, of inevitable change. It is the call of Silicon Valley hucksters like Ray Kurzweil.[8] One cannot deny that technological change occurs. Still, a more sophisticated theory of technological change, and of the reciprocal relation between technology and agency, seems in order. The hard technological determinism of Remedios and Dusek, and of Fuller, cries out for reductionism. If a technological convergence occurs and super-intelligent computers arise, what purpose, then, in preparing by using humanity 1.0 tools and concepts?

Why would this convergence, and our subsequent disembodied state, not also dictate, or anticipate, even revised ethical categories (ethics 2.0, 109), government programs (welfare state 2.0, 110), and academic institutions (university 2.0, 122)? Such “2.0 thinking,” captive to determinism, would be quaint if not for the very real horrors of endorsing eugenics and human experimentation. The unshakeable assuredness of the technological determinism at the heart of Fuller’s work denies the consequences, if not the risk itself, for the risks epistemic agents “must” take.

In 1988, Steve Fuller asked a different question: How should we organize and pursue knowledge collectively?[9] This question assumes that human beings have cognitive limitations, limitations that might be ameliorated by humans acting in helpful concert to change society and ourselves. As a starting point, befitting the 1980s, Fuller sought answers in “knowledge bearing texts” and an expansive notion of textual technologies and processes. This line of inquiry remains vital. But neither the question, nor social epistemology, belongs solely to Steve Fuller.

Let me return to an additional question. “Is Fuller the super-agent?” (131). In the opening of this essay, I took Remedios’s question as calling back to hyperbole about Fuller in the book’s opening. Fuller does not answer the question directly, but Knowing Humanity in the Social World does—yes, Steve Fuller is the super-agent. While Remedios and Dusek do not yet attribute godlike qualities to Fuller, agent-oriented epistemology is surely created in his image—an image formed, if not anticipated, by academic charisma and bureaucratic rationality.

As the dominant voice and vita in the branch of social epistemology of Remedios and Dusek’s concern, Fuller will likely continue to set the agenda. Still, we might harken back to the more grounded perspective of Jesse Shera (1970) who helped coin the term social epistemology. Shera defines social epistemology as:

The study of knowledge in society. It should provide a framework for the investigation of the entire complex problem of the nature of the intellectual process in society; the study of the ways in which society as a whole achieves a perceptive relation to its total environment. It should lift the study of the intellectual life from that of scrutiny of the individual to an enquiry into the means by which a society, nation, or culture achieve an understanding of stimuli which act upon it … a new synthesis of the interaction between knowledge and social activity, or, if you prefer, social dynamics. (86)

Shera asks a great deal of social epistemology. It is good work for us now. We need not await future gods.

An Editorial Note

Palgrave Macmillan does the text no favors. We—publishing houses, editors, universities, and scholars alike—too easily live with our complicity in treating scholarship only as output: the more, the faster, the better. This material and social environment influences our notions of social epistemology and epistemic agency in significant ways addressed indirectly in this essay. For Remedios and Dusek, the rush to press means that infelicitous phrasing and cosmetic errors run throughout the text. The interview between Remedios and Fuller needs another editorial pass. Finally, the book did not integrate the voices of its co-authors.

Contact details: jim.collier@vt.edu

References

Barbrook, Richard and Andy Cameron. “The Californian Ideology.” Science as Culture 6, no. 1 (1996): 44-72.

Barlow, John Perry. “A Declaration of the Independence of Cyberspace.” 1996. https://bit.ly/1KavIVC.

Barron, Colin. “A Strong Distinction Between Humans and Non-humans Is No Longer Required for Research Purposes: A Debate Between Bruno Latour and Steve Fuller.” History of the Human Sciences 16, no. 2 (2003): 77–99.

Clark, William. Academic Charisma and the Origins of the Research University. University of Chicago Press, 2007.

Ellul, Jacques. The Technological Society. Alfred A. Knopf, 1964.

Frankfurt, Harry G. On Bullshit. Princeton University Press, 2005.

Fuller, Steve. Social Epistemology. Bloomington and Indianapolis: Indiana University Press, 1988.

Fuller, Steve. Philosophy, Rhetoric, and the End of Knowledge: The Coming of Science and Technology Studies. Madison, WI: University of Wisconsin Press, 1993.

Fuller, Steve. Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press, 2001.

Fuller, Steve. “The Normative Turn: Counterfactuals and a Philosophical Historiography of Science.” Isis 99, no. 3 (September 2008): 576-584.

Fuller, Steve. “A Response to Michael Crow.” Social Epistemology Review and Reply Collective 25 November 2015. https://goo.gl/WwxFmW.

Fuller, Steve and Luke Robert Mason. “Virtual Futures Podcast #3: Transhumanism and Risk, with Professor Steve Fuller.”  Virtual Futures 16 August 2017. https://bit.ly/2mE8vCs.

Grafton, Anthony. “The Nutty Professors: The History of Academic Charisma.” The New Yorker October 26, 2006. https://bit.ly/2mxOs8Q.

Hinchman, Edward S. Review of “Patrick J. Reider (ed.), Social Epistemology and Epistemic Agency: Decentralizing Epistemic Agency.” Notre Dame Philosophical Reviews 2 July 2018. https://ntrda.me/2NzvPgt.

Horgan, John. “Steve Fuller and the Value of Intellectual Provocation.” Scientific American, Cross-Check 27 March 2015.  https://bit.ly/2f1UI5l.

Horner, Christine. “Humanity 2.0: The Unstoppability of Singularity.” Huffpost 8 June 2017. https://bit.ly/2zTXdn6.

Joosse, Paul. “Becoming a God: Max Weber and the Social Construction of Charisma.” Journal of Classical Sociology 14, no. 3 (2014): 266–283.

Kurzweil, Ray. “The Virtual Book Revisited.” The Library Journal 1 February 1993. https://bit.ly/2AySoQx.

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Penguin Books, 2005.

Lynch, Michael. “From Ruse to Farce.” Social Studies of Science 36, no. 6 (2006): 819–826.

Lynch, William T. “Social Epistemology Transformed: Steve Fuller’s Account of Knowledge as a Divine Spark for Human Domination.” Symposion 3, no. 2 (2016): 191-205.

McShane, Sveta and Jason Dorrier. “Ray Kurzweil Predicts Three Technologies Will Define Our Future.” Singularity Hub 19 April 2016. https://bit.ly/2MaQRl4.

Pein, Corey. Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley. Henry Holt and Co. Kindle Edition, 2017.

Remedios, Francis. Legitimizing Scientific Knowledge: An Introduction to Steve Fuller’s Social Epistemology. Lexington Books, 2003.

Remedios, Francis X. and Val Dusek. Knowing Humanity in the Social World: The Path of Steve Fuller’s Social Epistemology. Palgrave Macmillan UK, 2018.

Rushkoff, Douglas. “Survival of the Richest: The wealthy are plotting to leave us behind.” Medium 5 July 2018. https://bit.ly/2MRgeIw.

Shera, J.H. Sociological Foundations of Librarianship. New York: Asia Publishing House, 1970.

Simonite, Tom. “Moore’s Law Is Dead. Now What?” MIT Technology Review 13 May 2016. https://bit.ly/1VVn5CK.

Talbot, Margaret. “Darwin in the Dock.” The New Yorker December 5, 2005. 66-77. https://bit.ly/2LV0IPa.

Uebel, Thomas. Review of “Francis Remedios, Legitimizing Scientific Knowledge: An Introduction to Steve Fuller’s Social Epistemology.” Notre Dame Philosophical Reviews 3 March 2005. https://ntrda.me/2uT2u92.

Weber, Max. Economy and Society, 2 vols. Edited by Guenther Roth and Claus Wittich. Berkeley, CA; London; Los Angeles, CA: University of California Press, 1922 (1978).

[1] “Ray Kurzweil, Google’s Director of Engineering, is a well-known futurist with a high-hitting track record for accurate predictions. Of his 147 predictions since the 1990s, Kurzweil claims an 86 percent accuracy rate. At the SXSW Conference in Austin, Texas, Kurzweil made yet another prediction: the technological singularity will happen sometime in the next 30 years” (https://bit.ly/2n8oMkM). I must admit to a prevailing doubt (what are the criteria?) regarding Kurzweil’s “86 percent accuracy rate.” I further admit that the specificity of the number itself—86—seems like the kind of exact detail to which liars resort.

[2] Corey Pein (2017, 260-261) notes: “It was eerie how closely the transhuman vision promoted by Singularity University resembled the eugenicist vision that had emerged from Stanford a century before. The basic arguments had scarcely changed. In The Singularity Is Near, SU chancellor Kurzweil decried the ‘fundamentalist humanism’ that informs restriction on the genetic engineering of human fetuses.”

[3] Pein (2017, 200-201) observes: “… I saw a vast parking lot ringed by concrete barriers and fencing topped with barbed wire. This was part of the federal complex that housed the NASA Ames Research Center and a strange little outfit called Singularity University, which was not really a university but more like a dweeby doomsday congregation sponsored by some of the biggest names in finance and tech, including Google. The Singularity—a theoretical point in the future when computational power will absorb all life, energy, and matter into a single, all-powerful universal consciousness—is the closest thing Silicon Valley has to an official religion, and it is embraced wholeheartedly by many leaders of the tech industry.”

[4] Remedios and Dusek claim: “Cosmist ideas, advocates, and projects have continued in contemporary Russia” (98), but do little to dispel the reader’s suspicion that Cosmism has little current standing or influence.

[5] In December 2006, Michael Lynch offered this post-mortem on Fuller’s participation in Kitzmiller: “It remains to be seen how much controversy Fuller’s testimony will generate among his academic colleagues. The defendants lost their case, and gathering from the judge’s ruling, they lost resoundingly … Fuller’s testimony apparently left the plaintiff’s arguments unscathed; indeed, Judge John E. Jones III almost turned Fuller into a witness for the plaintiffs by repeatedly quoting statements from his testimony that seemed to support the adversary case … Some of the more notable press accounts of the trial also treated Fuller’s testimony as a farcical sideshow to the main event [Lynch references Talbot, see above footnote 20] … Though some of us in science studies may hope that this episode will be forgotten before it motivates our detractors to renew the hostility and ridicule directed our way during the ‘science wars’ of the 1990s … in my view it raises serious issues that are worthy of sustained attention” (820).

[6] Fuller’s bet appears to be Peter Thiel.

[7] Remedios and Dusek explain: “The provocative Fuller defends eugenics and thinks it should not be rejected though stigmatized because of its application by the Nazis” (emphasis mine, 116-117). While adding later in the paragraph “… if the [Nazi] experiments really do contribute to scientific knowledge, the ethical and utilitarian issues remain” (117), Remedios and Dusek ignore the ethical issues to which they gesture. Tellingly, Remedios and Dusek toggle back to a mitigating stance in describing “Cruel experiments that did have eventual medical payoff were those concerning the testing of artificial blood plasmas on prisoners of war during WWII …” (117).

[8] “Ray Kurzweil is a genius. One of the greatest hucksters of the age …” (PZ Myers as quoted in Pein 2017, 245). From Kurzweil (1993): “One of the advantages of being in the futurism business is that by the time your readers are able to find fault with your forecasts, it is too late for them to ask for their money back.”

[9]  I abridged Fuller’s (1988, 3) fundamental question: “How should the pursuit of knowledge be organized, given that under normal circumstances knowledge is pursued by many human beings, each working on a more or less well-defined body of knowledge and each equipped with roughly the same imperfect cognitive capacities, albeit with varying degree of access to one another’s activities?”

Author Information: Alcibiades Malapi-Nelson, Seneca College, alci.malapi@outlook.com

Malapi-Nelson, Alcibiades. “Transhumanism and the Catholic Church.” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 12-17.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3WM

You don’t become the world’s oldest continuing institution without knowing how to adapt to the times.
Image by Lawrence OP via Flickr / Creative Commons.

Most accounts of transhumanism coming from Catholic circles show a mild to radical rejection of the idea of a deep alteration, by means of pervasive emergent technologies, of whatever we understand as “human nature”. These criticisms come from both progressive and conservative Catholic flanks. However, as is increasingly becoming evident, the left/right divide no longer captures ethical, political, and philosophical stances in an accurate manner.

There are cross-linked concerns which transcend such traditional dichotomy. The Church, insofar as it also is a human institution, is not immune to this ongoing ‘rotating axis’. The perceived Catholic unfriendliness to transhumanism stems from views that do not take into account the very mission that defines the Church’s existence.

Conceptions of Human Dignity

To be sure, there are aspects of transhumanism that may meet with fundamental rejection when confronted with Church doctrine—particularly in what concerns human dignity. In this context, attempts at accomplishing indefinite life extension will not find fertile ground in Catholic milieus. Needless to say, the more vulgar aspects of the transhumanist movement—such as the fashionable militant atheism sponsored by some, or the attempt simply to replace religion with technology—would not find sympathy either. However, precisely due to an idiosyncratically Catholic attention to human dignity, attempts at the improvement of the human experience shall certainly attract the attention of the Magisterium.

Perhaps more importantly, and not unrelated to a distinctly Catholic understanding of personal self-realization, the Church will have to cope with the needs that a deeply altered human condition will entail. Indeed, the very cause for the Church to exist is self-admittedly underpinned by the fulfillment of a particular service to humans: Sacrament delivery. Hence, the Magisterium has an ontological interest (i.e., pertaining to what counts as human) in better coping with foreseeable transhumanist alterations, as well as a functional one (e.g., to ensure both proper evangelization and the fulfilling of its sacramental prime directive).

The Church is an institution that thinks, plans and strategizes in terms of centuries. A cursory study of its previous positions regarding the nature of humanity reveals that the idea of “the human” never was a monolithic, static notion. Indeed, it is a fluid one that has been sponsored and defended under different guises in previous eras, pressed by sui-generis apostolic needs. As a guiding example, one could pay attention to the identity-roots of that area of the globe which currently holds more than 60% of the Catholic world population: Latin America. It is well documented how the incipient attempts at an articulation of “human rights”, coming from the School of Salamanca in the 16th century (epitomized by Francisco Vitoria, Francisco Suárez—the Jesuit who influenced Leibniz, Schopenhauer and Heidegger—and indirectly, by Bartolomé de las Casas), had as an important aspect of its agenda the extension of the notion of humanity to the hominid creatures found inhabiting the “West Indies”—the Americas.

The usual account of Heilsgeschichte (Salvation History), canonically starting with the narrative of the People of God and ending with the Roman Empire, could not be meaningfully conveyed to these newly-found peoples, given that the latter were locked in an absolutely parallel world. In fact, a novel “theology of charity” had to be developed in order to spread the Good News without referencing a (non-existent) “common history”. Their absolute humanity thus had to be urgently established so that, unlike in the North American Protestant experience, widespread legalized slavery would not ensue—a task partly accomplished via the promulgation of the 1538 encyclical Sublimis Deus.

Most importantly, once their humanity was philosophically and legally instituted, the issue regarding the necessary services for both their salvation and their self-development immediately emerged (to be sure, not everyone agreed with such an extension of humanity). Spain sent an average of three ‘apostolic agents’ – priests – per day to fulfill this service. The controversial nature of the “Age of Discovery” notwithstanding, this massive Spanish mobilization may partly account for the Church being to this day perhaps the most trusted institution in Latin America. Be that as it may, we can see here a paradigmatic case where the Church extended the notion of humanity to entities with profoundly distinct features so that it could successfully fulfill its mission: Sacrament delivery. Such a move arguably guaranteed the worldwide flourishing, five centuries later, of an institution of more than a billion people.

A Material Divinity

Although the Church emphasises an existing unity between mind and body, it is remarkable that in no current authoritative document of the Magisterium (e.g., Canon Law, the Catechism, Vatican Council II, etc.) is the “human” inextricably linked with a determinate corporeal feature of the species Homo sapiens. Namely, although the two are profoundly united, one does not depend on the other. In fact, the soul/spirit comes directly from God. What defines us as humans has less to do with the body and its features and more to do with the mind, spirit and will.

Once persons begin to radically and ubiquitously change their physical existences, the Church will have to be prepared to extend the notion of humanity to these hybrids. Not only will these entities need salvation, but they will need to flourish in this life as self-realized individuals—something that according to Catholic doctrine is solidly helped by sacrament reception. Moreover, if widespread deep alteration of humanoid ‘biologies’ were to occur, the Church has a mandate of evangelization to them as well. This will likely encourage apostolic agents to become familiarized with these novel ways of corporeal existence in order to better understand them—even to embrace them in order to further turn them into vehicles of evangelization themselves.

We have a plethora of historical examples in related contexts, from the Jesuit grammatization of the Inka language to Marshall McLuhan’s prophetic expertise in human communications—having influenced the Second Vatican Council’s Inter Mirifica document on the topic. Indeed, “morphological freedom” (the right and ability to alter our physical existence) might become for the Church what philosophy of communication became for McLuhan.

Thus, chances are that the Church will need to embrace a certain instantiation of a transhuman future, given that the institution will have to cope with a radically changed receptacle of the grace-granting devices – the Sacraments. Indeed, this shall be done in order to be consistent with the reason for its very existence as mandated by Christ: guaranteeing the constant flow of these efficacious means which collaborate towards both a fulfilled existence in this life and salvation in the next one. Steve Fuller foresees a possible scenario that may indeed become just such transhuman ‘instantiation’ favoured by the Church:

A re-specification of the “human” to be substrate-neutral (that is to say, a “human” need not be the descendant of another member of Homo sapiens but rather could be a status conferred on any suitably qualified entity, as might be administered by a citizenship test or even a Turing Test).

Judging from its track record, the Church will problematically but ultimately successfully rise to the challenge. A substrate-neutral re-specification of the human may indeed be the route taken by the Church—perhaps after a justifiably called Concilium.

An homage to a legendary series of portraits by Francis Bacon.
Image by Phineas Jones via Flickr / Creative Commons

Examining the Sacraments

The challenge will be variously instantiated in correlation with the sacraments to be delivered. However, all seven of them share one feature that will be problematized with the implementation of transhumanist technologies: Sacraments perform metaphysically what they do physically. Their efficacy in the spiritual world is mirrored by the material function performed in this one (e.g., the pouring of water in baptism). Since our bodies may change at a fundamental level, maintaining the efficacy of sacraments, which need physical substrata to work, will be the common problem. Let us see how this problem may variously incarnate.

Baptism. As the current notion of humanity stands (“an entity created in the image and likeness of God”), not much would have to change in order to extend it to an altered entity claiming to maintain, or asking to receive, human status. A deep alteration of our bodies constitutes no fundamental reason for not participating in the realm of the “human” and thus entering the Catholic Church by means of Baptism: the obliteration of the legacy of Original Sin with which humans are born—whether conceived naturally, cloned or harvested (a similar reasoning could be roughly applied to Confirmation). Holy water can be poured on flesh, metal or a new alloy constituting someone’s forehead. As indicated above, the Church does not mention “flesh” as a sine qua non condition for humanity to obtain.

On the other hand, there is a scenario, more post-human than transhuman in nature, that may emerge as a side effect of attempts to ameliorate the human condition: Good Old-Fashioned Artificial Intelligence. If entities that share none of the features (bodily, historical, cognitive, biological) we usually associate with humanity begin to claim human status on account of displaying both rationality and autonomy, then the Church may have to go through one of its most profound “aggiornamentos” in two millennia of operation.

Individual tests administered by local bishops on a case-by-case basis (after a fundamental directive coming from the Holy See) would likely have to be put in place – tests which would aim to assess, for instance, the sincerity of the entity’s prayer. The persistent witnessing of an ongoing metanoia (conversion) is a canonical signature of divine presence in an individual. A consistent life of self-giving and spiritual warfare could be the accepted signs required for such an entity to be declared a child of God, equal to the rest of us, granting its entrance into the Church with all the entailing perks (i.e., the full array of sacraments).

There is a caveat that is less problematic for Catholic doctrine than for modern society: sex assignation. Just as the ‘natural machinery’ already comes with one, the artificial one could have it as well. Male or female could also happen in silico. Failure to assign one would carry the issue into realms not dissimilar to current disputes over “sex reassignation” and its proper recognition by society: it might be a problem, but it would not be a new problem. The same reasoning would apply to “post-gender” approaches to transhumanism.

Confession. Given that the sacrament of Reconciliation must obligatorily be performed, literally, vis-à-vis, what if environmental catastrophes reduce our physical mobility so that we can no longer face a priest? Will telepresence be accepted by the Church? Will the Church establish strict protocols of encryption? After all, it is an actual confession that we are talking about: only a priest can hear it—and only the Pope, in special cases, can hear it from him.

Breaking the confessional seal entails excommunicatio ipso facto. Moreover, regarding a scenario which will likely occur within our lifetimes, what about those permanently sent into space? How will they receive this sacrament? Finally, even if the Church permanently bans the possibility of going to confession within a virtual environment, what would happen if people eventually inhabit physical avatars? Would that count as being physically next to a priest?

Communion. The most important of all sacraments, the Eucharist, will not be void of issues either. The Latin Rite of the Catholic Church (the portion of Catholics who are properly ‘Roman’) mandates that only unleavened bread shall be used as the physical substratum, so that it later transubstantiates into the body of Christ. The Church is particularly strict in this, as evinced in cases where alternative breads have been used (e.g., when stranded for years on a deserted island), not recognizing those events as properly Eucharistic: the sacrament never took place on such occasions.

Nevertheless, we will have to confront situations where the actual bread cannot be sent to remote locations of future human dwelling (e.g., Mars), nor will a priest be present to perform the said metaphysical swapping. Facing this, would nanotechnology provide the solution? Would something coming out of a 3D printer or a future “molecular assembler” qualify as actual unleavened bread?

Marriage. This sacrament will likely confront two main challenges: one fundamentally novel in nature and the other an extension of already occurring issues. Regarding the latter, let us take into consideration a particular thread in certain transhumanist circles: the pursuit of indefinite life extension. It is understood that once people either stay healthy longer (or stop aging altogether), the creation of new life via offspring may become an afterthought. Canon Law clearly stipulates that those who consciously decide not to procreate cannot enter this sacrament. In that sense, a childless society would be constituted by sacramentally unmarried people. Once again, this issue is a variation of already occurring scenarios—which could be extended, for that matter, to sex-reassigned people.

The former challenge would be unprecedented. Would the Church marry a human and a machine? Bear in mind that this question is fundamentally different from the already occurring question regarding the Church’s refusal to marry humans and non-human animals. The difference is based upon the lack of autonomy and rationality shown by the latter. However, machines could one day show both (admittedly Kantian) human-defining features. The Church may find in principle no obstacle to marrying a human “1.0” and a human “2.0” (or even a human and an artificial human—an AI), provided that the humanity of the new lifeforms, following the guidelines established by the requirements for Baptism, is well established.

Holy Orders. As with Marriage, this sacrament will likely face a twist both on an already occurring scenario and on a fairly new one. On the one hand, the physical requirement that a bishop actually lay his hands on someone’s head to ordain him a priest has created problematic cases for the Church (e.g., during missions where bishops were not available). With rare exceptions, this requirement has always been observed. A possible counter-case is the ordination of Stylite monks between the 3rd and 6th centuries. These hermits made vows not to come down from their solitary pillar until death.

Reportedly, bishops sometimes ordained them via an “action at a distance” of sorts—but still from merely a few meters away. The Church will have to establish whether ordaining someone via telepresence (or while inhabiting an avatar) would count as sacramentally valid. On the other hand, the current requirement that a candidate for priesthood have all his limbs—particularly his hands—up until the moment of ordination might be softened. At the moment when a prosthetic limb not only seamlessly becomes an extension of the individual but a functionally better extension of him, the Church may reconsider this pre-ordination requirement.

Extreme Unction. The Last Rites will likely confront two challenges in a transhuman world. One would not properly constitute a problem for their delivery, but rather a questioning of the point of their existence. The other will entail a possible redefinition of what is considered to be ‘dead’. As for the consequences of indefinite life extension, this sacrament may come to be considered by Catholics what Protestants consider of the sacraments (and hence of the Church): of no use. Perhaps the sacrament would remain in place for those who choose to end their lives “naturally” (in itself a problem for transhumanists: what to do with those who do not want to get “enhanced”?). Or perhaps the Church will simply ban this particular transhumanist choice of life for Catholics, period—much as it now forbids euthanasia and abortion. The science fiction series Altered Carbon portrays a future where such is the case.

On the other hand, the prospect of mind uploading may push to redefine the notion of what it means to leave this body, given that such experience may not necessarily entail death. If having consciousness inside a super-computer is defined as being alive—which as seen above may be in principle accepted by the Church—then the delivery of the sacrament would have to be performed without physicality, perhaps via a link between the software-giver and the software-receiver. This could even open up possibilities for sacrament-delivery to remote locations.

The Future of Humanity’s Oldest Institution

As we can see, the Church may have not merely to tolerate, but actually to embrace, the transhumanist impulses slowly but steadily pushed by science and technology into the underpinnings of the human ethos. This attitude shall emerge motivated by two main sources. On the one hand, there is a fundamental option towards the development of human dignity—which by default would associate the Church more with a transhumanist philosophy than with a post-human one.

On the other, there is a fundamental concern for the continued fulfillment of its own mission and reason for existence—the delivery of the sacraments to a radically altered human recipient. As a possible counterpoint, it has been surmised that Pope Francis is one of the strongest current advocates of a precautionary stance—a position traditionally associated with post-human leanings. The Pontiff’s Laudato Si encyclical on the environment certainly seems to point in this direction. That may be part of a—so far seemingly successful—strategy put in place by the Church for decades to come, whose reasons escape the scope of this piece. However, as shown above, the Church, given its own history, philosophy, and prime mandate, has all the right reasons to embrace a transhuman future—curated the Catholic way, that is.

Contact details: alci.malapi@outlook.com

References

Fuller, Steve. “Ninety Degree Revolution.” Aeon Magazine. 20 October 2013. Retrieved from https://aeon.co/essays/left-and-right-are-over-the-future-is-up-and-down.

Fuller, Steve. “Which Way Is Up for the Human Condition?” ABC Religion and Ethics. 26 August 2015. Retrieved from http://www.abc.net.au/religion/articles/2015/08/26/4300331.htm.

Fuller, Steve. “Beyond Good and Evil: The Challenges of Trans- and Post-Humanism.” ABC Religion and Ethics. 20 December 2016. Retrieved from http://www.abc.net.au/religion/articles/2016/12/20/4595400.htm.

Author Information: Damien Williams, Virginia Tech, damienw7@vt.edu

Williams, Damien. “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 64-69.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3US

Image by Stu Jones via CJ Sorg on Flickr / Creative Commons


Shannon Vallor’s most recent book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting takes a look at what she calls the “Acute Technosocial Opacity” of the 21st century, a state in which technological, societal, political, and human-definitional changes occur at such a rapid-yet-shallow pace that they block our ability to conceptualize and understand them.[1]

Vallor is one of the most publicly engaged technological ethicists of the past several years, and much of her work’s weight comes from its direct engagement with philosophy—both philosophy of technology and various virtue ethical traditions—and the community of technological development and innovation that is Silicon Valley. It’s from this immersive perspective that Vallor begins her work in Virtues.

Vallor contends that we need a new way of understanding the projects of human flourishing and seeking the good life, an understanding which can help us reexamine how we make and participate through and with the technoscientific innovations of our time. The project of this book, then, is to provide the tools to create this new understanding, tools which Vallor believes can be found in an examination and synthesis of the world’s three leading virtue ethical traditions: Aristotelian ethics, Confucian ethics, and Buddhism.

Vallor breaks the work into three parts, and takes as her subject what she considers to be the four major world-changing technologies of the 21st century. The book’s three parts are “Foundations for a Technomoral Virtue Ethic,” “Cultivating the Self: Classical Virtue Traditions as Contemporary Guide,” and “Meeting the Future with Technomoral Wisdom, OR How To Live Well with Emerging Technologies.” The four world-changing technologies, considered at length in Part III, are social media, surveillance, robotics/artificial intelligence, and biomedical enhancement technologies.[2]

As Vallor moves through each of the three sections and four topics, she maintains a constant habit of returning to the questions of exactly how each one will either help us cultivate a new technomoral virtue ethic, or how said ethic would need to be cultivated, in order to address it. As both a stylistic and pedagogical choice, this works well, providing touchstones of reinforcement that mirror the process of intentional cultivation she discusses throughout the book.

Flourishing and Technology

In Part I, “Foundations,” Vallor covers both the definitions of her terms and the argument for her project. Chapter 1, “Virtue Ethics, Technology, and Human Flourishing,” begins with the notion of virtue as a continuum that gets cultivated, rather than a fixed end point of achievement. She notes that while there are many virtue traditions with their own ideas about what it means to flourish, there is a difference between recognizing multiple definitions of flourishing and a purely relativist claim that all definitions of flourishing are equal.[3] Vallor engages these different understandings of flourishing, throughout the text, but she also looks at other ethical traditions, to explore how they would handle the problem of technosocial opacity.

Without resorting to strawmen, Vallor examines the Kantian Categorical Imperative and Utilitarianism, in turn. She demonstrates that Kant’s ethics would result in us trying to create codes of behavior that are either always right, or always wrong (“Never Murder;” “Always Tell the Truth”), and Utilitarian consequentialism would allow us to make excuses for horrible choices in the name of “the Greater Good.” Which is to say nothing of how nebulous, variable, and incommensurate all of our understandings of “utility” and “good” will be with each other. Vallor says that the rigid rules-based nature of each of these systems simply can’t account for the variety of experiences and challenges humans are likely to face in life.

Not only that, but deontological and consequentialist ethics have always been this inflexible, and this inflexibility will only be more of a problem in the face of the challenges posed by the speed and potency of the four abovementioned technologies.[4] Vallor states that the technologies of today are more likely to facilitate a “technological convergence,” in which they “merge synergistically” and become more powerful and impactful than the sum of their parts. She says that these complex, synergistic systems of technology cannot be responded to and grappled with via rigid rules.[5]

Vallor then folds in discussion of several of her predecessors in the philosophy of technology—thinkers like Hans Jonas and Albert Borgmann—giving a history of the conceptual frameworks by which philosophers have tried to deal with technological drift and lurch. From here, she decides that each of these theorists has helped to get us part of the way, but their theories all need some alterations in order to fully succeed.[6]

In Chapter 2, “The Case for a Global Technomoral Virtue Ethic,” Vallor explores the basic tenets of Aristotelian, Confucian, and Buddhist ethics, laying the groundwork for the new system she hopes to build. She explores each of their different perspectives on what constitutes The Good Life in moderate detail, clearly noting that there are some aspects of these systems that are incommensurate with “virtue” and “good” as we understand them, today.[7] Aristotle, for instance, believed that some people were naturally suited to be slaves, and that women were morally and intellectually inferior to men, and the Buddha taught that women would always have a harder time attaining the enlightenment of Nirvana.

Rather than simply repackaging old traditions for today’s challenges, Vallor argues that these ancient virtue traditions can teach us something about the shared commitments of virtue ethics, more generally. What we learn from them, she says, will fuel the project of building a wholly new virtue tradition. To discuss their shared underpinnings, she talks about “thick” and “thin” moral concepts.[8] A thin moral concept is defined here as only the “skeleton of an idea” of morality, while a thick concept provides the rich details that make each tradition unique. If we look at the thin concepts, Vallor says, we can see that the bone structure of these traditions is made of four shared commitments:

  • To the Highest Human Good (whatever that may be);
  • That moral virtues are understood to be cultivated states of character;
  • To a practical path of moral self-cultivation; and
  • That we can have a conception of what humans are generally like.[9]

Vallor uses these commitments to build a plausible definition of “flourishing,” looking at things like intentional practice within a global community toward moral goods internal to that practice (a set of criteria from Alasdair MacIntyre which she adopts and expands on).[10] These goals are never fully realized, but always worked toward, and always with a community. All of this is meant to be supported by and to help foster goods like global community, intercultural understanding, and collective human wisdom.

We need a global technomoral virtue ethics because while the challenges we face require ancient virtues such as courage and charity and community, they’re now required to handle ethical deliberations at a scope the world has never seen.

But Vallor says that a virtue tradition, new or old, need not be universal in order to do real, lasting work; it only needs to be engaged in by enough people to move the global needle. And while there may be differences in rendering these ideas from one person or culture to the next, if we do the work of intentional cultivation of a pluralist ethics, then we can work from diverse standpoints, toward one goal.[11]

To do this, we will need to intentionally craft both ourselves and our communities and societies. This is because not everyone considers the same goods as good, and even our agreed-upon values play out in vastly different ways when they’re sought by billions of different people in complex, fluid situations.[12] Only with intention can we exclude systems which group things like intentional harm and acceleration of global conflict under the umbrella of “technomoral virtues.”

Cultivating Techno-Ethics

Part II does the work of laying out the process of technomoral cultivation. Vallor’s goal is to examine what we can learn by focusing on the similarities and crucial differences of other virtue traditions. Starting in chapter 3, Vallor once again places Aristotle, Kongzi (Confucius), and the Buddha in conceptual conversation, asking what we can come to understand from each. From there, she moves on to detailing the actual process of cultivating the technomoral self, listing seven key intentional practices that will aid in this:

  • Moral Habituation
  • Relational Understanding
  • Reflective Self-Examination
  • Intentional Self-Direction of Moral Development
  • Perceptual Attention to Moral Salience
  • Prudential Judgment
  • Appropriate Extension of Moral Concern[13]

Vallor moves through each of these in turn, taking the time to show how each step resonates with the historical virtue traditions she’s used as orientation markers, thus far, while also highlighting key areas of their divergence from those past theories.

Vallor says that the most important thing to remember is that each step is part of a continual process of training and becoming; none of them is some sort of final achievement by which we will “become moral.” Moral Habituation is the first step on this list because it is the quality at the foundation of all the others: constant cultivation of the kind of person you want to be. And we have to remember that while all seven steps must be undertaken continually, they also have to be undertaken communally. Only by working with others can we build the systems and societies necessary to sustain these values in the world.

In Chapter 6, “Technomoral Wisdom for an Uncertain Future,” Vallor provides “a taxonomy of technomoral virtues.”[14] The twelve concepts she lists—honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and technomoral wisdom—are not intended to be an exhaustive list of all possible technomoral virtues.

Rather, these twelve things together form a system by which to understand the most crucial qualities for dealing with our 21st-century lives. They’re all listed with “associated virtues,” which help provide a broader and deeper sense of the kinds of conceptual connections we can achieve via relational engagement with all virtues.[15] Each member of the list should support and be supported by not only the other members, but also any as-yet-unknown or -undiscovered virtues.

Here, Vallor continues a pattern she’s established throughout the text of grounding potentially unfamiliar concepts in a frame of real-life technological predicaments from the 20th or 21st century. Scandals such as Facebook privacy controversies, the flash crash of 2010, or even the moral stances (or lack thereof) of CEOs and engineers are discussed with a mind toward highlighting the final virtue: Technomoral Wisdom.[16] Technomoral Wisdom is a means of unifying the other virtues and of understanding the ways in which our challenges interweave with and reflect each other. In this way we can both cultivate virtuous responses within ourselves and our existing communities, and also begin to more intentionally create new individual, cultural, and global systems.

Applications and Transformations

In Part III, Vallor puts everything we’ve discussed so far to the test, placing all of the principles, practices, and virtues in direct, extensive conversation with the four major technologies that frame the book. She explores how new social media, surveillance cultures, robots and AI, and biomedical enhancement technologies are set to shape our world in radically new ways, and how we can develop new habits of engagement with them. Each technology is explored in its own chapter, the better to examine which virtues best suit which topic, which goods might be expressed by or in spite of each field, and which cultivation practices will be required within each. In this way, Vallor highlights the real dangers of failing to skillfully adapt to the requirements of each of these unprecedented challenges.

While Vallor considers most every aspect of this project in great detail, there are points throughout the text where she seems to fall prey to some of the same technological pessimism, utopianism, or determinism for which she rightly calls out other thinkers, in earlier chapters. There is still a sense that these technologies are, of their nature, terrifying, and that all we can do is rein them in.

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, or that through our personally grappling with these tasks, we can move the world, a stance which leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because that would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seem to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will then make changes in how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she hopes will help her do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

While the implications of climate catastrophes, dystopian police states, just-dumb-enough AI, and rampant gene hacking seem real, obvious, and avoidable to many of us, many others take them as merely naysaying distractions from the good of technosocial progress and the ever-innovating free market.[17] With that in mind, we need tools with which to begin the process of helping people understand why they ought to care about technomoral virtue, even when they have such large, driving incentives not to.

Without that, we are simply presenting people who would sell everything about us for another dollar with the tools by which to make a more cultivated, compassionate, and interrelational world, and hoping that enough of them understand the virtue of those tools, before it is too late. Technology and the Virtues is a fantastic schematic for a set of these tools.

Contact details: damienw7@vt.edu

References

Vallor, Shannon. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press, 2016.

[1] Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (New York: Oxford University Press, 2016), 6.

[2] Ibid., 10.

[3] Ibid., 19—21.

[4] Ibid., 22—26.

[5] Ibid., 28.

[6] Ibid., 28—32.

[7] Ibid., 35.

[8] Ibid., 43.

[9] Ibid., 44.

[10] Ibid., 45—47.

[11] Ibid., 54—55.

[12] Ibid., 51.

[13] Ibid., 64.

[14] Ibid., 119.

[15] Ibid., 120.

[16] Ibid., 122—154.

[17] Ibid., 249—254.

Author Information: Paul R. Smart, University of Southampton, ps02v@ecs.soton.ac.uk

Smart, Paul R. “(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 45-55.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uq

Image by BTC Keychain via Flickr / Creative Commons

 

Richard Heersmink’s (2018) article, A virtue epistemology of the Internet: Search engines, intellectual virtues, and education, provides an important and timely analysis of the Internet from the standpoint of virtue epistemology.[1] According to Richard, the Internet is an important epistemic resource, but it is one that comes with a range of epistemic hazards. Such hazards, he suggests, motivate a consideration of the ways in which individuals should interact with the Internet.

In particular, Richard appeals to a specific branch of virtue epistemology, known as virtue responsibilism, arguing that certain kinds of cognitive trait (e.g., curiosity and open-mindedness) are useful in helping us extract maximal epistemic benefit from the Internet. Given the utility of such traits, coupled with the epistemic importance of the Internet, Richard suggests that educational policy should be adapted so as to equip would-be knowers with the cognitive wherewithal to cope with the epistemic challenges thrown up by the online environment.

There is, no doubt, something right about all this. Few would disagree with the claim that a certain level of discernment and discrimination is important when it comes to the evaluation of online content. Whether such ‘virtues’ are best understood from the perspective of virtue responsibilism or virtue reliabilism is, I think, a moot point, for I suspect that in both cases what matters is the way in which belief-forming informational circuits are subject to active configuration by processes that may be broadly construed as metacognitive in nature (Smart, in pressa). That, however, is a minor quibble, and it is one that is of little consequence to the issues raised in Richard’s paper.

For the most part, then, I find myself in agreement with many of the assumptions that motivate the target article. I agree that the Internet is an important epistemic resource that is unprecedented in terms of its scale, scope, and accessibility. I also agree that, at the present time, the Internet is far from an epistemically safe environment, and this raises issues regarding the epistemic standing of individual Internet users. In particular, it looks unlikely that the indiscriminate selection and endorsement of online information will do much to bolster one’s epistemic credentials.

We thus encounter something of a dilemma: As an epistemic resource, the Internet stands poised to elevate our epistemic standing, but as an open and public space the Internet provides ample opportunities for our doxastic systems to be led astray. The result is that we are obliged to divide the online informational cornucopia into a treasure trove of genuine facts and a ragbag collection of ‘false facts’ and ‘fake news.’ The information superhighway, it seems, promises to expand our epistemic power and potential, but the road ahead is one that is fraught with a dizzying array of epistemic perils, problems, and pitfalls. What ought we to do in response to such a situation?

It is at this point that I suspect my own views start to diverge from those of the target article. Richard’s response to the dilemma is to focus attention on the individual agent and consider the ways in which an agent’s cognitive character can be adapted to meet the challenges of the Internet. My own approach is somewhat different. It is borne out of three kinds of doubt: doubts about the feasibility (although not the value) of virtue-oriented educational policies, doubts about the basic validity of virtue theoretic conceptions of knowledge, and doubts about whether the aforementioned dilemma is best resolved by attempting to change the agent as opposed to the environment in which the agent is embedded. As always, space is limited and life is short, so I will restrict my discussion to issues that I deem to be of greatest interest to the epistemological community.

Reliable Technology

Inasmuch as intellectual virtues are required for online knowledge—i.e., knowledge that we possess as a result of our interactions and engagements with the Internet—they are surely only part of a much broader (and richer) story that includes details about the environment in which our cognitive systems operate. In judging the role of intellectual virtue in shielding us from the epistemic hazards of the online environment, it therefore seems important to have some understanding of the actual technologies we interact with.

This is important because it helps us understand the kinds of intellectual virtue that might be required, as well as the efficacy of specific intellectual virtues in helping us believe the truth (and thus working as virtues in the first place). Internet technologies are, of course, many and varied, and it will not be possible to assess their general relevance to epistemological debates in the present commentary. For the sake of brevity, I will therefore restrict my attention to one particular technology: blockchain.

Blockchain is perhaps best known for its role in supporting the digital cryptocurrency, Bitcoin. It provides us with a means of storing data in a secure fashion, using a combination of data encryption and data linking techniques. For present purposes, we can think of a blockchain as a connected set of data records (or data blocks), each of which contains some body of encrypted data. In the case of Bitcoin, of course, the data blocks contain data of a particular kind, namely, data pertaining to financial transactions. But this is not the only kind of data that can be stored in a blockchain. In fact, blockchains can be used to store information about pretty much anything. This includes online voting records, news reports, sensor readings, personal health records, and so on.

Once data is recorded inside a blockchain, it is very difficult to modify. In essence, the data stored within a blockchain is immutable, in the sense that it cannot be changed without ‘breaking the chain’ of data blocks, and thereby invalidating the data contained within the blockchain. This property makes blockchains of considerable epistemic significance, because it speaks to some of the issues (e.g., concerns about data tampering and malign forms of information manipulation) that are likely to animate epistemological debates in this area.
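The ‘breaking the chain’ property described above can be illustrated with a toy sketch in Python. This is a deliberately simplified model, not a real blockchain implementation: there is no consensus protocol, proof-of-work, or encryption here, only the hash-linking of blocks that makes tampering detectable.

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash})
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(records):
    """Build a chain of blocks, each linked to its predecessor by hash."""
    chain, prev = [], "0" * 64  # conventional all-zero hash for the genesis block
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for block in chain:
        recomputed = block_hash(block["index"], block["data"], block["prev"])
        if block["prev"] != prev or recomputed != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["sensor reading A", "sensor reading B", "sensor reading C"])
assert is_valid(chain)
chain[1]["data"] = "doctored reading"  # tamper with a middle block
assert not is_valid(chain)             # the chain is now detectably broken
```

Because each block’s hash covers the previous block’s hash, altering any one record invalidates every subsequent link, which is why modification without detection is so difficult.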

This does not mean, of course, that the information stored within a blockchain is guaranteed to be factually correct, in the sense of being true and thus yielding improvements in epistemic standing. Nevertheless, there are, I think, reasons to regard blockchain as an important technology relative to efforts to make the online environment a somewhat safer place for would-be knowers. Consider, for example, the title of the present article. Suppose that we wanted to record the fact that a person known as Paul Smart—that’s me—wrote an article with the title:

(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!

We can incorporate this particular piece of information into a blockchain using something called a cryptographic hash function, which yields a unique identifier for the block and all of its contents. In the case of the aforementioned title, the cryptographic hash (as returned by the SHA256 algorithm[2]) is:

7147bd321e79a63041d9b00a937954976236289ee4de6f8c97533fb6083a8532

Now suppose that someone wants to alter the title, perhaps to garner support for an alternative argumentative position. In particular, let’s suppose they want to claim that the title of the article is:

Fake News Alert: Intellectual Virtues Required for Online Knowledge!

From an orthographic perspective, of course, not much has changed. But the subtlety of the alteration is not something that can be used to cause confusion about the actual wording of the original title—the title that I intended for the present article. (Neither can it be used to cast doubt about the provenance of the paper—the fact that the author of the paper was a person called Paul Smart.) To see this, note that the hash generated for the ‘fake’ title looks nothing like the original:

cc05baf2fa7a439674916fe56611eaacc55d31f25aa6458b255f8290a831ddc4
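The comparison above can be reproduced with Python’s standard hashlib module. A minimal sketch follows; since the exact digests depend on the precise byte sequence hashed (punctuation, spacing, and encoding all matter), only the structural properties are asserted here rather than the specific hex strings.

```python
import hashlib

def sha256_hex(text):
    """Return the SHA-256 digest of a UTF-8 string as a hex identifier."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!"
altered = "Fake News Alert: Intellectual Virtues Required for Online Knowledge!"

h1, h2 = sha256_hex(original), sha256_hex(altered)
assert len(h1) == len(h2) == 64   # SHA-256 always yields 64 hex characters
assert h1 != h2                   # a small edit produces an unrelated digest
assert sha256_hex(original) == h1 # hashing is deterministic
```

The key point is that the two digests bear no discernible relationship to one another, however orthographically subtle the alteration.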

It is this property that, at least in part, makes blockchains useful for recording information that might otherwise be prone to epistemically malign forms of information manipulation. Imagine, for the sake of argument, that climatological data, as recorded by globally distributed sensors, was stored in a blockchain. The immutability of such data makes it extremely difficult for anyone to manipulate the data in such a way as to confirm or deny the reality of year-on-year changes in global temperature. Neither is it easy to alter information pertaining to the provenance of existing data records, i.e., information about when, where, and how such data was generated.

None of this should delude us into thinking that blockchain technology is a panacea for Internet-related epistemic problems—it isn’t! Neither does blockchain obviate the need for agents to exercise at least some degree of intellectual virtue when it comes to the selection and evaluation of competing data streams. Nevertheless, there is, I think, something that is of crucial epistemological interest and relevance here—something that makes blockchain and other cybersecurity technologies deserving of further epistemological attention. In particular, such technologies may be seen as enhancing the epistemic safety of the online environment, and thus perhaps reducing the need for intellectual virtue.

In this sense, the epistemological analysis of Internet technologies may be best approached from some variant of modal epistemology—e.g., epistemological approaches that emphasize the modal stability of true beliefs across close possible worlds (Pritchard, 2009, chap. 2). But even if we choose to countenance an approach that appeals to issues of intellectual virtue, there is still, I suggest, a need to broaden the analytic net to include technologies that (for the time being at least) lie beyond the bounds of the individual cognitive agent.

Safety in Numbers

“From an epistemic perspective,” Richard writes, “the most salient dimension of the Internet is that it is an information space” (Heersmink, 2018, p. 5). Somewhat surprisingly, I disagree. Although it is obviously true that the Internet is an information space, it is not clear that this is its most salient feature, at least from an epistemological standpoint. In particular, there is, I suggest, a sense in which the Internet is more than just an information space. As is clear from the explosive growth in all things social—social media, social networks, social bots, and so on—the Internet functions as a social technology, yielding all manner of opportunities for people to create, share and process information in a collaborative fashion. The result, I suggest, is that we should not simply think of the Internet as an information space (although it is surely that), we should also view it as a social space.

Viewing the Internet as a social space is important because it changes the way we think about the epistemic impact of the Internet, relative to the discovery, production, representation, acquisition, processing and utilization of knowledge. Smart (in pressb), for example, suggests that some online systems function as knowledge machines, which are systems in which some form of knowledge-relevant processing is realized by a socio-technical mechanism, i.e., a mechanism whose component elements are drawn from either the social (human) or the technological realm.

An interesting feature of many of these systems is the way in which the reliable (or truth-conducive) nature of the realized process is rooted in the socio-technical nature of the underlying (realizing) mechanism. When it comes to human computation or citizen science systems, for example, contributions are typically solicited from multiple independent users as a means of improving the reliability of specific epistemic outputs (Smart, in pressb; Smart and Shadbolt, in press; Watson and Floridi, 2018). Such insights highlight the socially-distributed character of at least some forms of online knowledge production, thereby moving us beyond the realms of individual, agent-centric analyses.

On a not altogether unrelated note, it is important to appreciate the way in which social participation can itself be used to safeguard online systems from various forms of malign intervention. One example is provided by the Google PageRank algorithm. In this case, any attempt to ‘artificially’ elevate the ranking assigned to specific contributions (e.g., a user’s website) is offset by the globally-distributed nature of the linking effort, coupled with the fact that links to a specific resource are themselves weighted by the ranking of the resource from which the link originates. This makes it difficult for any single agent to subvert the operation of the PageRank algorithm.
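The intuition behind this resistance to subversion can be illustrated with a toy power-iteration sketch. This is a simplified textbook rendering of the idea, not Google’s production algorithm; the link graph and damping factor are illustrative assumptions.

```python
def pagerank(links, damping=0.85, iterations=100):
    """Minimal power-iteration PageRank.

    links[i] is the list of pages that page i links to. A page's score
    is split among its outgoing links, so no single page can inflate a
    target's rank on its own: high rank requires links from pages that
    are themselves highly ranked.
    """
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iterations):
        new = [(1 - damping) / n] * n
        for i, outgoing in enumerate(links):
            if not outgoing:  # dangling page: spread its score evenly
                for j in range(n):
                    new[j] += damping * rank[i] / n
            else:
                share = damping * rank[i] / len(outgoing)
                for j in outgoing:
                    new[j] += share
        rank = new
    return rank

# Page 0 is linked to by both other pages; page 0 links out only to page 1.
links = [[1], [0], [0]]
ranks = pagerank(links)
assert ranks[0] == max(ranks)  # the widely linked-to page ranks highest
```

Because a link’s weight is inherited from the rank of its source, an agent who fabricates many low-ranked pages contributes almost nothing to a target’s score, which is the globally-distributed safeguard described above.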

Even ostensibly non-social technologies can be seen to rely on the distributed and decentralized nature of the Internet. In the case of blockchain, for example, multiple elements of a peer-to-peer network participate in the computational processes that make blockchain work. In this way, the integrity of the larger system is founded on the collaborative efforts of an array of otherwise independent computational elements. And it is this that (perhaps) allows us to think of blockchain’s epistemically-desirable features as being rooted in something of a ‘social’ substrate.

All of this, I suggest, speaks in favor of an approach that moves beyond a preoccupation with the properties of individual Internet users. In particular, there seems to be considerable merit in approaching the Internet from a more socially-oriented epistemological perspective. It is easy to see the social aspects of the Internet as lying at the root of a panoply of epistemic concerns, especially when it comes to the opportunities for misinformation, deception, and manipulation. But in light of the above discussion, perhaps an alternative, more positive, take on the Internet (qua social space) starts to come into sharper focus. This is a view that highlights the way in which certain kinds of online system can work to transform a ‘vice’ into a ‘virtue,’ exploiting the social properties of the Internet for the purposes of dealing with reliability-related concerns.

Image by Dariorug via Flickr / Creative Commons

 

Filter Bubblicious

Search engines form one of the focal points of Richard’s analysis, and, as with previous work in this area, Richard finds at least some aspects of their operation to be highly problematic. A particular issue surfaces in respect of personalized search. Here, Richard’s analysis echoes the sentiments expressed by other epistemologists who regard personalized search algorithms as of dubious epistemic value.

In fact, I suspect the consensus that has emerged in this area fails to tell the whole story about the epistemic consequences of personalized search. Indeed, from a virtue epistemological position, I worry that epistemologists are in danger of failing to heed their own advice—prematurely converging on a particular view without proper consideration of competing positions. In my new-found role as the virtue epistemologist’s guardian angel (or should that be devil’s advocate?), I will attempt to highlight a couple of reasons why I think more empirical research is required before we can say anything useful about the epistemological impact of personalized search algorithms.

My first worry is that our understanding about the extent to which search results and subsequent user behavior are affected by personalization is surprisingly poor. Consider, for example, the results of one study, which attempted to quantify the effect of personalization on search results (Hannak et al., 2013). Using an empirical approach, Hannak et al. (2013) report a demonstrable personalization effect, with 11.7% of search results exhibiting differences due to personalization. Interestingly, however, the effect of personalization appeared to be greater for search results with lower rankings; highly ranked results (i.e., those appearing at the top of a list of search results) appeared to be much less affected by personalization.

This result is interesting given the observation that college students “prefer to click on links in higher positions even when the abstracts are less relevant to the task at hand” (Heersmink, 2018, p. 6). From one perspective, of course, this tendency looks like a vice that jeopardizes the epistemic standing of the individual user. And yet, from another perspective, it looks like the preference for higher ranked search results is poised to negate (or at least reduce) the negative epistemological effects of personalized search. What we seem to have here, in essence, is a situation in which one kind of ‘intellectual vice’ (i.e., a tendency to select highly-ranked search results) is playing something of a more positive (virtuous?) role in mitigating the negative epistemological sequelae of a seemingly vicious technology (i.e., personalized search).

None of this means that the epistemic effects of personalized search are to the overall benefit of individual users; nevertheless, the aforementioned results do call for a more nuanced and empirically informed approach when considering the veritistic value of search engines, as well as other kinds of Internet-related technology.

A second worry relates to the scope of the epistemological analysis upon which judgements about the veritistic value of search engines are based. In this case, it is unclear whether analyses that focus their attention on individual agents are best placed to reveal the full gamut of epistemic costs and benefits associated with a particular technology, especially one that operates in the socio-technical ecology of the Internet. To help us understand this worry in a little more detail, it will be useful to introduce the notion of mandevillian intelligence (Smart, in pressc; Smart, in pressd).

Mandevillian intelligence is a specific form of collective intelligence in which the cognitive shortcomings and epistemic vices of the individual agent are seen to yield cognitive benefits and epistemic virtues at the collective or social level of analysis, e.g., at the level of collective doxastic agents (see Palermos, 2015) or socio-epistemic systems (see Goldman, 2011). According to this idea, personalized search systems may play a productive role in serving the collective cognitive good, providing a means by which individual vices (e.g., a tendency for confirmation bias) are translated into something that more closely resembles an epistemic virtue (e.g., greater cognitive coverage of a complex space of thoughts, ideas, opinions, and so on). Consider, for example, the way in which personalized search may help to focus individual attention on particular bodies of information, thereby restricting access to a larger space of ideas, opinions, and other information.

While such forms of ‘restricted access’ or ‘selective information exposure’ are unlikely to yield much in the way of an epistemic benefit for the individual agent, it is possible that by exploiting (and, indeed, accentuating!) an existing cognitive bias (e.g., confirmation bias), personalized search may work to promote cognitive diversity, helping to prevent precipitant forms of cognitive convergence (see Zollman, 2010) and assisting with the epistemically optimal division of cognitive labor (see Muldoon, 2013). This possibility reveals something of a tension in how we interpret or evaluate the veritistic value of a particular technology or epistemic practice. In particular, it seems that assessments of veritistic value may vary according to whether our epistemological gaze is directed towards individual epistemic agents or the collective ensembles in which those agents are situated.

The Necessity of Virtue

As Richard notes, virtue epistemology is characterized by a shift in emphasis, away from the traditional targets of epistemological analysis (e.g., truth, justification and belief) and towards the cognitive properties of would-be knowers. “Virtue epistemology,” Richard writes, “is less concerned with the nature of truth and more concerned with the cognitive character of agents” (Heersmink, 2018, p. 2). This is, no doubt, a refreshing change, relative to the intellectual orientation of traditional philosophical debates.

Nevertheless, I assume that virtue epistemologists still recognize the value and priority of truth when it comes to issues of epistemic evaluation. Someone who holds false beliefs is not the possessor of knowledge, and this remains the case irrespective of whatever vices and virtues the agent has. In other words, it does not matter how careful, attentive and assiduous an agent is in selecting and evaluating information, if what the agent believes is false, they simply do not know.

What seems to be important in the case of virtue epistemology is the role that intellectual virtue plays in securing the truth of an agent’s beliefs. In particular, the central feature of virtue epistemology (at least to my mind) is that the truth of an agent’s beliefs stems from the exercise of intellectual virtue. It is thus not the case that truth is unimportant (or less important) when it comes to issues of positive epistemic standing; rather, what matters is the role that intellectual virtue plays in establishing the truth of an agent’s beliefs. An agent is thus a bona fide knower when they believe the truth and the truth in question is attributable to some aspect of their cognitive character, specifically, a cognitive trait (virtue responsibilism) or cognitive faculty (virtue reliabilism).

What then makes something a vice or virtue seems to be tied to the reliability of token instantiations of processes that are consistent with an agent’s cognitive character. Intellectual virtues are thus “cognitive character traits that are truth-conducive and minimalise error” (Heersmink, 2018, p. 3), while intellectual vices are characterized as “cognitive character traits that are not truth-conducive and do not minimalise error” (Heersmink, 2018, p. 3). It is this feature of the intellectual virtues—the fact that they are, in general, reliable (or give rise to reliable belief-relevant processes)—that looks to be important when it comes to issues of epistemic evaluation.

So this is what I find problematic about virtue theoretic approaches to knowledge. (Note that I am not an epistemologist by training, so this will require a generous—and hopefully virtue-inspiring—swig of the ole intellectual courage.) Imagine a state-of-affairs in which the Internet was (contrary to the present state-of-affairs) a perfectly safe environment—one where the factive status of online information was guaranteed as a result of advances in cyber-security techniques and intelligent fact-checking services. Next, let us imagine that we have two individuals, Paul and Sophia, who differ with respect to their cognitive character. Paul is the least virtuous of the two, unreflectively and automatically accepting whatever the Internet tells him. Sophia is more circumspect, wary of being led astray by (the now non-existent) fake news.

Inasmuch as we see the exercise of intellectual virtue as necessary for online knowledge, it looks unlikely that poor old Paul can be said to know very much. This is because the truth of Paul’s beliefs is not the result of anything that warrants the label ‘intellectual virtue.’ Paul, of course, does have a lot of true beliefs, but the truth of these beliefs does not stem from the exercise of his intellectual virtues—if, indeed, he has any. In fact, inasmuch as there is any evidence of virtue in play here, it is probably best attributed to the technologies that work to ensure the safety of the online environment. The factive status of Paul’s beliefs thus has more to do with the reliability of the Internet than it does with the elements of his cognitive character.

But is it correct to say that Paul has no online knowledge in this situation? Personally, I do not have this intuition. In other words, in a perfectly safe environment, I can see no reason why we should restrict knowledge attributions to agents whose beliefs are true specifically as the result of intellectual virtue. My sense is that even the most unreflective of agents could be credited with knowledge in a situation where there was no possibility of them being wrong. And if that is indeed the case, then why insist that it is only the exercise of intellectual virtue that underwrites positive epistemic standing?

After all, it seems perfectly possible, to my mind, that Sophia’s epistemic caution contributes no more to the minimization of error in an epistemically benign (i.e., safe) environment than does Paul’s uncritical acceptance. (In fact, given the relative efficiency of their doxastic systems, it may very well be the case that Sophia ends up with fewer true beliefs than Paul.) It might be claimed that this case is invalidated by a failure to consider the modal stability of an agent’s beliefs relative to close possible worlds, as well as perhaps their sensitivity to counterfactual error possibilities. But given the way in which the case is characterized, I suggest that there are no close possible worlds that should worry us—the cybersecurity and fact checking technologies are, let us assume, sufficiently robust as to ensure the modal distance of those worrisome worlds.

One implication of all this is to raise doubts about the necessity of intellectual virtue, relative to our conceptual understanding of knowledge. If there are cases where intellectual virtue is not required for positive epistemic standing, then intellectual virtue cannot be a necessary condition for knowledge attribution. And if that is the case, then why should intellectual virtue form the basis of an approach that is intended to deal with the epistemic shortcomings of the (contemporary) Internet?

Part of the attraction of virtue epistemology, I suspect, is the way in which a suite of generally reliable processes are inextricably linked to the agent who is the ultimate target of epistemic evaluation. This linkage, which is established via the appeal to cognitive character, helps to ensure the portability of an agent’s truth-tracking capabilities—it helps to ensure, in other words, that wherever the agent goes their reliable truth-tracking capabilities are sure to follow.

However, in an era where our doxastic systems are more-or-less constantly plugged into a reliable and epistemically safe environment, it is not so clear that agential capabilities are relevant to epistemic standing. This, I suggest, raises doubts about the necessity of intellectual virtue in securing positive epistemic status, and it also (although this is perhaps less clear) encourages us to focus our attention on some of the engineering efforts (as opposed to agent-oriented educational programs) that might be required to make the online world an epistemically safer place.

Conclusion

What, then, should we make of the appeal to virtue epistemology in our attempt to deal with the epistemic hazards of the Internet? My main concern is that the appeal to virtue epistemology (and the emphasis placed on intellectual virtue) risks an unproductive focus on individual human agents at the expense of both the technological and social features of the online world. This certainly does not rule out the relevance of virtue theoretic approaches as part of our attempt to understand the epistemic significance of the Internet, but other approaches (e.g., modal reliabilism, process reliabilism, distributed reliabilism, and systems-oriented social epistemology) also look to be important.

Personally, I remain agnostic with regard to the relevance of different epistemological approaches, although I worry about the extent to which virtue epistemology is best placed to inform policy-related decisions (e.g., those relating to education). In particular, I fear that by focusing our attention on individual agents and issues of intellectual virtue, we risk overlooking some of the socio-epistemic benefits of the Internet, denigrating a particular technology (e.g., personalized search) on account of its failure to enhance individual knowledge, while ignoring the way a technology contributes to more collective forms of epistemic success.

In concluding his thought-provoking paper on virtue epistemology and the Internet, Richard suggests that “there is an important role for educators to teach and assess [intellectual] virtues as part of formal school and university curricula, perhaps as part of critical thinking courses” (Heersmink, 2018, p. 10). I have said relatively little about this particular issue in the present paper. For what it’s worth, however, I can see no reason to object to the general idea of Internet-oriented educational policies. The only caveat, perhaps, concerns the relative emphasis that might be placed on the instillation of intellectual virtue as opposed to the inculcation of technical skills, especially those that enable future generations to make the online world a safer place.

No doubt there is room for both kinds of pedagogical program (assuming they can even be dissociated). At the very least, it seems to me that the effort to resolve a problem (i.e., engineer a safer Internet) is just as important as the effort to merely cope with it (i.e., acquire a virtuous cognitive character). But, in any case, when it comes to education and learning, we should not lose sight of the fact that the Internet is itself something that is used for educational purposes. Perhaps, then, the more important point about education and the Internet is not so much the precise details of what gets taught, so much as the issue of whether the Internet (with all its epistemic foibles) is really the best place to learn.

Contact details: ps02v@ecs.soton.ac.uk

References

Goldman, A. I. (2011). A guide to social epistemology. In A. I. Goldman and D. Whitcomb (Eds.), Social Epistemology: Essential Readings, pp. 11–37. New York, New York, USA: Oxford University Press.

Hannak, A., P. Sapiezynski, A. Molavi Kakhki, B. Krishnamurthy, D. Lazer, A. Mislove, and C. Wilson (2013). Measuring personalization of Web search. In D. Schwabe, V. Almeida, H. Glaser, R. Baeza-Yates, and S. Moon (Eds.), Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, pp. 527–538. ACM.

Heersmink, R. (2018). A virtue epistemology of the Internet: Search engines, intellectual virtues, and education. Social Epistemology 32 (1), 1–12.

Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass 8 (2), 117–125.

Palermos, S. O. (2015). Active externalism, virtue reliabilism and scientific knowledge. Synthese 192 (9), 2955–2986.

Pritchard, D. (2009). Knowledge. Basingstoke, England, UK: Palgrave Macmillan.

Smart, P. R. (in pressa). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. (in pressb). Knowledge machines. The Knowledge Engineering Review.

Smart, P. R. (in pressc). Mandevillian intelligence. Synthese.

Smart, P. R. (in pressd). Mandevillian intelligence: From individual vice to collective virtue. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Socially Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. and N. R. Shadbolt (in press). The World Wide Web. In J. Chase and D. Coady (Eds.), The Routledge Handbook of Applied Epistemology. New York, New York, USA: Routledge.

Watson, D. and L. Floridi (2018). Crowdsourced science: Sociotechnical epistemology in the e-research paradigm. Synthese 195 (2), 741–764.

Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis 72 (1), 17–35.

[1] This work is supported under SOCIAM: The Theory and Practice of Social Machines. The SOCIAM Project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/1 and comprises the Universities of Southampton, Oxford and Edinburgh.

[2] See http://www.xorbin.com/tools/sha256-hash-calculator [accessed: 30th January 2018].
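Note [2] cites an online SHA-256 calculator; the same computation is available locally through Python's standard library. As a minimal sketch (the message string below is an arbitrary illustration, not an example drawn from the article):

```python
import hashlib

# Compute the SHA-256 digest of a short message, as the online
# calculator cited in note [2] would. The same input always
# yields the same 64-character hexadecimal digest.
message = "hello"
digest = hashlib.sha256(message.encode("utf-8")).hexdigest()
print(digest)
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

Because the function is deterministic, any reader can reproduce a published digest and confirm that a document has not been altered.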

Author Information: Robert Frodeman, University of North Texas, robert.frodeman@unt.edu

Frodeman, Robert. “The Politics of AI.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 48-49.

The pdf of the article provides specific page references. Shortlink: https://wp.me/p1Bfg0-3To

This robot, with its evocatively cute face, would turn its head toward the most prominent human face it could see.
Image from Jeena Paradies via Flickr / Creative Commons

 

New York Times columnist Thomas Friedman has been a cheerleader for technology for decades. He begins an early 2018 column by declaring that he wants to take a break from the wall-to-wall Trump commentary. Instead, ‘While You Were Sleeping’ consists of an account of the latest computer wizardry that’s occurring under our noses. What Friedman misses is that he is still writing about Trump after all.

His focus is on quantum computing. Friedman revisits a lab he had toured a mere two years earlier; on that first visit he had come away impressed, but feeling that “this was Star Wars stuff — a galaxy and many years far away.” To his surprise, however, the technology had moved more quickly than anticipated: “clearly quantum computing has gone from science fiction to nonfiction faster than most anyone expected.”

Friedman hears that quantum computers will work 100,000 times faster than the fastest computers today, and will be able to solve unimaginably complex problems. Wonders await – such as the NSA’s ability to crack the hardest encryption codes. Not that there is any reason for us to worry about that; the NSA has our best interests at heart. And in any case, the Chinese are working on quantum computing, too.

Friedman does note that this increase in computing power will lead to the supplanting of “middle-skill and even high-skill work.” Which he allows could pose a problem. Fortunately, there is a solution at hand: education! Our educational system simply needs to adapt to the imperatives of technology. This means not only K-12 education, and community colleges and universities, but also lifelong worker training. Friedman reports on an interview with IBM CEO Ginni Rometty, who told him:

“Every job will require some technology, and therefore we’ll need to revamp education. The K-12 curriculum is obvious, but it’s the adult retraining — lifelong learning systems — that will be even more important…. Some jobs will be displaced, but 100 percent of jobs will be augmented by AI.”

Rometty notes that technology companies “are inventing these technologies, so we have the responsibility to help people adapt to it — and I don’t mean just giving them tablets or P.C.s, but lifelong learning systems.”

For that’s how it works: people adapt to technology, rather than the other way around. And what if our job gets outsourced or taken over by a machine? Friedman then turns to education-to-work expert Heather McGowan: workers “must reach up and learn a new skill or in some ways expand our capabilities as humans in order to fully realize our collaborative potential.” Education must become “a continuous process where the focused outcome is the ability to learn and adapt with agency as opposed to the transactional action of acquiring a set skill.” It all sounds rather rigorous, frog-marched into the future for our own good.

Which should have brought Friedman back to Trump. Friedman, Rometty, and McGowan are failing to reckon with the results of the last election. Clinton lost the crucial states of Pennsylvania, Wisconsin, and Michigan by a total of 80,000 votes. Clinton lost these states in large part because of the disaffection of white, non-college educated voters, people who have been hurt by previous technological development, who are angry about being marginalized by the ‘system’, and who pine for the good old days, when America was Great and they had a decent paycheck. Of course, Clinton knew all this, which is why her platform, Friedman-like, proposed a whole series of worker re-education programs. But somehow the coal miners were not interested in becoming computer programmers or dental hygienists. They preferred to remain coal miners – or actually, not coal miners. And Trump rode their anger to the White House.

Commentators like Friedman might usefully spend some of their time speculating on how our politics will be affected as worker displacement moves up the socio-economic scale.

At root, Friedman and his cohorts remain children of the Enlightenment: universal education remains the solution to the political problems caused by run-amok technological advance. This, however, assumes that ‘all men are created equal’ – and not only in their ability, but also in their willingness to become educated, and then reeducated again, and once again. They do not seem to have considered the possibility that a sizeable minority of Americans—or any other nationality—will remain resistant to constant epistemic revolution, and that rather than engaging in ‘lifelong learning’ they are likely to channel their displacement by artificial intelligence into angry, reactionary politics.

And as AI ascends the skills ladder, the number of the politically roused is likely to increase, helped along by the demagogue’s traditional arts, now married to the focus-group phrases of Frank Luntz. Perhaps the machinations of turning ‘estate tax’ into ‘death tax’ won’t fool the more sophisticated. It’s an experiment that we are running now, with a middle-class tax cut just passed by Congress that diminishes each year until, in a few years, it turns into a tax increase. But how many will notice the latest scam?

The problem, however, is that even if those of us who live in non-shithole countries manage to get with the educational program, that still leaves “countries like Egypt, Pakistan, Iran, Syria, Saudi Arabia, China and India — where huge numbers of youths are already unemployed because they lack the education for even this middle-skill work THAT’S [sic] now being automated.” A large cohort of angry, displaced young men ripe for apocalyptic recruitment. I wonder what Friedman’s solution is to that.

The point that no one seems willing to raise is whether it might be time to question the cultural imperative of constant innovation.

Contact details: robert.frodeman@unt.edu

References

Friedman, Thomas. “While You Were Sleeping.” New York Times. 16 January 2018. Retrieved from https://www.nytimes.com/2018/01/16/opinion/while-you-were-sleeping.html

Author Information: Emma Stamm, Virginia Tech, stamm@vt.edu

Stamm, Emma. “Retooling ‘The Human.’” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 36-40.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3SW

Ashley Shew’s Animal Constructions and Technological Knowledge challenges philosophers of technology with the following provocation: What would happen if we included tools made and used by nonhuman animals in our broad definition of “technology”?

Throughout Animal Constructions, Shew makes the case that this is more than simply an interesting question. It is, she says, a necessary interrogation within a field that may well be suffering from a sort of speciesist myopia. Blending accounts from a range of animal case studies — including primates, cetaceans, crows, and more — with pragmatic theoretical analysis, Shew demonstrates that examining animal constructions through a philosophical lens not only expands our awareness of the nonhuman world, but has implications for how humans should conceive of their own relationship with technology.

At the beginning of Animal Constructions, Shew presents us with “the human clause,” her assessment of “the idea that human beings are the only creatures that can have or do use technology” (14). This misconception stems from the notion of homo faber, “(hu)man the maker” (14), which “sits at the center of many definitions of technology… (and) is apparent in many texts theorizing technology” (14).

It would appear that this precondition for technology, long taken as dogma by technologists and philosophers alike, is less stable than has often been assumed. Placing influential ideas from philosophers of technology in dialogue with empirical field and (to a lesser extent) laboratory studies conducted on animals, Shew argues that any thorough philosophical account of technology not only might, but must include objects made and used by nonhuman animals.

Animal Constructions and Technological Knowledge lucidly demonstrates this: by the conclusion, readers may wonder how the intricate ecosystem of animal tool-use has been so systematically excluded from philosophical treatments of the technical. Shew has accomplished much in recasting a disciplinary norm as a glaring oversight — though the oversight may be forgivable, considering the skill set required to correct it. The book’s ambitions demand not only fluency with interdisciplinary research methods, but acute sensitivity to each of the disciplines it mobilizes.

Animal Constructions is a philosophical text wholly committed to representing science and technology on their own terms while speaking to a primarily humanities-based audience, a balance its author strikes gracefully. Indeed, Shew’s transitions from the purely descriptive to the interpretive are, for the most part, seamless. For example, in her chapter on cetaceans, she examines the case of dolphins trained to identify man-made objects of a certain size category (60), noting that the success of this initiative indicates that dolphins have the human-like capacity to think in abstract categories. This interpretation feels natural and very reasonable.

Importantly, the studies selected are neither conceptually simple, nor do they appear cherry-picked to serve her argument. A chapter titled “Spiderwebs, Beaver Dams, and Other Contrast Cases” (91) explores research on animal constructions that do not entirely fit the author’s definitions of technology. Here, it is revealed that while this topic is necessarily complicated for techno-philosophers, these complexities do not foreclose the potential for the nonhuman world to provide humans with a greater awareness of technology in theory and practice.

Ambiguous Interpretations

That being said, in certain parts the empirical observations Shew uses to make her argument seem questionable. In a chapter on ape and primate cases, readers are given the tale of Santino, a chimpanzee in a Swedish zoo with the pesky habit of storing stones specifically to throw at visitors (40). Investigators declared this behavior “the first unambiguous evidence of forward-planning in a nonhuman animal” (40) — a claim that may seem spurious, since many of us have witnessed dogs burying bones to dig up in the future, or squirrels storing food for winter.

However, as with every case study in the book, the story of Santino comes from well-documented, formal research, none of which was conducted by the author herself. If factual claims such as this one were found to be erroneous, that would not be a flaw of the book itself. Moreover, so many examples are used that the larger arguments of Animal Constructions will hold up even if parts of the science on which it relies come to be revised.

In making the case for animals so completely, Animal Constructions and Technological Knowledge is a success. The book also makes a substantial contribution with the methodological frameworks it gives to those interested in extending its project. Animal Constructions is as much conceptual cartography as it is a work of persuasion: Shew not only orients readers to her discipline — she does not assume readerly familiarity with its academic heritage — but provides a map that philosophers may use to situate the nonhuman in their own reflection on technology. This is largely why Animal Constructions is such a notable text for 21st century philosophy, as so many scholars are committed to rethinking “the human” in the wake of recent innovations in technoscience.

Animal Knowledge

Animal Constructions is of particular interest to critical and social epistemologists. Its opening chapters introduce a handful of ideas about what defines technical knowledge, concepts that bear on the author’s assessment of animal activity. Historically, Shew writes, philosophers of technology have furnished us with two types of accounts of technical knowledge. The first sees technology as constituting a unique case for philosophers (3).

In this view, the philosophical concerns of technology cannot be reduced to those of science (or, indeed, any domain of knowledge to which technology is frequently seen as subordinate). “This strain of thought represents a negative reaction to the idea that philosophy is the handmaiden of science, that technology is simply ‘applied science,’” she writes (3). It is a line of reasoning that relies on a careful distinction between “knowing how” and “knowing that,” claiming that technological knowledge is, principally, skillfulness of the first kind: know-how, or knowledge about “making or doing something” (3), as opposed to the “textbook” knowledge of the second. Here, philosophy of technology is demarcated from philosophy of science in that it exists outside the realm of theoretical epistemologies, i.e., knowledge bodies that have been abstracted from contextual application.

If “know-how” is indeed the foundation for a pragmatic philosophy of technology, the discipline would seem to openly embrace animal tools and constructions in its scope. After all, animals clearly “know how” to engage the material world. However, as Shew points out, most technology philosophers who abide by this dictum in fact lean heavily on the human clause. “This first type of account nearly universally insists that human beings are the sole possessors of technical knowledge” (4), she says, referencing the work of philosophers A. Rupert Hall, Edwin T. Layton, Walter Vincenti, Carl Mitcham, and Joseph C. Pitt (3) as evidence.

The human clause is also present in the second account, although it is not nearly so deterministic. This camp has roots in the philosophy of science (6) and “sees knowledge as embodied in the objects themselves” (6). Here, Shew draws from the theorizations of Davis Baird, whose concept “thing knowledge” — “knowledge that is encapsulated in devices or otherwise materially instantiated” (6) — recurs throughout the book’s chapters specifically devoted to animal studies (chapters 4, 5, 6 and 7).

Scientific instruments are offered as perhaps the most exemplary cases of “thing knowledge,” but specialized tools made by humans are far from the only knowledge-bearing objects. The parameters of “thing knowledge” allow for more generous interpretations: Shew offers that Baird’s ideas include “know-how that is demonstrated or instantiated by the construction of a device that can be used by people or creatures without the advanced knowledge of its creators” (6). This is a wide category indeed, one that can certainly accommodate animal artefacts.

Image from Sergey Rodovnichenko via Flickr / Creative Commons

 

The author adapts this understanding of thing-knowledge, along with Davis Baird’s five general ideals for knowledge — detachment, efficacy, longevity, connection and objectivity (6) — as a scale within which some artefacts made and used by animals may be thought of as “technologies” and others not. Positioned against “know-how,” “thing knowledge” serves as the other axis for this framework (112-113). Equally considered is the question of whether animals can set intentions and engage in purpose-driven behavior. Shew suggests that animal constructions which result from responses to stimuli, instinctive behavior, or other byproducts of evolutionary processes may not count as technology in the same way that artefacts which seem to come from purposiveness and forward-planning would (6-7).

Noting that intentionality is a tenuous issue in animal studies (because we can’t interview animals about their reasons for making and using things), Shew indicates that observations on intentionality can, at least in part, be inferred by exploring related areas, including “technology products that encode knowledge,” “problem-solving,” and “innovation” (9). These characteristics are taken up throughout each case study, albeit in different ways and to different ends.

At its core, the manner in which Animal Constructions grapples with animal cognition as a precursor to animal technology is an epistemological inquiry into the nonhuman. In the midst of revealing her aims, Shew writes: “this requires me to address questions about animal minds — whether animals set intentions and how intentionality evolved, whether animals are able to innovate, whether they can problem solve, how they learn — as well as questions about what constitutes technology and what constitutes knowledge” (9). Her answer to the animal-specific queries is a clear “yes,” although this yes comes with multiple caveats.

Throughout the text, Shew notes the propensity of research and observation to alter objects under study, clarifying that our understanding of animals is always filtered through a human lens. With a nod to Thomas Nagel’s famous essay “What Is It Like To Be A Bat?” (34), she maintains that we do not, in fact, know what it is like to be a chimpanzee, crow, spider or beaver. However, much more important to her project is the possibility that caution around perceived categorical differences, often foregrounded in the name of scholarly self-reflexivity, can hold back understanding of the nonhuman.

“In our fear of anthropomorphization and desire for a sparkle of objectivity, we can move too far in the other direction, viewing human beings as removed from the larger animal kingdom,” she declares (16).

Emphasizing kinship and closeness over remoteness and detachment, Shew’s pointed proclamations about animal life rest on the overarching “yes”: yes, animals solve problems, innovate, and set intentions. They also transmit knowledge culturally and socially. Weaving these observations together, Shew suggests that our anthropocentrism represents a form of bias (108); as with all biases, it stifles discourse and knowledge production for the fields within which it is imbricated — here, technological knowledge.

While this work explicitly pertains to technology, the lingering question of “what constitutes knowledge overall?” does not vanish in the details. Shew’s take on what constitutes animal knowledge has immediate relevance to work on knowledge made and manipulated by nonhumans. By the book’s end, it is evident that animal research can help us unhinge “the human clause” from our epistemology of the technical, facilitating a radical reinvestigation of both tool use and materially embodied knowledge.

Breaking Down Boundaries

But its approach has implications for taxonomies that divide not only humans from animals, but also humans and animals from entities outside the animal kingdom. Although it is beyond the scope of this text, the methods of Animal Constructions can easily be applied to digital “minds” and artificial general intelligence, along with plant and fungus life. (One can imagine a smooth transition from a discussion of spider web-spinning, p. 92, to the casting of spores by algae and mushrooms.) In that it excavates taxonomies and affirms the violence done by categorical delineations, Animal Constructions bears surface resemblance to the work of Michel Foucault and Donna Haraway. However, its commitment to positive knowledge places it in a tradition that more boldly supports the possibilities of knowing than do the legacies of Foucault and Haraway. That is to say, the offerings of Animal Constructions are not designed to self-deconstruct or ironically self-reflect.

In its investigation of the flaws of anthropocentrism, Animal Constructions implies a deceptively straightforward question: what work does “the human clause” do for us? In other words, what has led “the human” to become so inexorably central to our technological and philosophical consciousness? Shew does not address this head-on, but she does give readers plenty of material to begin answering it for themselves. And perhaps they should: while the text resists ethical statements, there is an ethos to this particular question.

Applied at the societal level, an investigation of the roots of “the human clause” could be leveraged toward democratic ends. If we do, in fact, include tools made and used by nonhuman animals in our definition of technology, it may mar the popular image of technological knowledge as a sort of “magic” or erudite specialization only accessible to certain types of minds. There is clear potential for this epistemological position to be advanced in the name of social inclusivity.

Whether or not readers detect a social project among the conversations engaged by Animal Constructions, its relevance to future studies is undeniable. The maps provided by Animal Constructions and Technological Knowledge do not tell readers where to go, but will certainly come in useful for anybody exploring the nonhuman territories of the 21st century. Indeed, Animal Constructions and Technological Knowledge is not only a substantive offering to the philosophy of technology, but a set of tools whose true power may only be revealed in time.

Contact details: stamm@vt.edu

References

Shew, Ashley. Animal Constructions and Technological Knowledge. Lanham, MD: Lexington Books, 2017.

Technoprogressive Declaration

SERRC —  November 23, 2014 — 5 Comments

Editor’s Note: We thank the Institute for Ethics and Emerging Technologies and the members of the Technoprogressive Caucus at Transvision 2014 (21 November), in Paris, for allowing us to repost the Technoprogressive Declaration. The caucus invites individual and organizational co-signatories between now and the end of the year. The SERRC invites comment, below, from any and all readers. The SERRC will reply over the coming month.


Image credit: Daniela Goulart, via flickr

Technoprogressive Declaration

The world is unacceptably unequal and dangerous. Emerging technologies could make things dramatically better or worse. Unfortunately too few people yet understand the dimensions of both the threats and rewards that humanity faces. It is time for technoprogressives, transhumanists and futurists to step up our political engagement and attempt to influence the course of events.

Our core commitment is that both technological progress and democracy are required for the ongoing emancipation of humanity from its constraints. Partisans of the promises of the Enlightenment, we have many cousins in other movements for freedom and social justice. We must build solidarity with these movements, even as we intervene to point to the radical possibilities of technologies that they often ignore. With our fellow futurists and transhumanists we must intervene to insist that technologies are well-regulated and made universally accessible in strong and just societies. Technology could exacerbate inequality and catastrophic risks in the coming decades or, especially if democratized and well-regulated, ensure longer, healthier and more enabled lives for growing numbers of people, and a stronger and more secure civilization.

Beginning with our shared commitment to individual self-determination we can build solidarity with

  • Organizations defending workers and the unemployed, as technology transforms work and the economy
  • The movement for reproductive rights, around access to contraception, abortion, assisted reproduction and genomic choice
  • The movement for drug law reform around the defense of cognitive liberty
  • The disability rights movement around access to assistive and curative technologies
  • Sexual and gender minorities around the right to bodily self-determination
  • Digital rights movements around new freedoms and means of expression and organization

We call for dramatically expanded governmental research into anti-aging therapies, and universal access to those therapies as they are developed in order to make much longer and healthier lives accessible to everybody. We believe that there is no distinction between “therapies” and “enhancement.”  The regulation of drugs and devices needs reform to speed their approval.

As artificial intelligence, robotics and other technologies increasingly destroy more jobs than they create, and senior citizens live longer, we must join in calling for a radical reform of the economic system. All persons should be liberated from the necessity of the toil of work. Every human being should be guaranteed an income, healthcare, and life-long access to education.

We must join in working for the expansion of rights to all persons, human or not.

We must join with movements working to reduce existential risks, educating them about emerging threats they don’t yet take seriously, and proposing ways that emerging technologies can help reduce those risks. Transnational cooperation can meet the man-made and natural threats that we face.

It is time for technoprogressives to step forward and work together for a brighter future.