
Author Information: David C. Winyard, Mount Vernon Nazarene University, winyard.david@gmail.com

Winyard, David C. “The Promethean Escape.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 1-3.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Kd

Image credit: The New Atlantis

Eclipse of Man: Human Extinction and the Meaning of Progress
Charles T. Rubin
New Atlantis Books, 2014
186 pp.

People obsessed with novel big ideas can find a lot to like in the transhumanism movement. It imagines technoscientific solutions to all manner of worldly ills, and technical challenges are of little concern to transhumanists. Things that are too difficult now (e.g., immortality) will, the thinking goes, inevitably yield to the super-intelligent artificial minds that will emerge after Ray Kurzweil’s Singularity.

Charles T. Rubin’s Eclipse of Man is a big-picture analysis and critique of transhumanism’s big ideas. It examines how Enlightenment conceptions of progress have given way to visions of a dehumanized future. In clear and artful ways, Rubin exposes the movement’s unwarranted assumptions. He concludes that transhumanism’s long-term consequences are incomprehensible and therefore not worthy of rational pursuit. Anyone interested in transhumanism—critics and advocates alike—may benefit from considering both the logic and art of Rubin’s arguments.

The Moral Vision of Transhumanism

To start, Rubin justifies his broad-brush look at transhumanism, with its assorted religious persuasions: atheist, Mormon, Christian, emergent, and many more. Although they differ, Rubin notes that they “often aim to adopt a ‘big tent’ outlook that seeks to minimize the sectarian differences implicit in the different designations.” By focusing on central themes, he does not get bogged down in factional details but presses on toward his goal: to examine “transhumanism’s moral vision of the future.”

Rubin begins his story of dehumanization with the Marquis de Condorcet (1743–1794). Condorcet’s Enlightenment rationalism focused exclusively on improving the lot of human society. Like Francis Bacon before him, and transhumanists today, Condorcet thought human reason could greatly extend lifespans, but he did not believe immortality could be attained. He did hope that “our power over nature will soften the hard edges of the human condition by improving the material conditions of life,” and that this would improve “moral conditions.” Generally, Condorcet’s vision of the future is uncontroversial. Who would not want better and longer lives?

Next, Rubin considers Condorcet’s progress in view of Thomas Robert Malthus (1766–1834) and Charles Darwin (1809–1882). Malthus held that “our future holds great misery and scarcity” because “finite resources limit what human beings can ever hope to accomplish,” but Darwin spins “natural competition as a force for change over time.” According to Rubin, transhumanists today attempt to “reconcile and assimilate these ideas by advocating the end of humanity.” Is Rubin right? Are Enlightenment humanism and evolutionary metanarratives leading toward dehumanization?

Transhuman or Inhuman?

Rubin considers the descent into dehumanization to begin with William Winwood Reade (1838–1875) and to continue through Nikolai Fedorovich Fedorov (1829–1903), Nicolas Camille Flammarion (1842–1925), J.B.S. Haldane (1892–1964), and finally J.D. Bernal (1901–1971). These thinkers increasingly diverge from Condorcet’s humanism, and in the end the goal of progress is redefined: from “better humans” to a paradoxical move “beyond humanity.” Rubin mourns that society is embracing this vision, so that “the eclipse of man is underway.”

Through the next three chapters, Rubin considers recent developments. Chapter Two examines the ongoing Search for Extra-Terrestrial Intelligence (SETI), including a review of its associated science and science fiction. Chapter Three focuses on nanotechnology, comparing Eric Drexler’s nanotechnology visions with Neal Stephenson’s novel The Diamond Age.

In the process, inconsistencies emerge between nanotechnology’s promise and its potential to disrupt human lives and relationships. Chapter Four looks to other transhumanist aspirations and their mysteries. The upshot is that, logically, transhumanism’s assumptions lead to a dehumanized future. Why? Galactic evolutionary competition requires human beings to evolve into technological artifacts. Can we not hope for something better?

Unfortunately, Rubin’s powerful analysis and critique fall flat in Chapter Five, entitled “The Real Meaning of Progress.” It can be summarized in five words: there are no easy answers. Instead of solutions, Rubin points toward attitudes (e.g., humility) that can help society deepen its understanding of what human life means, and of what can be lost by forging ahead without sustained attention to such matters.

Spectres of Icarus

Rubin frames his conclusions by interpreting three paintings of Icarus. The first shows him being launched into the sky by his father. In the second, Icarus realizes the consequences of his disobedience and is struck by terror. The last shows Icarus crashing into the sea, even as common folk go about their business nearby. The paintings effectively illustrate Rubin’s argument. Reality exists between optimistic and pessimistic views of the future, and the meaning of life today must shape the future. Ignoring or diminishing human life as we know it today will surely take us in wrong directions.

Rubin teaches political science, but he does not offer politics as the answer to transhumanism’s challenges. He notes that transhumanism is often sold in techno-libertarian terms, but this may be to distinguish it from eugenics. Freedom of choice is promised, but anyone not adopting technological enhancements would be left behind or forcibly eliminated. The force of evolution cannot be resisted.

By not reducing transhumanism to politics, Rubin differs from Steve Fuller. Fuller’s interest in transhumanism began with history and theology, but he has ended up settling for risky political solutions (e.g., the rehabilitation of eugenics) that can only have near-term effects. Surely, Rubin’s long view of the big picture befits transhumanism’s grand narrative. Perhaps his insights will, in Rubin’s terms, help transhumanism overcome its “peculiar farsightedness”?

Author Information: Lyudmila Markova, Russian Academy of Sciences, markova.lyudmila2013@yandex.ru

Markova, Lyudmila. “Transhumanism in the Context of Social Epistemology.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 50-53.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3EQ


Image credit: Ingmar Zahorsky, via flickr

Robert Frodeman (2015) and Steve Fuller (2017) discuss the problem of transhumanism in the context of history. This approach to transhumanism—a subject now studied actively by many specialists—clearly shows us the focus of current investigations. A virtual world, created by humans and possessing intelligence, demands a special kind of communication. But we are not the only ones changing: humans have programmed robots to perform calculations at a speed we cannot match.

Still, we remain certain that robots will never be able to feel a sense of joy or disappointment, of love or hatred. At the same time, it is difficult to deny that robots already know how to express emotions. Often, we prefer to deal with robots—they are generally polite and answer our questions quickly and clearly.

Even though we cannot perform certain operations at such speeds, we still use the results of the robots’ work. But the senses remain inaccessible to robots. Yet they have learned many of the ways humans make their feelings known to others. A common ground is forming between humans and robots. People use techniques for communication with one another and with artificial intellects based on the laws of the virtual world, and artificial intellects use signs of human feelings—which are only signs and nothing more.

As we debate transhumanism as something that either awaits us in the future or does not, we fail to consider the serious transformations of our current lives. To some extent, we are already trans-humanoids, with artificial body parts, with the ability to change our genome, with cyber technology, with digital communication, and so on. We do not regard these changes as radically transforming our future selves and our current selves. We do not notice to what extent we already differ from our children and, consequently, from the next generation.

Until recently, we relied first on our knowledge of the laws of the material world. We studied nature, and our artificial world was material. Now we study our thinking, and our artificial world is not simply material—it can think and it can understand us. Its materiality, then, is of a different type. The situation in society is quite different and, in order to live in it, we have to change ourselves. Perhaps we have not noticed that the process of our becoming transhumanoids has already begun?

We have a philosophical basis for the discussions about transhumanism. It is social epistemology, where some borders disappear and others appear. Steve Fuller frequently refers to the topic of transhumanism in the context of social epistemology.

“Sociality” in Social Epistemology: The Turn in Thinking

As we speak of both the technization of humans and the humanization of machines, the border between humans and technology becomes less visible. In social epistemology, the sense of “social” is important for understanding this turn in thinking during the last century. Long before the emergence of social epistemology, one could easily find phrases such as the “social” history of science, the “social” organization of scientific institutes, the “social” character of scientific (and not only scientific) knowledge, the “social” character of a scientist’s work, and so on. People created science, and everything associated with it is connected to our world in one way or another.

Nobody denies the existence of these relations. The problem resides in their interpretation. Even if you want to see the advantage of your position in striving to eliminate traces of the scientists’ work and conditions under which the results were obtained, you have to know what you want to eliminate and why. In social epistemology, on the contrary, sociality remains in scientific knowledge. Still, serious problems follow as a result.

It is important to understand that anything we study acquires human features because we introduce them into it. We comprehend nature (in the broadest sense of this word) not as something opposed, or even hostile, to people. We deal with a thinking world. For example, we want to have a house that protects us from rain and cold. It is enough to know the physical characteristics of materials in order to build such a house. But now we can have a “smart” house. This house alerts you, as you return home in the evening, that there is no kefir in the fridge and that the cat needs food you must buy. You like your new car, but you want to have a navigator. We now have driverless cars. And drones are widely used for military and economic purposes. I have listed just a few cases in which robots help us in our daily lives. We are built into this world and we are accustomed to it.

Still, electronics can complicate and hinder our lives. For instance, you drive the most recent Mercedes model. Your car brakes automatically if you follow too closely, and the steering wheel turns in unexpected ways. At the same time, if you drive an old car without any electronic equipment, you feel in control of the situation. The behavior of the machine depends entirely on your actions.

Classical and Non-Classical Logic

Thinking in the context of social epistemology is plugged into empirical reality. This is usually considered an abandonment of logic. But this is not so. The fact is that classical logic has exhausted itself. A new logic, radically different from the classical one, is just emerging. What is the difference?

David Hume, one of the founders of classical philosophy, wrote about the British and the French. They are different peoples, of course, but philosophically they have a common feature—they are humans. Take another example. You are talking to the same person in different situations. In the office, this person is not the same as they are at home or in the street. As a rule, it is not important to you that you are dealing with the same person every time; this is obvious without any justification. The person is interesting from the point of view of their characteristics as a member of a work team or as a family member. Every person manifests themselves in a specific way in a concrete situation. And this fact is taken into account in the new type of logic. This logic is rooted in specific frameworks.

We can see this attention to specific sociality in the formation of social epistemology. It is necessary to understand, in Fuller’s opinion, why scientists reach different results when they generally have the same set of books, the same knowledge, and the same conditions of work. Fuller pays attention to what surrounds the scientist here and now, not in the past. The history and process of the development of scientific knowledge are understood by us with our own logical means. As a result, they inevitably become part of our present.

The notion of space becomes more important than the notion of time. Gilles Deleuze wrote about this in his logic. Robert Frodeman identifies his approach as “field philosophy”. This name captures features of our current thinking. The Russian philosopher Merab Mamardashvili thought that, in order to understand emerging scientific knowledge, it is necessary to consider it outside the “arrow of time”.

The former connection between past and future, in which a new result is deduced from previous knowledge, is no longer suitable. In the last century, dialog became more widespread. Its logical justification in science was given by the scientific revolution in physics at the beginning of the twentieth century. For us, it is important to notice that quantum mechanics replaced classical physics on the front lines of the development of science. But classical physics was not destroyed, and its proponents continue to work and give society useful results. This feature of non-classical scientific logic is noteworthy: it does not declare its predecessor unscientific, or incapable of solving the corresponding problems. Moreover, this new logic needs its predecessor and dialogical communication with it. In the course of this dialog both sides change, trying to improve their positions, in the same way as when two people talk.

That is why I do not agree with Justin Cruickshank (2015) when he writes that Karl Popper’s idea of fallibilism is connected in some way with dialog. For Popper, the main aim is to criticize and, in the end, to falsify and destroy a theory in order to replace it with a new theory. As a result, dialog becomes impossible, because dialog requires at least two interlocutors or theories. For Popper, the ideal situation is when we deal with one person, a winner. In Russia, the topic of dialog was studied by Mikhail Bakhtin and Vladimir Bibler.

Context

Dialog is one of the forms of communication between different events in history. If we consider, as an ideal, all studied events from the point of view of their common characteristics, then we deal with one person and have nobody for dialog. The differing conditions of a scientist’s, or any other person’s, work are not taken into consideration. We have classical thinking—one subject, one object, one logic.

As I understand Ilya Kasavin (2017), he does not investigate the construction of the Kara-Kum Canal as an inference from Peter the Great’s plan. A connection exists between these two projects, yet each of them is considered as unique, as having its own context. So it is not correct to ask Kasavin: “What traces and records were left of the project imagined by Peter the Great, how were they interpreted and reinterpreted over the course of hundreds of years, and how, if at all, did they influence Stalin’s project?” (Bakhurst and Sismondo, 2017). The “arrow of time” as a coherent chain of events from Peter the Great to Stalin exists. But within the frame of non-classical thinking it is not necessary to study this chain first, and in all its detail, in order to understand the situation surrounding the construction of the Kara-Kum Canal.

The same may be said about the emergence of transhumanism as a scientific area. It is created in a context that is formed from the outside world by choosing those elements that can help us comprehend some problem. One of the most important features of the context is the presence of both ideal elements (past scientific knowledge, for instance) and material elements of the world existing around us. Context, as a whole, is the beginning of a new result when we think, and it is not surprising that we have a notion of transhumanism that contains both the ability to think and the material carrier of a thought. Robotics corresponds to this understanding of transhumanism, and that helps us to see the border between human and robot as less defined.

Conclusion

We see current signs of human transformation that seemed impossible just a few decades ago. Even those who are against such changes do not object to them when they seek medical help or when they have the opportunity to ease their everyday lives. In many cases, then, radical changes go against our will, and yet we do not protest against them.

We are creating our artificial world on the basis of knowledge not only of the material world but also of our thinking. We put this knowledge into the surrounding world in the process of investigation, and we cannot imagine that world without the ability to think. The world is becoming able to think, to understand us, to answer our questions.

As our thinking becomes different, we notice its turn. It is directed not at nature, at the world around us, but at humans. At the same time, nature acquires certain human characteristics. This turn is the basis of many serious problems connected initially with notions of the truth and objectivity of scientific knowledge. But these problems are not the topic of this comment.

References  

Bakhurst, David and Sergio Sismondo. “Commentary on Ilya Kasavin’s ‘Towards a Social Philosophy of Science: Russian Prospects’.” Social Epistemology Review and Reply Collective 6, no. 4 (2017): 20-23.

Cruickshank, Justin. “Anti-Authority: Comparing Popper and Rorty on the Dialogic Development of Beliefs and Practices.” Social Epistemology 29, no. 1 (2015): 73-94.

Frodeman, Robert. “Anti-Fuller: Transhumanism and the Proactionary Imperative.” Social Epistemology Review and Reply Collective 4, no. 4 (2015): 38-43.

Fuller, Steve. “Twelve Questions on Transhumanism’s Place in the Western Philosophical Tradition.” Social Epistemology Review and Reply Collective, 19 April 2017. http://wp.me/p1Bfg0-3yl.

Kasavin, Ilya. “Towards a Social Philosophy of Science: Russian Prospects.” Social Epistemology 31, no. 1 (2017): 1-15.

Author Information: Ben Ross, University of North Texas, benjamin.ross@my.unt.edu

Ross, Ben. “Between Poison and Remedy: Transhumanism as Pharmakon.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 23-26.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3zU


Image credit: Jennifer Boyer, via flickr

As a Millennial, I have the luxury of being able to ask in all seriousness, “Will I be the first generation safe from death by old age?” While the prospects of answering in the affirmative may be dim, they are not preposterous. The idea that such a question can even be asked with sincerity, however, testifies to transhumanism’s reach into the cultural imagination.

But what is transhumanism? Until now, we have failed to answer in the appropriate way, remaining content to describe its possible technological manifestations or trace its historical development. Therefore, I would like to propose an ontology of transhumanism. When philosophers speak of ontologies, they are asking a basic question about the being of a thing—what is its essence? I suggest that transhumanism is best understood as a pharmakon.

Transhumanism as a Pharmakon

Derrida points out in his essay “Plato’s Pharmacy” that while pharmakon can be translated as “drug,” it means both “remedy” and “poison.” It is an ambiguous in-between, containing opposite definitions that can both be true depending on the context. As Michael Rinella notes, hemlock, most famous for being the poison that killed Socrates, when taken in smaller doses induces “delirium and excitement on the one hand,” yet it can be “a powerful sedative on the other” (160). Rinella also goes on to say that there are more than two meanings to the term. While the word was used to denote a drug, Plato “used pharmakon to mean a host of other things, such as pictorial color, painter’s pigment, cosmetic application, perfume, magical talisman, and recreational intoxicant.” Nevertheless, Rinella makes the crucial remark that “One pharmakon might be prescribed as a remedy for another pharmakon, in an attempt to restore to its previous state an identity effaced when intoxicant turned toxic” (237-238). It is precisely this “two-in-one” aspect of the application of a pharmakon that reveals it to be the essence of transhumanism; it can be both poison and remedy.

To further this analysis, consider “super longevity,” which is the subset of transhumanism concerned with avoiding death. As Harari writes in Homo Deus, “Modern science and modern culture…don’t think of death as a metaphysical mystery…for modern people death is a technical problem that we can and should solve.” After all, he declares, “Humans always die due to some technical glitch” (22). These technical glitches, e.g., when one’s heart ceases to pump blood, are the bane of researchers like Aubrey de Grey, and fixing them forms the focus of his “Strategies for Engineered Negligible Senescence.” There is nothing in de Grey’s approach to suggest that there is any human technical problem that does not potentially have a human technical solution. De Grey’s techno-optimism represents the “remedy-aspect” of transhumanism as a view in which any problems—even those caused by technology—can be solved by technology.

As a “remedy,” transhumanism is based on a faith in technological progress, despite such progress being uneven, with beneficial effects that are not immediately apparent. For example, even if de Grey’s research does not result in the “cure” for death, his insight into anti-aging techniques and the resulting applications still have the potential to improve a person’s quality of life. This reflects Max More’s definition of transhumanism as “The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities” (3).

Importantly, More’s definition emphasizes transcendent enhancement, and it is this desire to be “upgraded” which distinguishes transhumanism. An illustration of the emergence of the upgrade mentality can be seen in the history of plastic surgery. Harari writes that while modern plastic surgery was born during the First World War as a treatment to repair facial injuries, upon the war’s end, surgeons found that the same techniques could be applied not to damaged noses, but to “ugly” ones, and “though plastic surgery continued to help the sick and wounded…it devoted increasing attention to upgrading the healthy” (52). Through its secondary use as an elective surgery of enhancement rather than exclusively as a technique for healing, one can see an example of the evolution of transhumanist philosophy out of medical philosophy—if the technology exists to change one’s face (and one has the money for it), a person should be morphologically free to take advantage of the enhancing capabilities of such a procedure.

However, to take a view of a person only as “waiting to be upgraded” marks the genesis of the “poison-aspect” of transhumanism as a pharmakon. One need not look farther than Martin Heidegger to find an account of this danger. In his 1954 essay, “The Question Concerning Technology,” Heidegger suggests that the threat of technology is ge-stell, or “enframing,” the way in which technology reveals the world to us primarily as a stock of resources to be manipulated. For him, the “threat” is not a technical problem for which there is a technical solution, but rather it is an ontological condition from which we can be saved—a condition which prevents us from seeing the world in any other way. Transhumanism in its “poison mode,” then, is the technological understanding of being—a singular way of viewing the world as a resource waiting to be enhanced. And what is problematic is that this way of revealing the world comes to dominate all others. In other words, the technological understanding of being comes to be the understanding of being.

However, a careful reading of Heidegger’s essay suggests that it is not a techno-pessimist’s manifesto. Technology has pearls concealed within its perils. Heidegger suggests as much when he quotes Hölderlin, “But where danger is, grows the saving power also” (333). Heidegger is asking the reader to avoid either/or dichotomous thinking about the essence of technology as something that is either dangerous or helpful, and instead to see it as a two-in-one. He goes to great lengths to point out that the “saving power” of technology, which is to say, of transhumanism, is that its essence is ambiguous—it is a pharmakon. Thus, the self-same instrumentalization that threatens to narrow our understanding of being also has the power to save us and force a consideration of new ways of being, and most importantly for Heidegger, new meanings of being.

Curing Death?

A transhumanist, and therefore pharmacological, take on Heidegger’s admonishment might be something as follows: In the future it is possible that a “cure” for death will threaten what we now know as death as a source of meaning in society—especially as it relates to a Christian heaven in which one yearns to spend an eternity, sans mortal coil. While the arrival of a death-cure will prove to be “poison” for a traditional understanding of Christianity, that same techno-humanistic artifact will simultaneously function as a “remedy,” spurring a Nietzschean transvaluation of values—that is, such a “cure” will arrive as a technological Zarathustra, forcing a confrontation with meaning, bringing news that “the human being is something that must be overcome” and urging us to ask anew, “what have you done to overcome him?” At the very least, as Steve Fuller recently pointed out in an interview, “transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection.” For those sympathetic to Leon Kass’ brand of repugnance, such suggestions are poison, and yet for a transhumanist such suggestions are a remedy to the glitch called death and the ways in which we relate to our finitude.

A more mundane example of the simultaneous danger and saving power of technology might be the much-hyped Google Glass—or in more transhuman terms, having Google Glass implanted into one’s eye sockets. While this procedure may conceal ways of understanding the spaces and people surrounding the wearer other than through the medium of the lenses, the lenses simultaneously have the power to reveal entirely new layers of information about the world and to connect the wearer to the environment and to others in new ways.

With these examples it is perhaps becoming clear that by re-casting the essence of transhumanism as a pharmakon instead of an either/or dichotomy of purely techno-optimistic panacea or purely techno-pessimistic miasma, a more inclusive picture of transhumanist ontology emerges. Transhumanism can be both—cause and cure, danger and savior, threat and opportunity. Max More’s analysis, too, has a pharmacological flavor in that transhumanism, though committed to improving the human condition, has no illusions that, “The same powerful technologies that can transform human nature for the better could also be used in ways that, intentionally or unintentionally, cause direct damage or more subtly undermine our lives” (4).

Perhaps, then, More might agree that as a pharmakon, transhumanism is a Schrödinger’s cat always in a state of superposition—both alive and dead in the box. In the Copenhagen interpretation, a system stops being in a superposition of states and becomes either one or the other when an observation takes place. Transhumanism, too, is observer-dependent. For Ray Kurzweil, looking in the box, the cat is always alive with the techno-optimistic possibility of download into silicon and the singularity is near. For Ted Kaczynski, the cat is always dead, and it is worth killing in order to prevent its resurrection. Therefore, what the foregoing analysis suggests is that transhumanism is a drug—it is both remedy and poison—with the power to cure or the power to kill depending on who takes it. If the essence of transhumanism is elusive, it is precisely because it is a pharmakon cutting across categories ordinarily seen as mutually exclusive, forcing an ontological quest to conceptualize the in-between.

References

Derrida, Jacques. “Plato’s Pharmacy.” In Dissemination, translated by Barbara Johnson, 63-171. Chicago: University of Chicago Press, 1981.

Fuller, Steve. “Twelve Questions on Transhumanism’s Place in the Western Philosophical Tradition.” Social Epistemology Review and Reply Collective, 19 April 2017. http://wp.me/p1Bfg0-3yl.

Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow. HarperCollins, 2017.

Heidegger, Martin. “The Question Concerning Technology.” In Basic Writings, edited by David Farrell Krell. Harper & Row, 1977.

More, Max. “The Philosophy of Transhumanism,” In The Transhumanist Reader, edited by Max More and Natasha Vita-More, 3-17. Malden, MA: Wiley-Blackwell, 2013.

Rinella, Michael A. Pharmakon: Plato, Drug Culture, and Identity in Ancient Athens. Lanham, MD: Lexington Books, 2010.

The following is a set of questions concerning the place of transhumanism in the Western philosophical tradition that Robert Frodeman’s Philosophy 5250 class at the University of North Texas posed to Steve Fuller, who met with the class via Skype on 11 April 2017.

Shortlink: http://wp.me/p1Bfg0-3yl

Image credit: Joan Sorolla, via flickr

1. First a point of clarification: we should understand you not as a health span increaser, but rather as interested in infinity, or in some sense in man becoming a god? That is, H+ is a theological rather than practical question for you?

Yes, that’s right. I differ from most transhumanists in stressing that short term sacrifice—namely, in the form of risky experimentation and self-experimentation—is a price that will probably need to be paid if the long-term aims of transhumanism are to be realized. Moreover, once we finally make the breakthrough to extend human life indefinitely, there may be a moral obligation to make room for future generations, which may take the form of sending the old into space or simply encouraging suicide.

2. How do you understand the relationship between AI and transhumanism?

When Julian Huxley coined ‘transhumanism’ in the 1950s, it was mainly about eugenics, the sort of thing that his brother Aldous satirized in Brave New World. The idea was that the transhuman would be a ‘new and improved’ human, not so different from a new model car. (Recall that Henry Ford is the founding figure of Brave New World.) However, with the advent of cybernetics, also happening around the same time, the idea that distinctly ‘human’ traits might be instantiated in both carbon and silicon began to be taken seriously, with AI being the major long-term beneficiary of this line of thought. Some transhumanists, notably Ray Kurzweil, find the AI version especially attractive, perhaps because it caters to their ‘gnostic’ impulse to have the human escape all material constraints. In the transhumanist jargon, this is called ‘morphological freedom’, a sort of secular equivalent of pure spirituality. However, this is to take AI in a somewhat different direction from its founders in the era of cybernetics, which was about creating intelligent machines from silicon, not about transferring carbon-based intelligence into silicon form.

3. How seriously do you take talk (by Bill Gates and others) that AI is an existential risk?

Not very seriously—at least on its own terms. By the time some superintelligent machine might pose a genuine threat to what we now regard as the human condition, the difference between human and non-human will have been blurred, mainly via cyborg identities of the sort for which Stephen Hawking might end up being seen as a trailblazer. Whatever political questions would arise concerning AI at that point would likely divide humanity itself profoundly and not be a simple ‘them versus us’ scenario. It would be closer to the Cold War choice of Communism vs Capitalism. But honestly, I think all this ‘existential risk’ stuff gets its legs from genuine concerns about cyberwarfare. But taken on its face, cyberwarfare is nothing more than human-on-human warfare conducted by high tech means. The problem is still mainly with the people fighting the war rather than the algorithms that they program to create these latest weapons of mass destruction. I wonder sometimes whether this fixation on superintelligent machines is simply an indirect way to get humans to become responsible for their own actions—the sort of thing that psychoanalysts used to call ‘displacement behavior’ but the rest of us call ‘ventriloquism’.

4. If, as Socrates claims, to philosophize is to learn how to die, does H+ represent the end of philosophy?

Of course not! The question of death is just posed differently because even from a transhumanist standpoint, it may be in the best interest of humanity as a whole for individuals to choose death, so as to give future generations a chance to make their mark. Alternatively, and especially if transhumanists are correct that our extended longevity will be accompanied by rude health, then the older and wiser among us—and there is no denying that ‘wisdom’ is an age-related virtue—might spend their later years taking greater risks, precisely because they would be better able to handle the various contingencies. I am thinking that such healthy elderly folk might be best suited to interstellar exploration because of the ultra-high risks involved. Indeed, I could see a future social justice agenda that would require people to demonstrate their entitlement to longevity by documenting the increasing amount of risk that they are willing to absorb.

5. What of Heidegger’s claim that to be an authentic human being we must project our lives onto the horizon of our death?

I couldn’t agree more! Transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection. I think Heidegger and other philosophers have invested death with such great import simply because of its apparent irreversibility. However, if you want to recreate Heidegger’s sense of ‘ultimate concern’ in a post-death world, all you would need to do is find some irreversible processes and unrecoverable opportunities that even transhumanists acknowledge. A hint is that when transhumanism was itself resurrected in its current form, it was known as ‘extropianism’, suggesting an active resistance to entropy. For transhumanists—very much in the spirit of the original cybernetician, Norbert Wiener—entropy is the ultimate irreversible process and hence the ultimate challenge for the movement to overcome.

6. What is your response to Heidegger’s claim that it is in the confrontation with nothingness, in the uncanny, that we are brought back to ourselves?

Well, that certainly explains the phenomenon that roboticists call the ‘uncanny valley’, whereby people are happy to deal with androids until they resemble humans ‘a bit too much’, at which point people are put off. There are two sides to this response—not only that the machines seem too human but also that they are still recognized as machines. So the machines haven’t quite yet fooled us into thinking that they’re one of us. One hypothesis to explain the revulsion is that such androids appear to be like artificially animated dead humans, a bit like Frankenstein. Heideggerians can of course use all this to their advantage to demonstrate that death is the ultimate ‘Other’ to the human condition.

7. Generally, who do you think are the most important thinkers within the philosophic tradition for thinking about the implications of transhumanism?

Most generally, I would say the Platonic tradition, which has been most profound in considering how the same form might be communicated through different media. So when we take seriously the prospect that the ‘human’ may exist in carbon and/or silicon and yet remain human, we are following in Plato’s footsteps. Christianity holds a special place in this line of thought because of the person of Jesus Christ, who is somehow at once human and divine in equal and all respects. The branch of theology called ‘Christology’ is actually dedicated to puzzling over these matters, various solutions to which have become the stuff of science fiction characters and plots. St Augustine originally made the problem of Christ’s identity a problem for all of humanity when he leveraged the Genesis claim that we are created in the ‘image and likeness of God’ to invent the concept of ‘will’ to name the faculty of free choice that is common to God and humans. We just exercise our wills much worse than God exercises his, as demonstrated by Adam’s misjudgment, which started Original Sin (an Augustinian coinage). When subsequent Christian thinkers have said that ‘the flesh is weak’, they are talking about how humanity’s default biological condition holds us back from fully realizing our divine potential. Kant acknowledged as much in secular terms when he explicitly defined the autonomy necessary for truly moral action in terms of resisting the various paths of least resistance put before us. These are what Christians originally called ‘temptations’, Kant himself called ‘heteronomy’ and Herbert Marcuse in a truly secular vein would later call ‘desublimation’.

One worry that arises from the Transhumanism project (especially about gene editing, growing human organs in animals, etc.) regards the treatment of human enhancements as “commercial products”. In other words, the worry concerns the (further) commodification of life. Does this concern you? More generally, doesn’t H+ imply a perverse instrumentalization of our being?

My worries about commodification are less to do with the process itself than with the fairness of the exchange relations in which the commodities are traded. Influenced by Locke and Nozick, I would draw a strong distinction between alienation and exploitation, which tends to be blurred in the Marxist literature. Transhumanism arguably calls for an alienation of the body from human identity, in the sense that your biological body might be something that you trade for a silicon upgrade, yet your humanity remains intact on both sides of the transaction, at least in terms of formal legal recognition. Historic liberal objections to slavery rested on a perceived inability to do this coherently. Marxism upped the ante by arguing that the same objections applied to wage labor under the sort of capitalism promoted by the classical political economists of Marx’s day, who saw themselves as scientific underwriters of the new liberal order emerging in post-feudal Europe. However, the force of the Marxist objections rests on alienation being linked to exploitation. In other words, not only am I free to sell my body or labor, but you are also free to offer whatever price serves to close the sale. However, the sorts of power imbalances which lie behind exploitation can be—and have been—addressed in various ways. Admittedly more work needs to be done, but a time will come when alienation is simply regarded as a radical exercise of freedom—specifically, the freedom to, say, project myself as an avatar in cyberspace or, conversely, convert part of my being to property that can be traded for something that may in turn enhance my being.

Robert Nozick paints a possible scenario in Anarchy, State, and Utopia where he describes a “genetic supermarket” in which we can choose our genes just as one selects a frozen pizza. Nozick’s scenario implies a world where human characteristics are treated in the way we treat other commercial products. In the Transhuman worldview, is the principal or ultimate value of life commercial?

There is something to that, in the sense that anything that permits discretionary choice will lend itself to commercialization unless the state intervenes—but I believe that the state should intervene and regulate the process. Unfortunately, from a PR standpoint, a hundred years ago that was called ‘eugenics’. Nevertheless, people in the future may need to acquire a license to procreate, constraints may even be put on the sorts of offspring that are and are not permissible, and people may even be legally required to undergo periodic forms of medical surveillance—at least as a condition of employment or welfare benefits. (Think Gattaca as a first pass at this world.) It is difficult to see how an advanced democracy that acknowledges already existing persistent inequalities in life-chances could agree to ‘designer babies’ without also imposing the sort of regime that I am suggesting. Would this unduly restrict people’s liberty? Perhaps not, if people will have acquired the more relaxed attitude to alienation, as per my answer to the previous question. However, the elephant in the room—and the one which, as I argued in The Proactionary Imperative, is more important—is liability. In other words, who is responsible when things go wrong in a regime which encourages people to experiment with risky treatments? This is something that should focus the minds of lawyers and insurers, especially in a world where people are presumed to be freer per se because they have freer access to information.

10. Is human enhancement consistent with other ways in which people modify their lifestyles, that is, are they analogous in principle to buying a new cell phone, learning a language or working out? Is it a process of acquiring ideas, goods, assets, and experiences that distinguish one person from another, either as an individual or as a member of a community? If not, how is human enhancement different?

‘Human enhancement’, at least as transhumanists understand the phrase, is about ‘morphological freedom’, which I interpret as a form of ultra-alienation. In other words, it’s not simply about people acquiring things, including prosthetic extensions, but also converting themselves to a different form, say, by uploading the contents of one’s brain into a computer. You might say that transhumanism’s sense of ‘human enhancement’ raises the question of whether one can be at once trader and traded in a way that enables the two roles to be maintained indefinitely. Classical political economy seemed to imply this, but Marx denied its ontological possibility.

The thrust of 20th-century Western philosophy could be articulated in terms of the striving for possible futures, whether that future be Marxist, Fascist, or another ideologically utopian scheme, and the philosophical fallout of coming to terms with their successes and failures. In our contemporary moment, it appears as if widespread enthusiasm for such futures has disappeared, as the future itself seems as fragmented as our society. H+ is a new, similar effort; but it seems to be a specific evolution of this futurism, focused not on a society but on the human person (even specific human persons). Comments?

In terms of how you’ve phrased your question, transhumanism is a recognizably utopian scheme in nearly all respects—including the assumption that everyone would find its proposed future intrinsically attractive, even if people disagree on how or whether it might be achieved. I don’t see transhumanism as so different from capitalism or socialism as pure ideologies in this sense. They all presume their own desirability. This helps to explain why people who don’t agree with the ideology are quickly diagnosed as somehow mentally or morally deficient.

A common critique of Heidegger’s thought comes from an ethical turn in Continental philosophy. While Heidegger understands death to be the harbinger of meaning, he means specifically and explicitly one’s own death. Levinas, however, maintains that the primary experience of death that does this work is the death of the Other. One’s experience with death comes to one through the death of a loved one, a friend, a known person, or even through the distant reality of a war or famine across the world. In terms of this critique, the question of transhumanism then leads to a socio-ethical concern: if one, using H+ methods, technologies, and enhancements, can significantly inoculate oneself against the threat of death, how ethically (in the Levinasian sense) can one then legitimately live in relation to others in a society, if the threat of the death of the Other no longer provides one the primal experience of the threat of death?

Here I’m closer to Heidegger than Levinas in terms of grounding intuition, but my basic point would be that an understanding of the existence and significance of death is something that can be acquired without undergoing a special sort of experience. Phenomenologically inclined philosophers sometimes seem to assume that a significant experience must happen significantly. But this is not true at all. My main understanding of death as a child came not from people I know dying, but simply from watching the morning news on television and learning about the daily body count from the Vietnam War. That was enough for me to appreciate the gravity of death—even before I started reading the Existentialists.

Editor’s Note:

    The following are elements of syllabi for a graduate and an undergraduate course taught by Robert Frodeman in spring 2017 at the University of North Texas. These courses offer an interesting juxtaposition of texts aimed at reimagining how to perform academic philosophy as “field philosophy”. Field philosophy seeks to address meaningfully, and demonstrably, contemporary public debates, regarding transhumanism for example, while attending to the shifting ideas and frameworks of both the Humboldtian university and the “new American” university.

Shortlink: http://wp.me/p1Bfg0-3xB

Philosophy 5250: Topics in Philosophy

Overall Theme

This course continues my project of reframing academic philosophy within the approach and problematics of field philosophy.

In terms of philosophic categories, we will be reading classics in 19th and 20th century continental philosophy: Hegel, Nietzsche, and Heidegger. But we will be approaching these texts with an agenda: to look for insights into a contemporary philosophical controversy, the transhumanist debate. This gives us two sets of readings – our three authors, and material from the contemporary debate surrounding transhumanism.

Now, this does not mean that we will restrict our interest in our three authors to what is applicable to the transhumanist debate; our thinking will go wherever our interests take us. But the topic of transhumanism will be primus inter pares.

Readings

  • Hegel, Phenomenology of Spirit, Preface
  • Hegel, The Science of Logic, selections
  • Heidegger, Being and Time, Division 1, Macquarrie translation
  • Heidegger, ‘The Question Concerning Technology’
  • Nietzsche, selections from Thus Spoke Zarathustra and Beyond Good and Evil

Related Readings

Grading

You will have two assignments, both due at the end of the semester. I strongly encourage you to turn in drafts of your papers.

  • A 2500 word paper on a major theme from one of our three authors.
  • A 2500 word paper using our three authors to illuminate your view of the transhumanist challenge.

Philosophy 4750: Philosophy and Public Policy

Overview

This is a course in meta-philosophy. It seeks to develop a philosophy adequate for the 21st century.

Academic philosophy has been captured by a set of categories (ancient, modern, contemporary; ethics, logic, metaphysics, epistemology) that are increasingly dysfunctional for contemporary life. Therefore, this is not merely a course on a specific subject matter (i.e., ‘public policy’) to be added to the rest. Rather, it seeks to question, and philosophize about, the entire knowledge enterprise as it exists today – and to philosophize about the role of philosophy in understanding and perhaps (re)directing the knowledge enterprise.

The course will cover the following themes:

  • The past, present, and future of the university in the Age of Google
  • The end of disciplinarity and the rise of accountability culture
  • The New Republic of Letters and the role of the humanist today
  • The failure of applied philosophy and the development of alternative models

Course Structure

This course is ‘live’: it reflects 20 years of my research on the place of philosophy in contemporary society. As such, the course embodies a Humboldtian connection between teaching and research: I am not simply a teacher and a researcher; I’m a teacher-researcher who shares the insights I’m developing with students, testing my thinking in the classroom, and sharing my freshest thoughts. This breaks with the corporate model of education where the professor is an interchangeable cog, teaching the same materials that could be gotten at any university worldwide – while also opening me up to charges of self-indulgence.

Readings

  • Michael M. Crow and William B. Dabars, Designing the New American University
  • Crow chapter in HOI
  • Clark, Academic Charisma
  • Fuller, The Academic Caesar
  • Rudy, The Universities of Europe, 1100-1914
  • Fuller, Sociology of Intellectual Life
  • Smith, Philosophers 6 Types
  • Socrates Tenured: The Institutions of 21st Century Philosophy
  • Plato, The Republic, Book 1

Author Information: Jason M. Pittman, Capitol Technology University, jmpittman@captechu.edu

Pittman, Jason M. “Trust and Transhumanism: An Analysis of the Boundaries of Zero-Knowledge Proof and Technologically Mediated Authentication.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 21-29.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3tZ



Image credit: PhOtOnQuAnTiQu, via flickr

Abstract

Zero-knowledge proof serves as the fundamental basis for technological concepts of trust. The most familiar applied solution of technological trust is authentication (human-to-machine and machine-to-machine), most typically a simple password scheme. Further, by extension, much of society-generated knowledge presupposes the immutability of such a proof system when ontologically considering (a) the verification of knowing and (b) the amount of knowledge required to know. In this work, I argue that the zero-knowledge proof underlying technological trust may cease to be viable upon realization of partial transhumanism in the form of embedded nanotechnology. Consequently, existing normative social components of knowledge—chiefly, verification and transmission—may be undermined. In response, I offer recommendations on potential society-centric remedies in partial trans-humanistic technologically mediated realities with the goal of preserving technological trust.

Password-based authentication features prominently in daily life. For many of us, authentication is a ritual repeated many times on any given day as we enter a username and password into various computing systems. In fact, research (Florêncio & Herley, 2007; Sasse, Steves, Krol, & Chisnell, 2014) revealed that we, on average, enter approximately eight different username and password combinations as many as 23 times a day. The number of times a computing system authenticates to another system is even more frequent. Simply put, authentication is normative in modern, technologically mediated life.

Indeed, authentication has been the normal modality of establishing trust within the context of technology (and, by extension, technology-mediated knowledge) for several decades. Over the course of these decades, researchers have uncovered a myriad of flaws in specific manifestations of authentication—weak algorithms, buggy software, or even the psychological and cognitive limits of the human mind. Upon closer inspection, however, one can surmise that the philosophy associated with passwords has not changed. Authentication continues to operate on the fundamental paradigm of a secret, a knowledge-prover, and a knowledge-verifier. The epistemology related to password-based authentication—how the prover establishes possession of the secret such that the verifier can trust the prover without the prover revealing the secret—presents a future problem.
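To make the secret/prover/verifier paradigm concrete, here is a minimal sketch, assuming Python and helper names of my own invention (enroll, authenticate), of the familiar salted-hash arrangement: the verifier stores only a derived value of the secret, and the prover later demonstrates possession by presenting the password again for the verifier to re-derive and compare. Note that in this everyday scheme the prover still hands the secret itself to the verifier at login; avoiding even that disclosure is the point of the zero-knowledge proofs discussed below.

    import hashlib, hmac, secrets

    def enroll(password: str):
        # The verifier stores only a random salt and a slow, salted hash of the secret.
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
        # The prover re-presents the secret; the verifier re-derives and compares in constant time.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, stored)

    salt, stored = enroll("correct horse battery staple")        # enrollment
    assert authenticate("correct horse battery staple", salt, stored)
    assert not authenticate("wrong guess", salt, stored)

In this arrangement, trust rests on the verifier's stored digest and on the secrecy of the transmitted password, not on any proof that withholds the secret from the verifier.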

A Partial Transhuman Reality

While some may consider transhumanism to be the province of science fiction, others, such as Kurzweil (2005), argue that the merging of Man and Machine has already begun. Of notable interest in this work is partial-transhumanist nanotechnology or, in simple terms, the embedding of microscopic computing systems in our bodies. Such nanotechnology need not be fully autonomous but typically does include some computational sensing ability. The most advanced examples are the nanomachines used in medicine (Verma, Vijaysingh, & Kushwaha, 2016). Nevertheless, such nanotechnology represents the blueprint for rapid advancement. In fact, research is well underway on using nanomachines (or nanites) for enhanced cognitive computations (Fukushima, 2016).

At the crossroads of partial transhumanism (nanotechnology) and authentication there appears to be a deeper problem. In short, partial-transhumanism may obviate the capacity for a verifier to trust whether a prover, in truth, possesses a secret. Should a verifier not be able to trust a prover, the entirety of authentication may collapse.

Much research exists that investigates the mathematical, psychological, and technological bases for authentication, but there has been little philosophical exploration of authentication. Work such as that of Qureshi, Younus, and Khan (2009) developed a general philosophical overview of password-based authentication but largely focused on developing a philosophical taxonomy to overlay modern password technology. The literature extending Qureshi et al. builds exclusively upon the strictly technical side of password-based authentication, ignoring the philosophical.

Accordingly, the purpose of this work is to describe the concepts directly linked to modern technological trust in authentication and to demonstrate how, in a partial transhumanist reality, the concepts of zero-knowledge proof may cease to be viable. Towards this end, I will describe the conceptual framework underlying the operational theme of this work. Then, I explore the abstraction of technological trust as it relates to understanding proof of knowledge. This understanding of where trust fits into normative social epistemology will inform the subsequent description of the problem space. After that, I move on to describe the conceptual architecture of zero-knowledge proofs, which serve as the pillars of modern authentication, and how transhumanism may adversely impact them. Finally, I will present recommendations on possible society-centric remedies in both partial trans-humanistic and full trans-humanistic technologically mediated realities with the goal of preserving technological trust.

Conceptual Framework

Establishing a conceptual framework before delving too far into the case for trust ceasing to be viable in a partial transhumanist reality will permit a deeper understanding of the issue at hand. Such a frame of reference must include a discussion of how technology inherently mediates our relationships with other humans and technologies. Put another way, technologies are unmistakably involved in human subjectivity while human subjectivity forms the concept of technology (Kiran & Verbeek, 2010). This, though, presupposes a grasp of the technological abstraction itself.

Broadly, technology in the context of this work is taken to mean qualitative (abstract) applied science, as opposed to the practical or quantitative application of science. This definition follows closely from recent discussions of technology by Scalambrino (2016) and from the body of work by Heidegger and Plato. In other words, technology should be understood as those modalities that facilitate progress relative to socially beneficial objectives. Specifically, we are concerned with the knowledge modality as opposed to discrete mechanisms, objects, or devices.

What is more, the adjoining of technology, society, and knowledge is a critical element in the conceptual framework for this work. Technology is no longer a single-use, individualized object. Instead, technology is a social arbiter that has grown innate to what Ihde (1990) related as a normative human gestalt. While this view contrasts with views such as that offered by Feenberg (1999), the two are not necessarily exclusive.

Further, we must establish the component of our conceptual framework that evidences what it means to verify knowledge. One approach is a scientific model that procedurally quantifies knowledge within a predefined structure. Given the technological nature of this work, such a model may be inescapable, at least as a cognitive bias. More abstractly, though, verification of knowledge is conducted by inference, whether by the individual or across social collectives. The mechanism of inference, in turn, can be expressed in proof. Related to inference through proof, another component of our conceptual framework concerns the amount of knowledge necessary to demonstrate knowing. As I discuss later, the amount of knowing is either full or limited; that is, proof with knowledge or proof without knowledge.

Technological Trust

The connection between knowledge and trust has a long history of debate in the social epistemic context. This work is not intended to add directly to the debate surrounding trust. However, recognizing the debate is necessary to build the bridge connecting trust and zero-knowledge proofs before moving on to zero-knowledge proof and authentication. Further, conceptualizing technological trust permits the construction of a foundation for the central proposition of this work.

To the point, Simon (2013) argued that knowledge relies on trust. McCraw (2015) extended this claim by establishing four components of epistemic trust: belief, communication, reliance, and confidence. These components are further grouped into epistemic (belief and communication) and trust (reliance and confidence) conditions (McCraw, 2015). Trust, in this context, exemplifies the social aspect of knowledge insofar as we do not directly experience trust but hold it as valid because of the collective’s position that it is valid.

Furthermore, Simmel (1978) perceived trust to be integral to society. That is, trust, as a knowledge construct, exists in many disciplines and, per Origgi (2004), permeates our cognitive existence. Additionally, there is an argument to be made that, by using technology, we implicitly place trust in that technology (Kiran & Verbeek, 2010). Nonetheless, trust we do.

Certainly, part of such trust is due to the mediation provided by our ubiquitous technology. As well, trust in technology and trust from technology are integral to modern social perspectives. On the other hand, we must be cautious in understanding the conditions that lead to technological trust. Work by Ihde (1979; 1990) and others has suggested that technological trust stems from our relation to the technology. Perhaps closer to transhumanism, Lévy (1998) offered that such trust is more associated with technology that extends us.

Technology that extends human capacity is thus a principal abstraction in what follows. Concomitant with technological trust is knowledge. Because the conceptual framework for this work includes both the verification of knowledge and the amount of knowledge necessary to evidence knowing, knowledge proofs must also enter the discourse.

Zero-Knowledge Proof

Proof of knowledge is a logical extension of the discussion of trust. Where trust can be thought of as the mechanism through which we allow technology to mediate reality, proof of knowledge is how we come to trust specific forms of technology. In turn, proof of knowledge, and specifically zero-knowledge proof, provides a foundation for trust in technological mediation in the general case and technological authentication in the specific case.

The Nature of Proof

The construct of proof may take on different meanings depending upon the enveloping context. In the context of this work, we use the operational meaning provided by Pagin (1994): a proof is established in the process of validating the correctness of a proposition. Furthermore, for any proof to be perceived as valid, it must demonstrate completeness and soundness (Pagin, 1994; 2009).

There is, of course, a larger discourse on the epistemic constraints of proof (Pagin, 1994; Williamson, 2002; Marton, 2006). That discourse lies outside the scope of this work, however, because we are not concerned with whether proof can be offered for knowledge but rather with how proof occurs. In other words, we are interested in the mechanism of proof. Thus, for our purposes, we presuppose that proof of knowledge is possible and occurs through two operations: proof with knowledge and proof without knowledge.

Proof with Knowledge

A consequence of a typical proof system is that all involved parties gain knowledge. That is, if I know that x exists in a specific truth condition, I must present all relevant premises so that you can reach the same conclusion. Thus, not only is the proposition true or false to us both equally, but the means of establishing its truth or falsehood is transparent. This is what can be referred to as proof with knowledge.

In most scenarios, proof with knowledge is a positive mechanism: the parties involved mutually benefit from the outcome. Mathematics and logic are primary examples of this proof state. However, when considering technological trust in the form of authentication, proof with knowledge is not desirable.

Proof Without Knowledge

Imagine that you know that p is true. Further, you wish to demonstrate to me that you know this without revealing how you came to know it or what exactly it is that you know. In other words, you wish to keep some aspect of the knowledge secret. I must validate that you know p without gaining any knowledge myself. This is the second state of proof, known as zero-knowledge proof, and it forms the basis for technological trust in the form of authentication.

Goldwasser, Micali, and Rackoff (1989) defined zero-knowledge proofs as a formal, systematic approach to validating the correctness of a proposition without communicating additional knowledge. “Additional” in this context can be taken to mean knowledge other than the proposition itself. An important aspect is that the proposition originates with a verifier entity as opposed to a prover entity. In response to the proposition to be proven, the prover completes an action without revealing any knowledge to the verifier other than the knowledge that the action was completed. If the action establishes the proposition with sufficiently high probability, the verifier is satisfied. Note that the verifier and prover can interact machine-to-human, human-to-human, or machine-to-machine.
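
To illustrate the prover–verifier interaction that Goldwasser, Micali, and Rackoff formalize, the sketch below implements a single round of a Schnorr-style identification protocol, a standard textbook example of proving possession of a secret without revealing it. The tiny group parameters, function names, and single-round structure are simplifications of my own, not part of the original formalism; a deployed protocol uses very large parameters (or many rounds) so that a prover who lacks the secret can satisfy the verifier only with negligible probability.

```python
import secrets

# Toy parameters: g = 2 generates a subgroup of prime order q = 11 in Z_23*.
# These numbers are purely illustrative; real protocols use very large groups.
p, q, g = 23, 11, 2

def prover_commit():
    """Prover commits to a random nonce without revealing the secret."""
    r = secrets.randbelow(q)
    return r, pow(g, r, p)

def prover_respond(r, challenge, secret_x):
    """Response blends the nonce, the verifier's challenge, and the secret."""
    return (r + challenge * secret_x) % q

def verifier_check(commitment, challenge, response, public_y):
    """Verifier learns only that the prover could answer the challenge."""
    return pow(g, response, p) == (commitment * pow(public_y, challenge, p)) % p

secret_x = 7                        # known only to the prover
public_y = pow(g, secret_x, p)      # published by the prover

r, t = prover_commit()              # 1. commitment
c = secrets.randbelow(q)            # 2. verifier's random challenge
s = prover_respond(r, c, secret_x)  # 3. response
assert verifier_check(t, c, s, public_y)
```

The verifier sees only the commitment, the challenge, and the response; none of these, individually or together, reveal the secret exponent, yet a prover without it cannot reliably produce a response that passes the check.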

Zero-knowledge proofs are the core of technological trust and, accordingly, of authentication. While discrete instances of authentication exist practically outside the social epistemic purview, the broader theory of authentication is, in fact, a socially collective phenomenon. That is, even in the abstract, authentication is a specific case of technologically mediated trust.

Authentication

The zero-knowledge proof abstraction translates directly into modern authentication modalities. In general, authentication involves a verifier issuing a request to prove knowledge and a prover establishing knowledge to the verifier by means of a secret. Thus, the ability to provide such proof in a manner consistent with the verifier’s request is technologically sufficient to authenticate (Syverson & Cervesato, 2000). However, there are subtleties within the authentication zero-knowledge proof that warrant discussion.
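
A hedged sketch of the request-and-prove exchange just described, using a shared-secret challenge-response scheme: the verifier issues a fresh random challenge, and the prover answers with a keyed hash of it. This simplification of my own assumes both parties were provisioned with the secret in advance, so it is not zero-knowledge in the strict cryptographic sense, but it shows how proof can satisfy the verifier’s request without the secret itself crossing the channel.

```python
import hashlib
import hmac
import secrets

shared_secret = secrets.token_bytes(32)  # provisioned out of band to both parties

def verifier_issue_challenge() -> bytes:
    """The verifier's request to prove knowledge: a fresh random nonce."""
    return secrets.token_bytes(16)

def prover_answer(secret: bytes, challenge: bytes) -> bytes:
    """The prover demonstrates possession of the secret without transmitting it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verifier_accept(secret: bytes, challenge: bytes, answer: bytes) -> bool:
    """The verifier recomputes the expected answer and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, answer)

challenge = verifier_issue_challenge()
answer = prover_answer(shared_secret, challenge)
assert verifier_accept(shared_secret, challenge, answer)
```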

Authentication, or being authenticated, implies two technologically mediated realities. First, the authentication process relies upon the authenticating entity (i.e., the prover) exclusively possessing a secret. The mediated reality for both the verifier and the prover is that to be authenticated implies an identity. In simple terms, I am who I claim to be based on (a) exclusive possession of the secret and (b) the ability to sufficiently demonstrate that possession to the verifier through the zero-knowledge proof. Likewise, the verifier is identified to the prover.

Second, authentication establishes a general right of access for the prover based on, again, possession of an exclusive secret. Consequently, there is a technological mediation of what objects are available to the prover once authenticated (i.e., all authorized objects) or not authenticated (i.e., no objects). Thus, the zero-knowledge proof is a mechanism for associating the prover’s identity with a set of objects in the world and facilitating access to those objects. That is to say, once authenticated, the identity has operational control over the linked objects within the corresponding space.

Normatively, authentication is a socially collective phenomenon despite individual authentication relying upon an exclusive zero-knowledge proof (Van Der Meyden & Wilke, 2007). Principally, authentication is a means of interacting with other humans, technology, and society at large while maintaining trust. However, if authentication is a manifestation of technological trust, one must wonder whether transhumanism may affect the zero-knowledge proof abstraction.

Transhumanism

More (1990) described transhumanism as a philosophy that embraces the profound changes to society and the individual brought about by science and technology. There is strong debate as to when such change will occur, although most futurists argue that technology has already crossed the threshold into explosive growth. Technology in this context aligns with the conceptual framework of this work. As well, there is agreement in the philosophical literature that such technological expansion is underway (Bostrom, 1998; More, 2013).

Furthermore, transhumanism exists in two forms: partial transhumanism and full transhumanism (Kurzweil, 2005). This work is concerned exclusively with partial transhumanism, which is inclusive of three modalities. According to Kurzweil (2005), these modalities are (a) technology sufficient to manipulate human life genetically; (b) nanotechnology; and (c) robotics. In the context of this work, I am interested in the potentiality of nanotechnology.

Briefly, nanotechnology exists in several forms. The form central to this work involves embedding microscopic machines within human biology. These machines can perform any number of operations, including augmenting existing bodily systems. Along these lines, Vinge (1993) argued that a by-product of technological expansion will be a monumental increase in human intelligence. Although there are a variety of mechanisms by which technology might amplify raw brainpower, nanotechnology is a forerunner in the minds of Kurzweil and others.

What is more, the computational power of nanites is measurable and predictable (Chau et al., 2005; Bhore, 2016). The increase in human intellectual capacity projected to result from nanotechnology may be sufficient to impart hyper-cognitive or even extrasensory abilities. With such augmentation, the human mind would be capable of computational decision-making well beyond existing technology.

While the notion of nanites embedded in our bodies, augmenting various biomechanical systems to the point of precognitive awareness of zero-knowledge proof verification, may strike some as science fiction, there is growing precedent. Existing research in medicine demonstrates that at least partially autonomous nanites have a grounding in reality (Huilgol & Hede, 2006; Das et al., 2007; Murday, Siegel, Stein, & Wright, 2009). Thus, it is not difficult to envision a near future in which more powerful and autonomous nanites are available.

Technological Trust in Authentication

The purpose of this work was to describe technological trust in authentication and to demonstrate how, in a future partial transhumanist reality, the concepts of zero-knowledge proof may cease to be viable. Towards that end, I examined technological trust in the context of how and why such trust is established. Further, knowledge proofs were discussed with an emphasis on proofs without knowledge. That discussion led to an overview of authentication and, subsequently, of transhumanism.

Based on the analysis so far, the technological trust afforded by such proof appears to be no longer feasible once embedded nanotechnology is introduced into humans. Nanite-augmented cognition would give a knowledge-prover the capability to compute, on demand, knowledge sufficient to convince a knowledge-verifier. Such a reality outright breaks the latent assumptions that operationalize the conceptual framework into related technology. That is, once the knowledge-verifier cannot trust that the knowledge is actually known by the prover, a significant future problem arises.
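
The worry can be made concrete with a deliberately crude analogy (my own illustration, not a model of nanite computation). In the sketch below, a prover who never possessed the secret nonetheless satisfies the verifier by exhaustively searching a tiny secret space; the hypothesized augmentation matters because it would shrink the gap between the secret spaces authentication relies on and the search a prover could perform on demand.

```python
import hashlib
import itertools
import string

def derive(secret: str, salt: bytes) -> str:
    # A single fast hash keeps the illustration quick; real schemes use slow,
    # salted key-derivation functions precisely to resist this kind of search.
    return hashlib.sha256(salt + secret.encode()).hexdigest()

# Public record held by the verifier; the legitimate prover's secret is "q7x".
salt = b"illustrative-salt"
stored = derive("q7x", salt)

def augmented_prover(stored_digest: str, salt: bytes, max_len: int = 3):
    """Convinces the verifier by exhaustive computation, not prior knowledge."""
    alphabet = string.ascii_lowercase + string.digits
    for length in range(1, max_len + 1):
        for candidate in itertools.product(alphabet, repeat=length):
            guess = "".join(candidate)
            if derive(guess, salt) == stored_digest:
                return guess
    return None

print(augmented_prover(stored, salt))  # prints "q7x"
```

The point of the analogy is not the specific attack, which is already well understood, but the epistemic consequence: once possession of the secret can be computed rather than known, the verifier can no longer infer identity from a successful proof.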

Unfortunately, the fields of computer science and computer engineering have not historically planned well for paradigm-shifting innovations. The difficulty is exacerbated when the paradigm shift has a rapid onset after a long ramp-up time, as is the case with the technological singularity. More specifically, partial transhumanism as considered in this work may have unforeseen effects beyond the scope of the fields that created the technology in the first place. The inability to handle rapid shifts is largely related to these fields posing “what is” type questions.

Similarly, the Collingridge dilemma tells us that, “…the social consequences of a technology cannot be predicted early in the life of the technology” (1980, p. 11). Thus, adequate preparation for the eventual collapse of zero-knowledge proof requires asking what ought to be, and that is a philosophical question. As it stands, recognition of social epistemology as an interdisciplinary field already exists (Froehlich, 1989; Fuller, 2005; Zins, 2006). More still, there is precedent for philosophy informing the science of technology (Scalambrino, 2016) and assembling the foundation of future-looking paradigm shifts.

Accordingly, one recommendation is for social epistemologists and technologists to jointly examine modifications to the abstract zero-knowledge proof such that the proof is resilient to nanite-powered knowledge computation. In conjunction, there may be benefit in conceiving a replacement proof system that also harnesses partial transhumanism for the knowledge-verifier, in a manner commensurate with any increase in capacity for the knowledge-prover. Lastly, a joint effort may be able to envision a technologically mediated construct that does not require proof without knowledge at all.

References

Bhore, Pratik Rajan. “A Survey of Nanorobotics Technology.” International Journal of Computer Science & Engineering Technology 7, no. 9 (2016): 415-422.

Bostrom, Nick. “Predictions from Philosophy? How Philosophers Could Make Themselves Useful.” 1998. http://www.nickbostrom.com/old/predict.html

Chau, Robert, Suman Datta, Mark Doczy, Brian Doyle, Ben Jin, Jack Kavalieros, Amlan Majumdar, Matthew Metz and Marko Radosavljevic. “Benchmarking Nanotechnology for High-Performance and Low-Power Logic Transistor Applications.” IEEE Transactions on Nanotechnology 4, no. 2 (2005): 153-158.

Collingridge, David. The Social Control of Technology. New York: St. Martin’s Press, 1980.

Das, Shamik, Alexander J. Gates, Hassen A. Abdu, Garrett S. Rose, Carl A. Picconatto, and James C. Ellenbogen. “Designs for Ultra-Tiny, Special-Purpose Nanoelectronic Circuits.” IEEE Transactions on Circuits and Systems I: Regular Papers 54, no. 11 (2007): 2528–2540.

Feenberg, Andrew. Questioning Technology. London: Routledge, 1999.

Florencio, Dinei and Cormac Herley. “A Large-Scale Study of Web Password Habits.” In WWW ’07: Proceedings of the 16th International Conference on World Wide Web, 657-666. 2007.

Froehlich, Thomas J. “The Foundations of Information Science in Social Epistemology.” In Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences, Vol. IV: Emerging Technologies and Applications Track (1989): 306-314.

Fukushima, Masato. “Blade Runner and Memory Devices: Reconsidering the Interrelations between the Body, Technology, and Enhancement.” East Asian Science, Technology and Society 10, no. 1 (2016): 73-91.

Fuller, Steve. “Social Epistemology: Preserving the Integrity of Knowledge About Knowledge.” In Handbook on the Knowledge Economy, edited by David Rooney, Greg Hearn and Abraham Ninan, 67-79. Cheltenham, UK: Edward Elgar, 2005.

Goldwasser, Shafi, Silvio M. Micali and Charles Rackoff. “The Knowledge Complexity of Interactive Proof Systems.” SIAM Journal on Computing 18, no. 1 (1989): 186-208.

Huilgol, Nagraj and Shantesh Hede. “ ‘Nano’: The New Nemesis of Cancer.” Journal of Cancer Research and Therapeutics 2, no. 4 (2006): 186–95.

Ihde, Don. Technics and Praxis. Dordrecht: Reidel, 1979.

Ihde, Don. Technology and the Lifeworld. From Garden to Earth. Bloomington: Indiana University Press, 1990.

Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York: Penguin Books. 2005.

Lévy, Pierre. Becoming Virtual. Reality in the Digital Age. New York: Plenum Trade, 1998.

Marton, Pierre. “Verificationists Versus Realists: The Battle Over Knowability.” Synthese 151, no. 1 (2006): 81-98.

More, Max. “Transhumanism: Towards a Futurist Philosophy.” Extropy, 6 (1990): 6-12.

More, Max. “The Philosophy of Transhumanism.” In The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, edited by Max More and Natasha Vita-More. Oxford: John Wiley & Sons, 2013. doi:10.1002/9781118555927.ch1

Murday, J. S., R. W. Siegel, J. Stein, and J. F. Wright. “Translational Nanomedicine: Status Assessment and Opportunities.” Nanomedicine: Nanotechnology, Biology and Medicine 5, no. 3 (2009): 251–273. doi:10.1016/j.nano.2009.06.001

Origgi, Gloria. “Is Trust an Epistemological Notion?” Episteme 1, no. 1 (2004): 61-72.

Pagin, Peter. “Knowledge of Proofs.” Topoi 13, no. 2 (1994): 93-100.

Pagin, Peter. “Compositionality, Understanding, and Proofs.” Mind 118, no. 471 (2009): 713-737.

Qureshi, M. Atif, Arjumand Younus and Arslan Ahmed Khan. “Philosophical Survey of Passwords.” International Journal of Computer Science Issues 1 (2009): 8-12.

Sasse, M. Angela, Michelle Steves, Kat Krol, and Dana Chisnell. “The Great Authentication Fatigue – And How To Overcome It.” In Cross-Cultural Design: 6th International Conference, CCD 2014, Held as Part of HCI International 2014, Heraklion, Crete, Greece, June 22-27, 2014, Proceedings, edited by P. L. P. Rau, 228-239. Cham, Switzerland: Springer International Publishing, 2014.

Scalambrino, Frank. Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation. London; New York: Rowman & Littlefield International, 2016.

Simmel, Georg.  The Philosophy of Money. London: Routledge and Kegan Paul, 1978.

Simon, Judith. “Trust, Knowledge and Responsibility in Socio-Technical Systems.” University of Vienna and Karlsruhe Institute of Technology, 2013. https://www.iiia.csic.es/en/seminary/trust-knowledge-and-responsibility-socio-technical-systems

Syverson, Paul and Iliano Cervesato. “The Logic of Authentication Protocols.” In Foundations of Security Analysis and Design: Tutorial Lectures, FOSAD 2000, 63-136. London: Springer-Verlag, 2001.

Williamson, Timothy. Knowledge and Its Limits. Oxford: Oxford University Press, 2002.

Van Der Meyden, Ron and Thomas Wilke. “Preservation of Epistemic Properties in Security Protocol Implementations.” In Proceedings of the 11th Conference on Theoretical Aspects of Rationality and Knowledge (2007): 212-221.

Vinge, Verner. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Proceedings of a symposium cosponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute and held in Westlake, Ohio, March 30-31, 1993, NASA Conference Publication 10129 (1993): 11-22.

Verma, S., K. Vijaysingh and R. Kushwaha. “Nanotechnology: A Review.” In Proceedings of the Emerging Trends in Engineering & Management for Sustainable Development, Jaipur, India, 19–20 February 2016.

Zins, Chaim. “Redefining Information Science: From ‘Information Science’ to ‘Knowledge Science’.” Journal of Documentation 62, no. 4 (2006): 447-461.

Justin Cruickshank at the University of Birmingham was kind enough to alert me to Steve Fuller’s talk “Transhumanism and the Future of Capitalism”—held by The Philosophy of Technology Research Group—on 11 January 2017.

Author Information: Mark Shiffman, Villanova University, mark.shiffman@villanova.edu

Shiffman, Mark. “Real Alternatives on Decisive Issues: A Response to Alcibiades Malapi-Nelson.” Social Epistemology Review and Reply Collective 5, no. 4 (2016): 52-55.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2U9


Image credit: NASA Goddard Space Flight Center, via flickr

My thanks to Dr. Malapi-Nelson for his attention (2016) to my article (2015) and some very kind words he had for it. As a part-time classicist and Socratic philosopher, it is of course an unusual delight to be criticized by an Alcibiades. I am put in mind of Plutarch’s life of that flamboyant character, which seems to suggest that Socrates made Alcibiades less destructive by making him realize that his hyperbolic desires were inherently insatiable, thus reining in his tyrannical impulses by rendering him incapable of taking his political aims too seriously. There may be some analogy to the effect I would like to have on the extravagant fantasies of transhumanism, with their potential for destroying humane limits in the name of an infinite dissatisfaction with given reality. (I think Bob Frodeman and I are pulling together on this, however mismatched a pair of draft animals we may otherwise be.) Continue Reading…

Author Information: William T. Lynch, Wayne State University, William.Lynch@wayne.edu

Lynch, William T. “Darwinian Social Epistemology: Science and Religion as Evolutionary Byproducts Subject to Cultural Evolution.” Social Epistemology Review and Reply Collective 5, no. 2 (2016): 26-68.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2Ci


Image credit: Susanne Nilsson, via flickr

Abstract

Key to Steve Fuller’s recent defense of intelligent design is the claim that it alone can explain why science is even possible. By contrast, Fuller argues that Darwinian evolutionary theory posits a purposeless universe which leaves humans with no motivation to study science and no basis for modifying an underlying reality. I argue that this view represents a retreat from insights about knowledge within Fuller’s own program of social epistemology. I show that a Darwinian picture of science, as also of religion, can be constructed that explains how these complex social institutions emerged out of a process of biological and cultural evolution. Science and religion repurpose aspects of our evolutionary inheritance to the new circumstances of more complex societies that have emerged since the Neolithic revolution.  Continue Reading…

Author Information: Alcibiades Malapi-Nelson, York University, alci.malapi@outlook.com

Malapi-Nelson, Alcibiades. “Transhumanism, Christianity and Modern Science: Some Clarifying Points Regarding Shiffman’s Criticism of Fuller.” Social Epistemology Review and Reply Collective 5, no. 2 (2016): 1-5.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2Ah


Image credit: ImAges ImprObables, via flickr

Mark Shiffman recently published a review of Steve Fuller’s The Proactionary Imperative in First Things, a journal of religion and public life (“Humanity 4.5,” November 2015). While the main synopsis of Fuller’s argument regarding transhumanism seems fair and accurate, there are a number of points where the author likely does not entirely grasp Fuller’s views within a broader context, namely that of Fuller’s previous work. Also, Shiffman does not clarify features of his own theoretical context that later trigger some amount of confusion. Continue Reading…