Archives For transhumanism

The following are a set of questions concerning the place of transhumanism in the Western philosophical tradition that Robert Frodeman’s Philosophy 5250 class at the University of North Texas posed to Steve Fuller, who met with the class via Skype on 11 April 2017.

Shortlink: http://wp.me/p1Bfg0-3yl

Image credit: Joan Sorolla, via flickr

1. First a point of clarification: we should understand you not as a health span increaser, but rather as interested in infinity, or in some sense in man becoming a god? That is, H+ is a theological rather than practical question for you?

Yes, that’s right. I differ from most transhumanists in stressing that short-term sacrifice—namely, in the form of risky experimentation and self-experimentation—is a price that will probably need to be paid if the long-term aims of transhumanism are to be realized. Moreover, once we finally make the breakthrough to extend human life indefinitely, there may be a moral obligation to make room for future generations, which may take the form of sending the old into space or simply encouraging suicide.

2. How do you understand the relationship between AI and transhumanism?

When Julian Huxley coined ‘transhumanism’ in the 1950s, it was mainly about eugenics, the sort of thing that his brother Aldous satirized in Brave New World. The idea was that the transhuman would be a ‘new and improved’ human, not so different from a new model car. (Recall that Henry Ford is the founding figure of Brave New World.) However, with the advent of cybernetics, also happening around the same time, the idea that distinctly ‘human’ traits might be instantiated in both carbon and silicon began to be taken seriously, with AI being the major long-term beneficiary of this line of thought. Some transhumanists, notably Ray Kurzweil, find the AI version especially attractive, perhaps because it caters to their ‘gnostic’ impulse to have the human escape all material constraints. In the transhumanist jargon, this is called ‘morphological freedom’, a sort of secular equivalent of pure spirituality. However, this is to take AI in a somewhat different direction from its founders in the era of cybernetics, which was about creating intelligent machines from silicon, not about transferring carbon-based intelligence into silicon form.

3. How seriously do you take talk (by Bill Gates and others) that AI is an existential risk?

Not very seriously—at least on its own terms. By the time some superintelligent machine might pose a genuine threat to what we now regard as the human condition, the difference between human and non-human will have been blurred, mainly via cyborg identities of the sort for which Stephen Hawking might end up being seen as a trailblazer. Whatever political questions would arise concerning AI at that point would likely divide humanity itself profoundly and not be a simple ‘them versus us’ scenario. It would be closer to the Cold War choice of Communism vs Capitalism. But honestly, I think all this ‘existential risk’ stuff gets its legs from genuine concerns about cyberwarfare. But taken on its face, cyberwarfare is nothing more than human-on-human warfare conducted by high tech means. The problem is still mainly with the people fighting the war rather than the algorithms that they program to create these latest weapons of mass destruction. I wonder sometimes whether this fixation on superintelligent machines is simply an indirect way to get humans to become responsible for their own actions—the sort of thing that psychoanalysts used to call ‘displacement behavior’ but the rest of us call ‘ventriloquism’.

4. If, as Socrates claims, to philosophize is to learn how to die, does H+ represent the end of philosophy?

Of course not! The question of death is just posed differently because even from a transhumanist standpoint, it may be in the best interest of humanity as a whole for individuals to choose death, so as to give future generations a chance to make their mark. Alternatively, and especially if transhumanists are correct that our extended longevity will be accompanied by rude health, then the older and wiser among us—and there is no denying that ‘wisdom’ is an age-related virtue—might spend their later years taking greater risks, precisely because they would be better able to handle the various contingencies. I am thinking that such healthy elderly folk might be best suited to interstellar exploration because of the ultra-high risks involved. Indeed, I could see a future social justice agenda that would require people to demonstrate their entitlement to longevity by documenting the increasing amount of risk that they are willing to absorb.

5. What of Heidegger’s claim that to be an authentic human being we must project our lives onto the horizon of our death?

I couldn’t agree more! Transhumanism just puts more options on the table for what death looks like. For example, one might choose to die with or without the prospect of future resurrection. One might also just upload one’s mind into a computer, which would be its own special kind of resurrection. I think Heidegger and other philosophers have placed such great import on death simply because of its apparent irreversibility. However, if you want to recreate Heidegger’s sense of ‘ultimate concern’ in a post-death world, all you would need to do is find some irreversible processes and unrecoverable opportunities that even transhumanists acknowledge. A hint is that when transhumanism was itself resurrected in its current form, it was known as ‘extropianism’, suggesting an active resistance to entropy. For transhumanists—very much in the spirit of the original cybernetician, Norbert Wiener—entropy is the ultimate irreversible process and hence the ultimate challenge for the movement to overcome.

6. What is your response to Heidegger’s claim that it is in the confrontation with nothingness, in the uncanny, that we are brought back to ourselves?

Well, that certainly explains the phenomenon that roboticists call the ‘uncanny valley’, whereby people are happy to deal with androids until they resemble humans ‘a bit too much’, at which point people are put off. There are two sides to this response—not only that the machines seem too human but also that they are still recognized as machines. So the machines haven’t quite yet fooled us into thinking that they’re one of us. One hypothesis to explain the revulsion is that such androids appear to be like artificially animated dead humans, a bit like Frankenstein. Heideggerians can of course use all this to their advantage to demonstrate that death is the ultimate ‘Other’ to the human condition.

7. Generally, who do you think are the most important thinkers within the philosophic tradition for thinking about the implications of transhumanism?

Most generally, I would say the Platonic tradition, which has been most profound in considering how the same form might be communicated through different media. So when we take seriously the prospect that the ‘human’ may exist in carbon and/or silicon and yet remain human, we are following in Plato’s footsteps. Christianity holds a special place in this line of thought because of the person of Jesus Christ, who is somehow at once human and divine in equal and all respects. The branch of theology called ‘Christology’ is actually dedicated to puzzling over these matters, various solutions to which have become the stuff of science fiction characters and plots. St Augustine originally made the problem of Christ’s identity a problem for all of humanity when he leveraged the Genesis claim that we are created in the ‘image and likeness of God’ to invent the concept of ‘will’ to name the faculty of free choice that is common to God and humans. We just exercise our wills much worse than God exercises his, as demonstrated by Adam’s misjudgment, which started Original Sin (an Augustinian coinage). When subsequent Christian thinkers have said that ‘the flesh is weak’, they are talking about how humanity’s default biological conditions hold us back from fully realizing our divine potential. Kant acknowledged as much in secular terms when he explicitly defined the autonomy necessary for truly moral action in terms of resisting the various paths of least resistance put before us. These are what Christians originally called ‘temptations’, what Kant himself called ‘heteronomy’ and what Herbert Marcuse, in a truly secular vein, would later call ‘desublimation’.

8. One worry that arises from the Transhumanism project (especially about gene editing, growing human organs in animals, etc.) regards the treatment of human enhancements as “commercial products”. In other words, the worry concerns the (further) commodification of life. Does this concern you? More generally, doesn’t H+ imply a perverse instrumentalization of our being?

My worries about commodification have less to do with the process itself than with the fairness of the exchange relations in which the commodities are traded. Influenced by Locke and Nozick, I would draw a strong distinction between alienation and exploitation, which tends to be blurred in the Marxist literature. Transhumanism arguably calls for an alienation of the body from human identity, in the sense that your biological body might be something that you trade for a silicon upgrade, yet your humanity remains intact on both sides of the transaction, at least in terms of formal legal recognition. Historic liberal objections to slavery rested on a perceived inability to do this coherently. Marxism upped the ante by arguing that the same objections applied to wage labor under the sort of capitalism promoted by the classical political economists of Marx’s day, who saw themselves as scientific underwriters of the new liberal order emerging in post-feudal Europe. However, the force of Marxist objections rests on alienation being linked to exploitation. In other words, not only am I free to sell my body or labor, but you are also free to offer whatever price serves to close the sale. However, the sorts of power imbalances which lie behind exploitation can be—and have been—addressed in various ways. Admittedly more work needs to be done, but a time will come when alienation is simply regarded as a radical exercise of freedom—specifically, the freedom to, say, project myself as an avatar in cyberspace or, conversely, convert part of my being to property that can be traded for something that may in turn enhance my being.

9. Robert Nozick paints a possible scenario in Anarchy, State, and Utopia where he describes a “genetic supermarket” where we can choose our genes just as one selects a frozen pizza. Nozick’s scenario implies a world where human characteristics are treated in the way we treat other commercial products. In the Transhuman worldview, is the principle or ultimate value of life commercial?

There is something to that, in the sense that anything that permits discretionary choice will lend itself to commercialization unless the state intervenes—but I believe that the state should intervene and regulate the process. Unfortunately, from a PR standpoint, a hundred years ago that was called ‘eugenics’. Nevertheless, people in the future may need to acquire a license to procreate, constraints may even be put on the sorts of offspring that are and are not permissible, and people may even be legally required to undergo periodic forms of medical surveillance—at least as a condition of employment or welfare benefits. (Think Gattaca as a first pass at this world.) It is difficult to see how an advanced democracy that acknowledges already existing persistent inequalities in life-chances could agree to ‘designer babies’ without also imposing the sort of regime that I am suggesting. Would this unduly restrict people’s liberty? Perhaps not, if people will have acquired the more relaxed attitude to alienation, as per my answer to the previous question. However, the elephant in the room—which I argued in The Proactionary Imperative is more important—is liability. In other words, who is responsible when things go wrong in a regime which encourages people to experiment with risky treatments? This is something that should focus the minds of lawyers and insurers, especially in a world where people are presumed to be freer per se because they have freer access to information.

10. Is human enhancement consistent with other ways in which people modify their lifestyles; that is, is it analogous in principle to buying a new cell phone, learning a language or working out? Is it a process of acquiring ideas, goods, assets, and experiences that distinguish one person from another, either as an individual or as a member of a community? If not, how is human enhancement different?

‘Human enhancement’, at least as transhumanists understand the phrase, is about ‘morphological freedom’, which I interpret as a form of ultra-alienation. In other words, it’s not simply about people acquiring things, including prosthetic extensions, but also converting themselves to a different form, say, by uploading the contents of one’s brain into a computer. You might say that transhumanism’s sense of ‘human enhancement’ raises the question of whether one can be at once trader and traded in a way that enables the two roles to be maintained indefinitely. Classical political economy seemed to imply this, but Marx denied its ontological possibility.

11. The thrust of 20th Century Western philosophy could be articulated in terms of the strife for possible futures, whether that future be Marxist, Fascist, or other ideologically utopian schemes, and the philosophical fallout of coming to terms with their successes and failures. In our contemporary moment, it appears as if widespread enthusiasm for such futures has disappeared, as the future itself seems as fragmented as our society. H+ is a new, similar effort; but it seems to be a specific evolution of the futurism focused, not on a society, but on the human person (even, specific human persons). Comments?

In terms of how you’ve phrased your question, transhumanism is a recognizably utopian scheme in nearly all respects—including the assumption that everyone would find its proposed future intrinsically attractive, even if people disagree on how or whether it might be achieved. I don’t see transhumanism as so different from capitalism or socialism as pure ideologies in this sense. They all presume their own desirability. This helps to explain why people who don’t agree with the ideology are quickly diagnosed as somehow mentally or morally deficient.

12. A common critique of Heidegger’s thought comes from an ethical turn in Continental philosophy. While Heidegger understands death to be the harbinger of meaning, he means specifically and explicitly one’s own death. Levinas, however, maintains that the primary experience of death that does this work is the death of the Other. One’s experience with death comes to one through the death of a loved one, a friend, a known person, or even through the distant reality of a war or famine across the world. In terms of this critique, the question of transhumanism then leads to a socio-ethical concern: if one, using H+ methods, technologies, and enhancements, can significantly inoculate oneself against the threat of death, how ethically (in the Levinasian sense) can one then legitimately live in relation to others in a society, if the threat of the death of the Other no longer provides one the primal experience of the threat of death?

Here I’m closer to Heidegger than Levinas in terms of grounding intuition, but my basic point would be that an understanding of the existence and significance of death is something that can be acquired without undergoing a special sort of experience. Phenomenologically inclined philosophers sometimes seem to assume that a significant experience must happen significantly. But this is not true at all. My main understanding of death as a child came not from people I knew dying, but simply from watching the morning news on television and learning about the daily body count from the Vietnam War. That was enough for me to appreciate the gravity of death—even before I started reading the Existentialists.

Editor’s Note:

    The following are elements of syllabi for a graduate, and an undergraduate, course taught by Robert Frodeman in spring 2017 at the University of North Texas. These courses offer an interesting juxtaposition of texts aimed at reimagining how to perform academic philosophy as “field philosophy”. Field philosophy seeks to address contemporary public debates (regarding transhumanism, for example) meaningfully and demonstrably, attending to the shifting ideas and frameworks of both the Humboldtian university and the “new American” university.

Shortlink: http://wp.me/p1Bfg0-3xB

Philosophy 5250: Topics in Philosophy

Overall Theme

This course continues my project of reframing academic philosophy within the approach and problematics of field philosophy.

In terms of philosophic categories, we will be reading classics in 19th and 20th century continental philosophy: Hegel, Nietzsche, and Heidegger. But we will be approaching these texts with an agenda: to look for insights into a contemporary philosophical controversy, the transhumanist debate. This gives us two sets of readings – our three authors, and material from the contemporary debate surrounding transhumanism.

Now, this does not mean that we will restrict our interest in our three authors to what is applicable to the transhumanist debate; our thinking will go wherever our interests take us. But the topic of transhumanism will be primus inter pares.

Readings

  • Hegel, Phenomenology of Spirit, Preface
  • Hegel, The Science of Logic, selections
  • Heidegger, Being and Time, Division 1, Macquarrie translation
  • Heidegger, ‘The Question Concerning Technology’
  • Nietzsche, selections from Thus Spoke Zarathustra and Beyond Good and Evil

Related Readings

Grading

You will have two assignments, both due at the end of the semester. I strongly encourage you to turn in drafts of your papers.

  • A 2500 word paper on a major theme from one of our three authors.
  • A 2500 word paper using our three authors to illuminate your view of the transhumanist challenge.

Philosophy 4750: Philosophy and Public Policy

Overview

This is a course in meta-philosophy. It seeks to develop a philosophy adequate for the 21st century.

Academic philosophy has been captured by a set of categories (ancient, modern, contemporary; ethics, logic, metaphysics, epistemology) that are increasingly dysfunctional for contemporary life. Therefore, this is not merely a course on a specific subject matter (i.e., ‘public policy’) to be added to the rest. Rather, it seeks to question, and philosophize about, the entire knowledge enterprise as it exists today – and to philosophize about the role of philosophy in understanding and perhaps (re)directing the knowledge enterprise.

The course will cover the following themes:

  • The past, present, and future of the university in the Age of Google
  • The end of disciplinarity and the rise of accountability culture
  • The New Republic of Letters and the role of the humanist today
  • The failure of applied philosophy and the development of alternative models

Course Structure

This course is ‘live’: it reflects 20 years of my research on the place of philosophy in contemporary society. As such, the course embodies a Humboldtian connection between teaching and research: I am not simply a teacher and a researcher; I’m a teacher-researcher who shares the insights I’m developing with students, testing my thinking in the classroom, and sharing my freshest thoughts. This breaks with the corporate model of education where the professor is an interchangeable cog, teaching the same materials that could be gotten at any university worldwide – while also opening me up to charges of self-indulgence.

Readings

  • Michael M. Crow and William B. Dabars, Designing the New American University
  • Crow chapter in HOI
  • Clark, Academic Charisma
  • Fuller, The Academic Caesar
  • Rudy, The Universities of Europe, 1100-1914
  • Fuller, Sociology of Intellectual Life
  • Smith, Philosophers 6 Types
  • Socrates Tenured: The Institutions of 21st Century Philosophy
  • Plato, The Republic, Book 1

Author Information: Jason M. Pittman, Capitol Technology University, jmpittman@captechu.edu

Pittman, Jason M. “Trust and Transhumanism: An Analysis of the Boundaries of Zero-Knowledge Proof and Technologically Mediated Authentication.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 21-29.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3tZ


Image credit: PhOtOnQuAnTiQu, via flickr

Abstract

Zero-knowledge proof serves as the fundamental basis for technological concepts of trust. The most familiar applied solution of technological trust is authentication (human-to-machine and machine-to-machine), most typically a simple password scheme. Further, by extension, much of society-generated knowledge presupposes the immutability of such a proof system when ontologically considering (a) the verification of knowing and (b) the amount of knowledge required to know. In this work, I argue that the zero-knowledge proof underlying technological trust may cease to be viable upon realization of partial transhumanism in the form of embedded nanotechnology. Consequently, existing normative social components of knowledge—chiefly, verification and transmission—may be undermined. In response, I offer recommendations on potential society-centric remedies in partial trans-humanistic technologically mediated realities with the goal of preserving technological trust.

Password-based authentication features prominently in daily life. For many of us, authentication is a ritual repeated many times on any given day as we enter a username and password into various computing systems. In fact, research (Florêncio & Herley, 2007; Sasse, Steves, Krol, & Chisnell, 2014) revealed that we, on average, enter approximately eight different username and password combinations as many as 23 times a day. The number of times a computing system authenticates to another system is even more frequent. Simply put, authentication is normative in modern, technologically mediated life.

Indeed, authentication has been the normal modality of establishing trust within the context of technology (and, by extension, technology mediated knowledge) for several decades. Over the course of these decades, researchers have uncovered a myriad of flaws in specific manifestations of authentication—weak algorithms, buggy software, or even psychological and cognitive limits of the human mind. Upon closer inspection, one can surmise that the philosophy associated with passwords has not changed. Authentication continues to operate on the fundamental paradigm of a secret, a knowledge-prover, and a knowledge-verifier. The epistemology related to password-based authentication—how the prover establishes possession of the secret such that the verifier can trust the prover without the prover revealing the secret—presents a future problem.
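
To make the paradigm of a secret, a knowledge-prover, and a knowledge-verifier concrete, here is a minimal sketch in Python (my own illustration, not an example from the article or the cited studies; the helper names enroll and authenticate are assumed). In this simplest salted-hash variant the prover still presents the secret at login and the verifier merely avoids storing it; the zero-knowledge schemes discussed later remove even that disclosure.

```python
import hashlib, hmac, os

# A minimal sketch of the secret / knowledge-prover / knowledge-verifier paradigm.
# Illustrative only; the function names are assumed, not taken from the cited works.

def enroll(password: str):
    """Verifier-side enrollment: keep a salted, stretched digest, never the secret."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    """The prover presents the secret; the verifier re-derives and compares digests."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = enroll("correct horse battery staple")
assert authenticate("correct horse battery staple", salt, digest)
assert not authenticate("guessed secret", salt, digest)
```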

A Partial Transhuman Reality

While some may consider transhumanism to be the province of science fiction, others such as Kurzweil (2005) argue that the merging of Man and Machine has already begun. Of notable interest in this work is partial-transhumanist nanotechnology or, in simple terms, the embedding of microscopic computing systems in our bodies. Such nanotechnology need not be fully autonomous but typically does include some computational sensing ability. The most advanced examples are the nanomachines used in medicine (Verma, Vijaysingh, & Kushwaha, 2016). Nevertheless, such nanotechnology represents the blueprint for rapid advancement. In fact, research is well underway on using nanomachines (or nanites) for enhanced cognitive computations (Fukushima, 2016).

At the crossroads of partial transhumanism (nanotechnology) and authentication there appears to be a deeper problem. In short, partial-transhumanism may obviate the capacity for a verifier to trust whether a prover, in truth, possesses a secret. Should a verifier not be able to trust a prover, the entirety of authentication may collapse.

Much research investigates the mathematical, psychological, and technological bases for authentication, but there has been little philosophical exploration of it. Work such as that of Qureshi, Younus, and Khan (2009) developed a general philosophical overview of password-based authentication but largely focused on constructing a philosophical taxonomy to overlay modern password technology. The literature extending Qureshi et al. builds exclusively upon the strictly technical side of password-based authentication, ignoring the philosophical.

Accordingly, the purpose of this work is to describe the concepts directly linked to modern technological trust in authentication and demonstrate how, in a partial transhumanist reality, the concepts of zero-knowledge proof may cease to be viable. Towards this end, I will describe the conceptual framework underlying the operational theme of this work. Then, I explore the abstraction of technological trust as it relates to understanding proof of knowledge. This understanding of where trust fits into normative social epistemology will inform the subsequent description of the problem space. After that, I move on to describe the conceptual architecture of zero-knowledge proofs, which serve as the pillars of modern authentication, and how transhumanism may adversely impact them. Finally, I will present recommendations on possible society-centric remedies in both partial trans-humanistic as well as full trans-humanistic technologically mediated realities with the goal of preserving technological trust.

Conceptual Framework

Establishing a conceptual framework before delving too far into building the case for trust ceasing to be viable in a partial transhumanist reality will permit a deeper understanding of the issue at hand. Such a frame of reference must necessarily include a discussion of how technology inherently mediates our relationship with other humans and technologies. Put another way, technologies are unmistakably involved in human subjectivity while human subjectivity forms the concept of technology (Kiran & Verbeek, 2010). This presupposes a grasp of the technological abstraction, though.

Broadly, technology in the context of this work is taken to mean qualitative (abstract) applied science as opposed to practical or quantitative applied science. This definition follows closely with recent discussions on technology by Scalambrino (2016) and the body of work by Heidegger and Plato. In other words, technology should be understood as those modalities that facilitate progress relative to socially beneficial objectives. In specific, we are concerned with the knowledge modality as opposed to discrete mechanisms, objects, or devices.

What is more, the adjoining of technology, society, and knowledge is a critical element in the conceptual framework for this work. Technology is no longer a single-use, individualized object. Instead, technology is a social arbiter that has grown to be innate to what Ihde (1990) related as a normative human gestalt. While this view contrasts with views such as those offered by Feenberg (1999), the two are not necessarily exclusive.

Further, we must establish the component of our conceptual framework that evidences what it means to verify knowledge. One approach is a scientific model that procedurally quantifies knowledge within a predefined structure. Given the technological nature of this work, such may be inescapable at least as a cognitive bias. More abstractly though, verification of knowledge is conducted by inference, whether by the individual or across social collectives. The mechanism of inference, in turn, can be expressed in proof. Similarly, another component in our conceptual framework corresponds to the amount of knowledge necessary to demonstrate knowing. As I discuss later, the amount of knowing is either full or limited. That is, proof with knowledge or proof without knowledge.

Technological Trust

The connection between knowledge and trust has a strong history of debate in the social epistemic context. This work is not intended to directly add to the debate surrounding trust. However, recognition of the debate is necessary to develop the bridge connecting trust and zero-knowledge proofs before moving on to zero-knowledge proof and authentication. Further, conceptualizing technological trust permits the construction of a foundation for the central proposition in this work.

To the point, Simon (2013) argued that knowledge relies on trust. McCraw (2015) extended this claim by establishing four components of epistemic trust: belief, communication, reliance, and confidence. These components are further grouped into epistemic (belief and communication) as well as trust (reliance and confidence) conditionals (2015). Trust, in this context, exemplifies the social aspect of knowledge insofar as we do not directly experience trust but hold trust as valid because of the collective position of validity.

Furthermore, Simmel (1978) perceived trust to be integral to society. That is, trust, as a knowledge construct, exists in many disciplines and, per Origgi (2004), permeates our cognitive existence. Additionally, there is an argument to be made that, by using technology, we implicitly place trust in such technology (Kiran & Verbeek, 2010). Nonetheless, trust we do.

Certainly, part of such trust is due to the mediation provided by our ubiquitous technology. As well, trust in technology and trust from technology are integral functions of modern social perspectives. On the other hand, we must be cautious in understanding the conditions that lead to technological trust. Work by Ihde (1979; 1990) and others has suggested that technological trust stems from our relation to the technology. Perhaps closer to transhumanism, Levy (1998) offered that such trust is more associated with technology that extends us.

Technology that extends human capacity is a principal abstraction. As well, concomitant to technological trust is knowledge. While the conceptual framework for this work includes verification of knowledge as well as the amount of knowledge necessary to evidence knowing, there is a need to include knowledge proofs in the discourse.

Zero-Knowledge Proof

Proof of knowledge is a logical extension of the discussion of trust. Where trust can be thought as the mechanism through which we allow technology to mediate reality, proof of knowledge is how we come to trust specific forms of technology. In turn, proof of knowledge—specifically, zero-knowledge proof—provides a foundation for trust in technological mediation in the general case and technological authentication in the specific case.

The Nature of Proof

The construct of proof may adopt different meanings depending upon the enveloping context. In the context of this work, we use the operational meaning provided by Pagin (1994). In other words, proof is established during the process of validating the correctness of a proposition. Furthermore, for any proof to be perceived as valid, it must demonstrate elements of completeness and soundness (Pagin, 1994; 2009).

There is, of course, a larger discourse on the epistemic constraints of proof (Pagin, 1994; Williamson, 2002; Marton, 2006). Such lies outside the scope of this work, however, as we are not concerned with whether proof can be offered for knowledge but rather with how proof occurs. In other words, we are interested in the mechanism of proof. Thus, for our purposes, we presuppose that proof of knowledge is possible and is so through two possible operations: proof with knowledge and proof without knowledge.

Proof with Knowledge

A consequence of a typical proof system is that all involved parties gain knowledge. That is, if I know x exists in a specific truth condition, I must present all relevant premises so that you can reach the same conclusion. Thus, the proposition is not only true or false to us both equally but also the means of establishing such truth or falsehood is transparent. This is what can be referred to as proof with knowledge.

In most scenarios, proof with knowledge is a positive mechanism. That is, the parties involved mutually benefit from the outcome. Mathematics and logic are primary examples of this proof state. However, when considering the case of technological trust in the form of authentication, proof with knowledge is not desirable.

Proof Without Knowledge

Imagine that you know that p is true. Further, you wish to demonstrate to me that you know this without revealing how you came to know or what it is exactly that you know. In other words, you wish to keep some aspect of the knowledge secret. I must validate that you know p without gaining any knowledge. This is the second state of proof, known as zero-knowledge proof, and it forms the basis for technological trust in the form of authentication.

Goldwasser, Micali, and Rackoff (1989) defined zero-knowledge proofs as a formal, systematic approach to validating the correctness of a proposition without communicating additional knowledge. ‘Additional’ in this context can be taken to imply knowledge other than the proposition itself. An important aspect is that the proposition originates with a verifier entity as opposed to a prover entity. In response to the proposition to be proven, the prover completes an action without revealing any knowledge to the verifier other than the knowledge that the action was completed. If the proposition is probabilistically true, the verifier is satisfied. Note that the verifier and prover entities can be in the form of machine-to-human, human-to-human, or machine-to-machine.
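
To make the abstraction tangible, the following toy sketch runs a Schnorr-style identification protocol in Python. It is my own illustration rather than a construction taken from Goldwasser, Micali, and Rackoff, and the tiny parameters are chosen for readability, not security: the prover convinces the verifier that it knows the secret x behind the public key y without ever transmitting x.

```python
import secrets

# Toy Schnorr-style identification protocol: an illustrative sketch of a
# zero-knowledge proof of knowledge. NOT secure with these tiny parameters.
p, q, g = 23, 11, 2          # g generates a subgroup of prime order q in Z_p*

def keygen():
    """Prover's long-term secret x and public key y = g^x mod p."""
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def commit():
    """Prover's first move: commit to a fresh random nonce r, sending t = g^r."""
    r = secrets.randbelow(q - 1) + 1
    return r, pow(g, r, p)

def respond(x, r, c):
    """Prover's answer to the challenge c; reveals nothing about x if r is random."""
    return (r + c * x) % q

def verify(y, t, c, s):
    """Verifier accepts iff g^s == t * y^c (mod p), never learning x itself."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()              # prover holds x; verifier holds only y
r, t = commit()              # prover -> verifier: commitment t
c = secrets.randbelow(q)     # verifier -> prover: random challenge c
s = respond(x, r, c)         # prover -> verifier: response s
assert verify(y, t, c, s)    # verifier is convinced the prover knows x
```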

Zero-knowledge proofs are the core of technological trust and, accordingly, authentication. While discrete instances of authentication exist practically outside of the social epistemic purview, the broader theory of authentication is, in fact, a socially collective phenomenon. That is, even in the abstract, authentication is a specific case for technologically mediated trust.

Authentication

The zero-knowledge proof abstraction translates directly into modern authentication modalities. In general, authentication involves a verifier issuing a request to prove knowledge and a prover establishing knowledge by means of a secret to the verifier. Thus, the ability to provide such proof in a manner that is consistent with the verifier request is technologically sufficient to authenticate (Syverson & Cervesato, 2000). However, there are subtleties within the authentication zero-knowledge proof that warrant discussion.

Authentication, or being authenticated, implies two technologically mediated realities. First, the authentication process relies upon the authenticating entity (i.e., the prover) possessing a secret exclusively. The mediated reality for both the verifier and the prover is that to be authenticated implies an identity. In simple terms, I am who I claim to be based on (a) exclusive possession of the secret; and (b) the ability to sufficiently demonstrate such through the zero-knowledge proof to the verifier. Likewise, the verifier is identified to the prover.

Secondly, authentication establishes a general right of access for the prover based on, again, possession of an exclusive secret. Consequently, there is a technological mediation of what objects are available to the prover once authenticated (i.e., all authorized objects) or not authenticated (i.e., no objects). Thus, the zero-knowledge proof is a mechanism of associating the prover’s identity with a set of objects in the world and facilitating access to those objects. That is to say, once authenticated, the identity has operational control within the corresponding space over linked objects.
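
This linkage between an authenticated identity and its authorized objects can be pictured with a deliberately simple access-control sketch (again my own illustration; the names ACL and accessible_objects are assumed rather than drawn from the cited sources): no successful proof, no objects.

```python
# Toy access-control list: an authenticated identity is linked to a set of
# objects it may operate on; a failed or absent proof yields nothing.
ACL = {"alice": {"lab_notes", "gene_editor"}, "bob": {"lab_notes"}}

def accessible_objects(identity: str, authenticated: bool) -> set:
    """Return the objects linked to an identity, but only once it has authenticated."""
    return ACL.get(identity, set()) if authenticated else set()

assert accessible_objects("alice", authenticated=True) == {"lab_notes", "gene_editor"}
assert accessible_objects("alice", authenticated=False) == set()   # no proof, no objects
```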

Normatively, authentication is a socially collective phenomenon despite individual authentication relying upon exclusive zero-knowledge proof (Van Der Meyden & Wilke, 2007). Principally, authentication is a means of interacting with other humans, technology, and society at large while maintaining trust. However, if authentication is a manifestation of technological trust, one must wonder if transhumanism may affect the zero-knowledge proof abstraction.

Transhumanism

More (1990) described transhumanism as a philosophy that embraces the profound changes to society and the individual brought about by science and technology. There is strong debate as to when such change will occur, although most futurists argue that technology has already begun to transcend the breaking point of explosive growth. Technology in this context aligns with the conceptual framework of this work. As well, there is agreement in the philosophical literature with the idea of such technological expansion (Bostrom, 1998; More, 2013).

Furthermore, transhumanism exists in two forms: partial transhumanism and full transhumanism (Kurzweil, 2005). This work is concerned with partial transhumanism exclusively. Furthermore, partial transhumanism is inclusive of three modalities. According to Kurzweil (2005), these modalities are (a) technology sufficient to manipulate human life genetically; (b) nanotechnology; and (c) robotics. In the context of this work, I am interested in the potentiality of nanotechnology.

Briefly, nanotechnology exists in several forms. The form central to this work involves embedding microscopic machines within human biology. These machines can perform any number of operations, including augmenting existing bodily systems. Along these lines, Vinge (1993) argued that a by-product of technological expansion will be the monumental increase in human intelligence. Although there are a variety of mechanisms by which technology will amplify raw brainpower, nanotechnology is a forerunner in the mind of Kurzweil and others.

What is more, the computational power of nanites is measurable and predictable (Chau, et al., 2005; Bhore, 2016). The amount of human intellectual capacity projected to result from nanotechnology may be sufficient to impart hyper-cognitive or even extrasensory abilities. With such augmentation, the human mind will be capable of computational decision-making well beyond existing technology.

While the notion of nanites embedded in our bodies, augmenting various biomechanical systems to the point of precognitive awareness of zero-knowledge proof verification, may strike some as science fiction, there is growing precedent. Existing research in the field of medicine demonstrates that at least partially autonomous nanites have a grounding in reality (Huilgol & Hede, 2006; Das et al., 2007; Murday, Siegel, Stein, & Wright, 2009). Thus, envisioning a near future where more powerful and autonomous nanites are available is not difficult.

Technological Trust in Authentication

The purpose of this work was to describe technological trust in authentication and demonstrate how, in a future partial transhumanist reality, the concepts of zero-knowledge proof will cease to be viable. Towards that end, I examined technological trust in the context of how and why such trust is established. Further, knowledge proofs were discussed with an emphasis on proofs without knowledge. Such led to an overview of authentication and, subsequently, transhumanism.

Based on the analysis so far, the technological trust afforded by such proof appears to be no longer feasible once embedded nanotechnology is introduced into humans. Nanite augmented cognition will result in the capability for a knowledge-prover to, on demand, compute knowledge sufficient to convince a knowledge-verifier. Outright, such a reality breaks the latent assumptions that operationalize the conceptual framework into related technology. That is, once the knowledge-verifier cannot trust that the knowledge is known by the prover, a significant future problem arises.

Unfortunately, the fields of computer science and computer engineering do not historically plan well for paradigm-shifting innovations. Such is exacerbated when the paradigm shift has rapid onset after a long ramp-up time, as is the case with the technological singularity. More specifically, partial transhumanism as considered in this work may have unforeseen effects beyond the scope of the fields that created the technology in the first place. The inability to handle rapid shifts is largely related to these fields posing ‘what is’ questions.

Similarly, the Collingridge dilemma tells us that, “…the social consequences of a technology cannot be predicted early in the life of the technology” (1980, p. 11). Thus, adequate preparation for the eventual collapse of zero-knowledge proof requires asking what ought to be. Such a question is a philosophical question. As it stands, recognition of social epistemology as an interdisciplinary field already exists (Froehlich, 1989; Fuller, 2005; Zins, 2006). More still, there is a precedent for philosophy informing the science of technology (Scalambrino, 2016) and assembling the foundation of future-looking paradigm shifts.

Accordingly, a recommendation is for social epistemologists and technologists to jointly examine modifications to the abstract zero-knowledge proof such that the proof is resilient to nanite-powered knowledge computation. In conjunction, there may be a benefit in attempting to conceive of a replacement proof system that also harnesses partial-transhumanism for the knowledge-verifier in a manner commensurate with any increase in capacity for the knowledge-prover. Lastly, a joint effort may be able to envision a technologically mediated construct that does not require proof without knowledge at all.

References

Bhore, Pratik Rajan. “A Survey of Nanorobotics Technology.” International Journal of Computer Science & Engineering Technology 7, no. 9 (2016): 415-422.

Bostrom, Nick. “Predictions from Philosophy? How Philosophers Could Make Themselves Useful.” 1998. http://www.nickbostrom.com/old/predict.html

Chau, Robert, Suman Datta, Mark Doczy, Brian Doyle, Ben Jin, Jack Kavalieros, Amlan Majumdar, Matthew Metz and Marko Radosavljevic. “Benchmarking Nanotechnology for High-Performance and Low-Power Logic Transistor Applications.” IEEE Transactions on Nanotechnology 4, no. 2 (2005): 153-158.

Collingridge, David. The Social Control of Technology. New York: St. Martin’s Press, 1980.

Das, Shamik, Alexander J. Gates, Hassen A. Abdu, Garrett S. Rose, Carl A. Picconatto, and James C. Ellenbogen. “Designs for Ultra-Tiny, Special-Purpose Nanoelectronic Circuits.” IEEE Transactions on Circuits and Systems I: Regular Papers 54, no. 11 (2007): 2528–2540.

Feenberg, Andrew. Questioning Technology. London: Routledge, 1999.

Florencio, Dinei and Cormac Herley. “A Large-Scale Study of Web Password Habits.” In WWW ’07: Proceedings of the 16th International Conference on World Wide Web, 657-666, 2007.

Froehlich, Thomas J. “The Foundations of Information Science in Social Epistemology.”  In System Sciences, 1989. Vol. IV: Emerging Technologies and Applications Track, Proceedings of the Twenty-Second Annual Hawaii International Conference, 4 (1989): 306-314.

Fukushima, Masato. “Blade Runner and Memory Devices: Reconsidering the Interrelations between the Body, Technology, and Enhancement.” East Asian Science, Technology and Society 10, no. 1 (2016): 73-91.

Fuller, Steve. “Social Epistemology: Preserving the Integrity of Knowledge About Knowledge.” In Handbook on the Knowledge Economy, edited by David Rooney, Greg Hearn and Abraham Ninan, 67-79. Cheltenham, UK: Edward Elgar, 2005.

Goldwasser, Shafi, Silvio M. Micali and Charles Rackoff. “The Knowledge Complexity of Interactive Proof Systems.” SIAM Journal on Computing 18, no. 1 (1989): 186-208.

Huilgol, Nagraj and Shantesh Hede. “ ‘Nano’: The New Nemesis of Cancer.” Journal of Cancer Research and Therapeutics 2, no. 4 (2006): 186–95.

Ihde, Don. Technics and Praxis. Dordrecht: Reidel, 1979.

Ihde, Don. Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press, 1990.

Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York: Penguin Books, 2005.

Lévy, Pierre. Becoming Virtual: Reality in the Digital Age. New York: Plenum Trade, 1998.

Marton, Pierre. “Verificationists Versus Realists: The Battle Over Knowability.” Synthese 151, no. 1 (2006): 81-98.

More, Max. “Transhumanism: Towards a Futurist Philosophy.” Extropy, 6 (1990): 6-12.

More, Max. “The Philosophy of Transhumanism.” In The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, edited by Max More and Natasha Vita-More. Oxford: John Wiley & Sons, 2013. doi: 10.1002/9781118555927.ch1

Murday, J. S., R. W. Siegel, J. Stein, and J. F. Wright. “Translational Nanomedicine: Status Assessment and Opportunities.” Nanomedicine: Nanotechnology, Biology and Medicine 5, no. 3 (2009): 251-273. doi:10.1016/j.nano.2009.06.001

Origgi, Gloria. “Is Trust an Epistemological Notion?” Episteme 1, no. 1 (2004): 61-72.

Pagin, Peter. “Knowledge of Proofs.” Topoi 13, no. 2 (1994): 93-100.

Pagin, Peter. “Compositionality, Understanding, and Proofs.” Mind 118, no. 471 (2009): 713-737.

Qureshi, M. Atif, Arjumand Younus and Arslan Ahmed Khan. “Philosophical Survey of Passwords.” International Journal of Computer Science Issues 1 (2009): 8-12.

Sasse, M. Angela, Michelle Steves, Kat Krol, and Dana Chisnell. “The Great Authentication Fatigue – And How To Overcome It.” In Cross-Cultural Design, edited by P. L. P. Rau, 6th International Conference, CCD 2014, Held as Part of HCI International 2014, Heraklion, Crete, Greece, June 22-27, 2014: Proceedings, 228-239. Cham, Switzerland: Springer International Publishing, 2014.

Scalambrino, Frank. Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation. London and New York: Rowman & Littlefield International, 2016.

Simmel, Georg.  The Philosophy of Money. London: Routledge and Kegan Paul, 1978.

Simon, Judith. “Trust, Knowledge and Responsibility in Socio-Technical Systems.” University of Vienna and Karlsruhe Institute of Technology, 2013. https://www.iiia.csic.es/en/seminary/trust-knowledge-and-responsibility-socio-technical-systems

Syverson, Paul and Iliano Cervesato. “The Logic of Authentication Protocols.” In Foundations of Security Analysis and Design: Tutorial Lectures (FOSAD ’00), revised versions of lectures given during the IFIP WG 1.7 International School on Foundations of Security Analysis and Design, 63-136. London: Springer-Verlag, 2001.

Williamson, Timothy. Knowledge and its Limits. Oxford University Press on Demand, 2002.

Van Der Meyden, Ron and Thomas Wilke. “Preservation of Epistemic Properties in Security Protocol Implementations.” In Proceedings of the 11th Conference on Theoretical Aspects of Rationality and Knowledge (2007): 212-221.

Vinge, Vernor. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Proceedings of a symposium cosponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute and held in Westlake, Ohio, March 30-31, 1993, NASA Conference Publication 10129 (1993): 11-22.

Verma, S., K. Vijaysingh and R. Kushwaha. “Nanotechnology: A Review.” In Proceedings of the Emerging Trends in Engineering & Management for Sustainable Development, Jaipur, India, 19–20 February 2016.

Zins, Chaim. “Redefining Information Science: From ‘Information Science’ to ‘Knowledge Science’.” Journal of Documentation 62, no. 4 (2006): 447-461.

Justin Cruickshank at the University of Birmingham was kind enough to alert me to Steve Fuller’s talk “Transhumanism and the Future of Capitalism”—held by The Philosophy of Technology Research Group—on 11 January 2017.

Author Information: Mark Shiffman, Villanova University, mark.shiffman@villanova.edu

Shiffman, Mark. “Real Alternatives on Decisive Issues: A Response to Alcibiades Malapi-Nelson.” Social Epistemology Review and Reply Collective 5, no. 4 (2016): 52-55.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2U9


Image credit: NASA Goddard Space Flight Center, via flickr

My thanks to Dr. Malapi-Nelson for his attention (2016) to my article (2015) and some very kind words he had for it. As a part-time classicist and Socratic philosopher, it is of course an unusual delight to be criticized by an Alcibiades. I am put in mind of Plutarch’s life of that flamboyant character, which seems to suggest that Socrates made Alcibiades less destructive by making him realize that his hyperbolic desires were inherently insatiable, thus reining in his tyrannical impulses by rendering him incapable of taking his political aims too seriously. There may be some analogy to the effect I would like to have on the extravagant fantasies of transhumanism, with their potential for destroying humane limits in the name of an infinite dissatisfaction with given reality. (I think Bob Frodeman and I are pulling together on this, however mismatched a pair of draft animals we may otherwise be.)  Continue Reading…

Author Information: William T. Lynch, Wayne State University, William.Lynch@wayne.edu

Lynch, William T. “Darwinian Social Epistemology: Science and Religion as Evolutionary Byproducts Subject to Cultural Evolution.” Social Epistemology Review and Reply Collective 5, no. 2 (2016): 26-68.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2Ci

Image credit: Susanne Nilsson, via flickr

Abstract

Key to Steve Fuller’s recent defense of intelligent design is the claim that it alone can explain why science is even possible. By contrast, Fuller argues that Darwinian evolutionary theory posits a purposeless universe which leaves humans with no motivation to study science and no basis for modifying an underlying reality. I argue that this view represents a retreat from insights about knowledge within Fuller’s own program of social epistemology. I show that a Darwinian picture of science, as also of religion, can be constructed that explains how these complex social institutions emerged out of a process of biological and cultural evolution. Science and religion repurpose aspects of our evolutionary inheritance to the new circumstances of more complex societies that have emerged since the Neolithic revolution.  Continue Reading…

Author Information: Alcibiades Malapi-Nelson, York University, alci.malapi@outlook.com

Malapi-Nelson, Alcibiades. “Transhumanism, Christianity and Modern Science: Some Clarifying Points Regarding Shiffman’s Criticism of Fuller.” Social Epistemology Review and Reply Collective 5, no. 2 (2016): 1-5.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2Ah

Image credit: ImAges ImprObables, via flickr

Mark Shiffman recently published a review of Steve Fuller’s The Proactionary Imperative in the journal of religion and public life First Things (“Humanity 4.5”, Nov. 2015). While the main synopsis of Fuller’s argument regarding transhumanism seems fair and accurate, there are a number of points where the author likely does not entirely get Fuller’s views within a broader context—namely, that of Fuller’s previous work. Also, Shiffman does not clarify features of his own theoretical context that later trigger some amount of confusion.  Continue Reading…

Author Information: Gregory Sandstrom, European Humanities University, gregory.sandstrom@ehu.lt

Sandstrom, Gregory. “Steve Fuller’s False Hope in IDism: The Discovery Institute’s Anti-Transhumanism.” Social Epistemology Review and Reply Collective 4, no. 10 (2015): 1-7.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2kz


Image credit: Provided by Gregory Sandstrom (source unknown)

“I’m not machine. I’m not man. I’m more.” — John Connor (Terminator Genisys 2015)

While I have been gradually working on a couple of other articles related to SERRC posts (Frodeman 2015 and Eglash 2015) that challenge Steve Fuller’s embrace of ‘Intelligent Design’[1] (ID), this one is the easiest to finish due to the starkness of the problem. The Discovery Institute (DI), home of the Intelligent Design Movement (IDM), has been beating its anti-trans-humanism PR drum in recent years. Fuller, on the other hand, has made pro-trans-humanism into one of the main topics of his recent work, indeed calling it now a “full-blown ideology” in his and Lipinska’s The Proactionary Imperative (2014, v).  Continue Reading…

Author Information: Robert Frodeman, University of North Texas, Robert.Frodeman@unt.edu

Frodeman, Robert. “Anti-Fuller: Transhumanism and the Proactionary Imperative.” Social Epistemology Review and Reply Collective 4, no. 4 (2015): 38-43.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1Zr


Image credit: Dave Mathis, via flickr

Academics suffer from a type of déformation professionnelle: we believe that, across the long arc of history, ideas get their due. Our efforts are premised on the assumption that the best argument and deepest thinker will eventually be recognized.

Steve Fuller offers an interesting case in point. Few academics are as dedicated to the academic enterprise. His scholarship is prodigious, drawing from a wide range of historical and disciplinary sources. He publishes like crazy. Yet, despite its depth and verve, Fuller’s work has not gotten the notice it deserves—the attention, say, lavished on the Latours and Bourdieus of the world. Why? Besides accident, and the lack of a French accent, I see two factors at work.  Continue Reading…

Author Information: William Davis, Virginia Tech, widavis@vt.edu

Davis, William. “Moving Beyond the Human: Posthumanism, Transhumanism and Objects.” Social Epistemology Review and Reply Collective 4, no. 3 (2015): 9-14.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1Vr


Post- and Transhumanism: An Introduction
Edited by Robert Ranisch and Stefan Lorenz Sorgner
Peter Lang GmbH, Internationaler Verlag der Wissenschaften
313 pp.

We must learn to ignore the definitive shapes of humans, and of the nonhumans with which we share more and more of our existence. The blur that we would then perceive, the swapping of properties, is a characteristic of our premodern past, in the good old days of poesis, and a characteristic of our modern and nonmodern present as well (Latour 1994, 42).

Introduction

First, a confession: I am a late arrival to discussions of posthumanism and transhumanism. In my own work in philosophy of technology, I have struggled to find the direction I think philosophy of technology should take regarding fundamental philosophical positions pertaining to ontology, epistemology and ethics. In that sense, Post- and Transhumanism: An Introduction (2014) has served as a useful entry into contemporary discussion of what exists, how we can (and should) go about enquiring after those things that exist, and how we should conceive of ethics in a world inhabited, seemingly equally, by humans and non-humans (or, we might posit, unequally inhabited: there are far more non-humans than humans in this universe). What follows, then, could be fairly called an “unfamiliar” or “uninitiated” review of Robert Ranisch and Stefan Lorenz Sorgner’s edited text. Perhaps as a testament to the persuasive strategies and flair of the varied contributors to this edited text, I find myself quickly taking sides between posthumanism and transhumanism, only to have that position challenged by the next entry. In the process, my ontological and ethical views have undergone contestation and transformation.  Continue Reading…