Trust and Transhumanism: An Analysis of the Boundaries of Zero-Knowledge Proof and Technologically Mediated Authentication, Jason M. Pittman

Author Information: Jason M. Pittman, Capitol Technology University, jmpittman@captechu.edu

Pittman, Jason M. “Trust and Transhumanism: An Analysis of the Boundaries of Zero-Knowledge Proof and Technologically Mediated Authentication.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 21-29.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3tZ


Image credit: PhOtOnQuAnTiQu, via flickr

Abstract

Zero-knowledge proof serves as the fundamental basis for technological concepts of trust. The most familiar applied solution of technological trust is authentication (human-to-machine and machine-to-machine), most typically a simple password scheme. Further, by extension, much of society-generated knowledge presupposes the immutability of such a proof system when ontologically considering (a) the verification of knowing and (b) the amount of knowledge required to know. In this work, I argue that the zero-knowledge proof underlying technological trust may cease to be viable upon realization of partial transhumanism in the form of embedded nanotechnology. Consequently, existing normative social components of knowledge—chiefly, verification and transmission—may be undermined. In response, I offer recommendations on potential society-centric remedies in partial transhumanistic technologically mediated realities with the goal of preserving technological trust.

Password-based authentication features prominently in daily life. For many of us, authentication is a ritual repeated many times on any given day as we enter a username and password into various computing systems. In fact, research (Florêncio & Herley, 2007; Sasse, Steves, Krol, & Chisnell, 2014) revealed that we, on average, enter approximately eight different username and password combinations as many as 23 times a day. The number of times a computing system authenticates to another system is even more frequent. Simply put, authentication is normative in modern, technologically mediated life.

Indeed, authentication has been the normal modality of establishing trust within the context of technology (and, by extension, technologically mediated knowledge) for several decades. Over the course of these decades, researchers have uncovered a myriad of flaws in specific manifestations of authentication—weak algorithms, buggy software, or even psychological and cognitive limits of the human mind. Upon closer inspection, one can surmise that the philosophy associated with passwords has not changed. Authentication continues to operate on the fundamental paradigm of a secret, a knowledge-prover, and a knowledge-verifier. The epistemology related to password-based authentication—how the prover establishes possession of the secret such that the verifier can trust the prover without the prover revealing the secret—presents a future problem.

A Partial Transhuman Reality

While some may consider transhumanism to be the province of science fiction, others such as Kurzweil (2005) argue that the merging of Man and Machine has already begun. Of notable interest in this work is partial-transhumanist nanotechnology or, in simple terms, the embedding of microscopic computing systems in our bodies. Such nanotechnology need not be fully autonomous but typically does include some computational sensing ability. The most advanced examples are the nanomachines used in medicine (Verma, Vijaysingh, & Kushwaha, 2016). Nevertheless, such nanotechnology represents the blueprint for rapid advancement. In fact, research is well underway on using nanomachines (or nanites) for enhanced cognitive computations (Fukushima, 2016).

At the crossroads of partial transhumanism (nanotechnology) and authentication there appears to be a deeper problem. In short, partial-transhumanism may obviate the capacity for a verifier to trust whether a prover, in truth, possesses a secret. Should a verifier not be able to trust a prover, the entirety of authentication may collapse.

Much research exists investigating the mathematical, psychological, and technological bases for authentication. There has been little philosophical exploration of authentication, however. Work such as that of Qureshi, Younus, and Khan (2009) developed a general philosophical overview of password-based authentication but largely focused on developing a philosophical taxonomy to overlay modern password technology. The literature extending Qureshi et al. builds exclusively upon the strictly technical side of password-based authentication, ignoring the philosophical.

Accordingly, the purpose of this work is to describe the concepts directly linked to modern technological trust in authentication and demonstrate how, in a partial transhumanist reality, the concepts of zero-knowledge proof may cease to be viable. Towards this end, I will describe the conceptual framework underlying the operational theme of this work. Then, I explore the abstraction of technological trust as it relates to understanding proof of knowledge. This understanding of where trust fits into normative social epistemology will inform the subsequent description of the problem space. After that, I describe the conceptual architecture of zero-knowledge proofs, which serve as the pillars of modern authentication, and how transhumanism may adversely impact them. Finally, I will present recommendations on possible society-centric remedies in both partial transhumanistic as well as full transhumanistic technologically mediated realities with the goal of preserving technological trust.

Conceptual Framework

Establishing a conceptual framework before delving too far into building the case for trust ceasing to be viable in a partial transhumanist reality will permit a deeper understanding of the issue at hand. Such a frame of reference must necessarily include a discussion of how technology inherently mediates our relationship with other humans and technologies. Put another way, technologies are unmistakably involved in human subjectivity while human subjectivity forms the concept of technology (Kiran & Verbeek, 2010). This presupposes a grasp of the technological abstraction, though.

Broadly, technology in the context of this work is taken to mean qualitative (abstract) applied science as opposed to practical or quantitative applied science. This definition aligns closely with recent discussions on technology by Scalambrino (2016) and with the body of work by Heidegger and Plato. In other words, technology should be understood as those modalities that facilitate progress relative to socially beneficial objectives. Specifically, we are concerned with the knowledge modality as opposed to discrete mechanisms, objects, or devices.

What is more, the adjoining of technology, society, and knowledge is a critical element in the conceptual framework for this work. Technology is no longer a single-use, individualized object. Instead, technology is a social arbiter that has grown to be innate to what Ihde (1990) related as a normative human gestalt. While this view contrasts with views such as that offered by Feenberg (1999), the two are not necessarily exclusive.

Further, we must establish the component of our conceptual framework that evidences what it means to verify knowledge. One approach is a scientific model that procedurally quantifies knowledge within a predefined structure. Given the technological nature of this work, such may be inescapable, at least as a cognitive bias. More abstractly, though, verification of knowledge is conducted by inference, whether by the individual or across social collectives. The mechanism of inference, in turn, can be expressed in proof. Relatedly, another component in our conceptual framework corresponds to the amount of knowledge necessary to demonstrate knowing. As I discuss later, the amount of knowing is either full or limited: that is, proof with knowledge or proof without knowledge.

Technological Trust

The connection between knowledge and trust has a strong history of debate in the social epistemic context. This work is not intended to directly add to the debate surrounding trust. However, recognition of the debate is necessary to develop the bridge connecting trust and zero-knowledge proofs before moving on to zero-knowledge proof and authentication. Further, conceptualizing technological trust permits the construction of a foundation for the central proposition in this work.

To the point, Simon (2013) argued that knowledge relies on trust. McCraw (2015) extended this claim by establishing four components of epistemic trust: belief, communication, reliance, and confidence. These components are further grouped into epistemic (belief and communication) as well as trust (reliance and confidence) conditionals (2015). Trust, in this context, exemplifies the social aspect of knowledge insofar as we do not directly experience trust but hold trust as valid because of the collective position of validity.

Furthermore, Simmel (1978) perceived trust to be integral to society. That is, trust, as a knowledge construct, exists in many disciplines and, per Origgi (2004), permeates our cognitive existence. Additionally, there is an argument to be made that, by using technology, we implicitly place trust in such technology (Kiran & Verbeek, 2010). Nonetheless, trust we do.

Certainly, part of such trust is due to the mediation provided by our ubiquitous technology. As well, trust in technology and trust from technology are integral functions of modern social perspectives. On the other hand, we must be cautious in understanding the conditions that lead to technological trust. Work by Ihde (1979; 1990) and others has suggested that technological trust stems from our relation to the technology. Perhaps closer to transhumanism, Lévy (1998) offered that such trust is more associated with technology that extends us.

Technology that extends human capacity is a principal abstraction. As well, concomitant to technological trust is knowledge. While the conceptual framework for this work includes verification of knowledge as well as the amount of knowledge necessary to evidence knowing, there is a need to include knowledge proofs in the discourse.

Zero-Knowledge Proof

Proof of knowledge is a logical extension of the discussion of trust. Where trust can be thought of as the mechanism through which we allow technology to mediate reality, proof of knowledge is how we come to trust specific forms of technology. In turn, proof of knowledge—specifically, zero-knowledge proof—provides a foundation for trust in technological mediation in the general case and technological authentication in the specific case.

The Nature of Proof

The construct of proof may adopt different meanings depending upon the enveloping context. In the context of this work, we use the operational meaning provided by Pagin (1994). In other words, a proof is established during the process of validating the correctness of a proposition. Furthermore, for any proof to be perceived as valid, it must demonstrate elements of completeness and soundness (Pagin, 1994; 2009).
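
To make these two properties concrete, they can be stated in conventional interactive-proof notation for a prover P, a verifier V, and a set L of true propositions. The formulation below is a sketch in standard textbook notation rather than Pagin’s own; the error bounds ε and δ are the usual convention and belong to the illustration, not to the source.

```latex
% Completeness: an honest prover convinces the verifier of any true proposition.
\forall x \in L:\quad \Pr\big[\langle P, V\rangle(x) = \mathrm{accept}\big] \;\ge\; 1 - \varepsilon

% Soundness: no prover, however dishonest, convinces the verifier of a false
% proposition, except with small probability.
\forall x \notin L,\ \forall P^{*}:\quad \Pr\big[\langle P^{*}, V\rangle(x) = \mathrm{accept}\big] \;\le\; \delta
```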

There is, of course, a larger discourse on the epistemic constraints of proof (Pagin, 1994; Williamson, 2002; Marton, 2006). Such lies outside the scope of this work, however, as we are not concerned with whether proof can be offered for knowledge but rather with how proof occurs. In other words, we are interested in the mechanism of proof. Thus, for our purposes, we presuppose that proof of knowledge is possible and is so through two possible operations: proof with knowledge and proof without knowledge.

Proof with Knowledge

A consequence of a typical proof system is that all involved parties gain knowledge. That is, if I know x exists in a specific truth condition, I must present all relevant premises so that you can reach the same conclusion. Thus, the proposition is not only true or false to us both equally but also the means of establishing such truth or falsehood is transparent. This is what can be referred to as proof with knowledge.

In most scenarios, proof with knowledge is a positive mechanism. That is, the parties involved mutually benefit from the outcome. Mathematics and logic are primary examples of this proof state. However, when considering the case of technological trust in the form of authentication, proof with knowledge is not desirable.

Proof Without Knowledge

Imagine that you know that p is true. Further, you wish to demonstrate to me that you know this without revealing how you came to know or what it is exactly that you know. In other words, you wish to keep some aspect of the knowledge secret. I must validate that you know p without gaining any knowledge. This is the second state of proof, known as zero-knowledge proof, and it forms the basis for technological trust in the form of authentication.

Goldwasser, Micali, and Rackoff (1989) defined zero-knowledge proofs as a formal, systematic approach to validating the correctness of a proposition without communicating additional knowledge. “Additional” in this context can be taken to mean knowledge other than the proposition itself. An important aspect is that the proposition originates with a verifier entity as opposed to a prover entity. In response to the proposition to be proven, the prover completes an action without revealing any knowledge to the verifier other than the knowledge that the action was completed. If the proposition is probabilistically true, the verifier is satisfied. Note that the verifier and prover entities can be in the form of machine-to-human, human-to-human, or machine-to-machine.
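
To make the interactive pattern concrete, the following is a minimal sketch, in Python, of a Schnorr-style identification protocol, a classic zero-knowledge proof of knowledge of a discrete logarithm. The sketch is mine, not drawn from the article or directly from Goldwasser, Micali, and Rackoff; the tiny parameters (p = 101) and the variable names are illustrative assumptions chosen for readability and provide no security.

```python
# Toy Schnorr-style identification: the prover convinces the verifier it knows
# secret_x such that public_y = g^secret_x (mod p), without revealing secret_x.
# Illustrative only -- the parameters are far too small for real use.
import random

p = 101          # small prime modulus (toy value)
g = 2            # generator of the multiplicative group mod p
q = p - 1        # order of that group

secret_x = 37                     # the prover's secret
public_y = pow(g, secret_x, p)    # published; does not practically reveal secret_x

def prover_commit():
    """Prover commits to a random nonce r by sending t = g^r mod p."""
    r = random.randrange(1, q)
    return r, pow(g, r, p)        # r stays private; t is sent

def verifier_challenge():
    """Verifier issues an unpredictable challenge."""
    return random.randrange(1, q)

def prover_respond(r, c):
    """Response blends the nonce and the secret; s alone reveals neither."""
    return (r + c * secret_x) % q

def verifier_check(t, c, s):
    """Accept iff g^s == t * y^c (mod p): evidence the prover knows secret_x."""
    return pow(g, s, p) == (t * pow(public_y, c, p)) % p

r, t = prover_commit()
c = verifier_challenge()
s = prover_respond(r, c)
print("verifier convinced:", verifier_check(t, c, s))   # True, yet secret_x is never sent
```

The verifier learns only that the check passed; the secret never crosses the channel, and the random nonce masks it in the response.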

Zero-knowledge proofs are the core of technological trust and, accordingly, authentication. While discrete instances of authentication exist practically outside of the social epistemic purview, the broader theory of authentication is, in fact, a socially collective phenomenon. That is, even in the abstract, authentication is a specific case for technologically mediated trust.

Authentication

The zero-knowledge proof abstraction translates directly into modern authentication modalities. In general, authentication involves a verifier issuing a request to prove knowledge and a prover establishing possession of a secret to the verifier. Thus, the ability to provide such proof in a manner that is consistent with the verifier’s request is technologically sufficient to authenticate (Syverson & Cervesato, 2000). However, there are subtleties within the authentication zero-knowledge proof that warrant discussion.
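
In practice, this verifier-request and prover-response pattern commonly takes the form of a challenge-response exchange. The sketch below is a hypothetical, minimal shared-secret version in Python; strictly speaking it is not a zero-knowledge proof, since the verifier also holds the secret, but it illustrates the central property that the secret itself is never transmitted. The names and the example secret are my assumptions, not drawn from the article.

```python
# Minimal challenge-response sketch: the prover demonstrates possession of a
# shared secret by keying an HMAC over a fresh verifier-chosen nonce.
import hmac, hashlib, secrets

shared_secret = b"correct horse battery staple"   # known to both parties (toy value)

def verifier_issue_challenge():
    """Verifier sends a fresh random nonce -- the request to prove knowledge."""
    return secrets.token_bytes(16)

def prover_answer(challenge):
    """Prover proves possession of the secret without transmitting it."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verifier_check(challenge, answer):
    """Verifier recomputes the expected answer and compares in constant time."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, answer)

nonce = verifier_issue_challenge()
response = prover_answer(nonce)
print("authenticated:", verifier_check(nonce, response))   # True; the secret never travels
```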

Authentication, or being authenticated, implies two technologically mediated realities. First, the authentication process relies upon the authenticating entity (i.e., the prover) possessing a secret exclusively. The mediated reality for both the verifier and the prover is that to be authenticated implies an identity. In simple terms, I am who I claim to be based on (a) exclusive possession of the secret; and (b) the ability to sufficiently demonstrate such through the zero-knowledge proof to the verifier. Likewise, the verifier is identified to the prover.

Secondly, authentication establishes a general right of access for the prover based on, again, possession of an exclusive secret. Consequently, there is a technological mediation of what objects are available to the prover once authenticated (i.e., all authorized objects) or not authenticated (i.e., no objects). Thus, the zero-knowledge proof is a mechanism of associating the prover’s identity with a set of objects in the world and facilitating access to those objects. That is to say, once authenticated, the identity has operational control within the corresponding space over linked objects.

Normatively, authentication is a socially collective phenomenon despite individual authentication relying upon exclusive zero-knowledge proof (Van Der Meyden & Wilke, 2007). Principally, authentication is a means of interacting with other humans, technology, and society at large while maintaining trust. However, if authentication is a manifestation of technological trust, one must wonder if transhumanism may affect the zero-knowledge proof abstraction.

Transhumanism

More (1990) described transhumanism as a philosophy that embraces the profound changes to society and the individual brought about by science and technology. There is strong debate as to when such change will occur, although most futurists argue that technology has already begun to transcend the breaking point of explosive growth. Technology in this context aligns with the conceptual framework of this work. As well, there is agreement in the philosophical literature with the idea of such technological expansion (Bostrom, 1998; More, 2013).

Furthermore, transhumanism exists in two forms: partial transhumanism and full transhumanism (Kurzweil, 2005). This work is concerned with partial transhumanism exclusively. Partial transhumanism, in turn, is inclusive of three modalities. According to Kurzweil (2005), these modalities are (a) technology sufficient to manipulate human life genetically; (b) nanotechnology; and (c) robotics. In the context of this work, I am interested in the potentiality of nanotechnology.

Briefly, nanotechnology exists in several forms. The form central to this work involves embedding microscopic machines within human biology. These machines can perform any number of operations, including augmenting existing bodily systems. Along these lines, Vinge (1993) argued that a by-product of technological expansion will be a monumental increase in human intelligence. Although there are a variety of mechanisms by which technology will amplify raw brainpower, nanotechnology is a forerunner in the minds of Kurzweil and others.

What is more, the computational power of nanites is measurable and predictable (Chau et al., 2005; Bhore, 2016). The amount of human intellectual capacity projected to result from nanotechnology may be sufficient to impart hyper-cognitive or even extrasensory abilities. With such augmentation, the human mind will be capable of computational decision-making well beyond existing technology.

While the notion of nanites embedded in our bodies, augmenting various biomechanical systems to the point of precognitive awareness of zero-knowledge proof verification, may strike some as science fiction, there is growing precedent. Existing research in the field of medicine demonstrates that at least partially autonomous nanites have a grounding in reality (Huilgol & Hede, 2006; Das et al., 2007; Murday, Siegel, Stein, & Wright, 2009). Thus, envisioning a near future where more powerful and autonomous nanites are available is not difficult.

Technological Trust in Authentication

The purpose of this work was to describe technological trust in authentication and demonstrate how, in a future partial transhumanist reality, the concepts of zero-knowledge proof may cease to be viable. Towards that end, I examined technological trust in the context of how and why such trust is established. Further, knowledge proofs were discussed with an emphasis on proofs without knowledge. Such led to an overview of authentication and, subsequently, transhumanism.

Based on the analysis so far, the technological trust afforded by such proof appears to be no longer feasible once embedded nanotechnology is introduced into humans. Nanite augmented cognition will result in the capability for a knowledge-prover to, on demand, compute knowledge sufficient to convince a knowledge-verifier. Outright, such a reality breaks the latent assumptions that operationalize the conceptual framework into related technology. That is, once the knowledge-verifier cannot trust that the knowledge is known by the prover, a significant future problem arises.

Unfortunately, the fields of computer science and computer engineering do not historically plan well for paradigm-shifting innovations. Such is exacerbated when the paradigm shift has rapid onset after a long ramp-up time, as is the case with the technological singularity. More specifically, partial transhumanism as considered in this work may have unforeseen effects beyond the scope of the fields that created the technology in the first place. The inability to handle rapid shifts is largely related to these fields posing “what is” type questions.

Similarly, the Collingridge dilemma tells us that, “…the social consequences of a technology cannot be predicted early in the life of the technology” (1980, p. 11). Thus, adequate preparation for the eventual collapse of zero-knowledge proof requires asking “what ought to be.” Such a question is a philosophical question. As it stands, recognition of social epistemology as an interdisciplinary field already exists (Froehlich, 1989; Fuller, 2005; Zins, 2006). More still, there is a precedent for philosophy informing the science of technology (Scalambrino, 2016) and assembling the foundation of future-looking paradigm shifts.

Accordingly, a recommendation is for social epistemologists and technologists to jointly examine modifications to the abstract zero-knowledge proof such that the proof is resilient to nanite-powered knowledge computation. In conjunction, there may be a benefit in attempting to conceive of a replacement proof system that also harnesses partial-transhumanism for the knowledge-verifier in a manner commensurate with any increase in capacity for the knowledge-prover. Lastly, a joint effort may be able to envision a technologically mediated construct that does not require proof without knowledge at all.

References

Bhore, Pratik Rajan. “A Survey of Nanorobotics Technology.” International Journal of Computer Science & Engineering Technology 7, no. 9 (2016): 415-422.

Bostrom, Nick. “Predictions from Philosophy? How Philosophers Could Make Themselves Useful.” 1998. http://www.nickbostrom.com/old/predict.html.

Chau, Robert, Suman Datta, Mark Doczy, Brian Doyle, Ben Jin, Jack Kavalieros, Amlan Majumdar, Matthew Metz and Marko Radosavljevic. “Benchmarking Nanotechnology for High-Performance and Low-Power Logic Transistor Applications.” IEEE Transactions on Nanotechnology 4, no. 2 (2005): 153-158.

Collingridge, David. The Social Control of Technology. New York: St. Martin’s Press, 1980.

Das, Shamik, Alexander J. Gates, Hassen A. Abdu, Garrett S. Rose, Carl A. Picconatto, and James C. Ellenbogen. “Designs for Ultra-Tiny, Special-Purpose Nanoelectronic Circuits.” IEEE Transactions on Circuits and Systems I: Regular Papers 54, no. 11 (2007): 2528-2540.

Feenberg, Andrew. Questioning Technology. London: Routledge, 1999.

Florêncio, Dinei and Cormac Herley. “A Large-Scale Study of Web Password Habits.” In WWW ’07: Proceedings of the 16th International Conference on World Wide Web, 657-666. New York: ACM, 2007.

Froehlich, Thomas J. “The Foundations of Information Science in Social Epistemology.”  In System Sciences, 1989. Vol. IV: Emerging Technologies and Applications Track, Proceedings of the Twenty-Second Annual Hawaii International Conference, 4 (1989): 306-314.

Fukushima, Masato. “Blade Runner and Memory Devices: Reconsidering the Interrelations between the Body, Technology, and Enhancement.” East Asian Science, Technology and Society 10, no. 1 (2016): 73-91.

Fuller, Steve. “Social Epistemology: Preserving the Integrity of Knowledge About Knowledge.” In Handbook on the Knowledge Economy, edited by David Rooney, Greg Hearn and Abraham Ninan, 67-79. Cheltenham, UK: Edward Elgar, 2005.

Goldwasser, Shafi, Silvio M. Micali and Charles Rackoff. “The Knowledge Complexity of Interactive Proof Systems.” SIAM Journal on Computing 18, no. 1 (1989): 186-208.

Huilgol, Nagraj and Shantesh Hede. “‘Nano’: The New Nemesis of Cancer.” Journal of Cancer Research and Therapeutics 2, no. 4 (2006): 186-195.

Ihde, Don. Technics and Praxis. Dordrecht: Reidel, 1979.

Ihde, Don. Technology and the Lifeworld. From Garden to Earth. Bloomington: Indiana University Press, 1990.

Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York: Penguin Books. 2005.

Lévy, Pierre. Becoming Virtual. Reality in the Digital Age. New York: Plenum Trade, 1998.

Marton, Pierre. “Verificationists Versus Realists: The Battle Over Knowability.” Synthese 151, no. 1 (2006): 81-98.

More, Max. “Transhumanism: Towards a Futurist Philosophy.” Extropy, 6 (1990): 6-12.

More, Max. “The Philosophy of Transhumanism.” In The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, edited by Max More and Natasha Vita-More. Oxford: John Wiley & Sons, 2013. doi:10.1002/9781118555927.ch1.

Murday, J. S., R. W. Siegel, J. Stein, and J. F. Wright. “Translational Nanomedicine: Status Assessment and Opportunities.” Nanomedicine: Nanotechnology, Biology and Medicine 5, no. 3 (2009): 251-273. doi:10.1016/j.nano.2009.06.001.

Origgi, Gloria. “Is Trust an Epistemological Notion?” Episteme 1, no. 1 (2004): 61-72.

Pagin, Peter. “Knowledge of Proofs.” Topoi 13, no. 2 (1994): 93-100.

Pagin, Peter. “Compositionality, Understanding, and Proofs.” Mind 118, no. 471 (2009): 713-737.

Qureshi, M. Atif, Arjumand Younus and Arslan Ahmed Khan. “Philosophical Survey of Passwords.” International Journal of Computer Science Issues 1 (2009): 8-12.

Sasse, M. Angela, Michelle Steves, Kat Krol, and Dana Chisnell. “The Great Authentication Fatigue – And How To Overcome It.” In Cross-Cultural Design: 6th International Conference, CCD 2014, Held as Part of HCI International 2014, Heraklion, Crete, Greece, June 22-27, 2014, Proceedings, edited by P. L. P. Rau, 228-239. Cham, Switzerland: Springer International Publishing, 2014.

Scalambrino, Frank. Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation. London; New York: Rowman & Littlefield International, 2016.

Simmel, Georg.  The Philosophy of Money. London: Routledge and Kegan Paul, 1978.

Simon, Judith. “Trust, Knowledge and Responsibility in Socio-Technical Systems.” University of Vienna and Karlsruhe Institute of Technology, 2013. https://www.iiia.csic.es/en/seminary/trust-knowledge-and-responsibility-socio-technical-systems

Syverson, Paul and Iliano Cervesato. “The Logic of Authentication Protocols.” In Proceeding FOSAD ’00 Revised versions of lectures given during the IFIP WG 1.7 International School on Foundations of Security Analysis and Design on Foundations of Security Analysis and Design: Tutorial Lectures, 63-136. London: Springer-Verlag, 2001.

Williamson, Timothy. Knowledge and its Limits. Oxford University Press on Demand, 2002.

Van Der Meyden, Ron and Thomas Wilke. “Preservation of Epistemic Properties in Security Protocol Implementations.” In Proceedings of the 11th Conference on Theoretical Aspects of Rationality and Knowledge (2007): 212-221.

Vinge, Verner. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, Proceedings of a Symposium Cosponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute, Westlake, Ohio, March 30-31, 1993. NASA Conference Publication 10129 (1993): 11-22.

Verma, S., K. Vijaysingh and R. Kushwaha. “Nanotechnology: A Review.” In Proceedings of the Emerging Trends in Engineering & Management for Sustainable Development, Jaipur, India, 19–20 February 2016.

Zins, Chaim. “Redefining Information Science: From ‘Information Science’ to ‘Knowledge Science’.” Journal of Documentation 62, no. 4, (2006). 447-461.


