Editor’s Note: Beginning in 2012, I asked Steve Fuller to provide a Christmas greeting—or end-of-year reflection. As the SERRC grew, I invited contributions from our members. In this tradition, and at this time of resolutions, Steve asks us to consider the post-truth condition and the “metaverse”—a “hypothesized iteration of the Internet, supporting persistent online 3-D virtual environments through conventional personal computing, as well as virtual and augmented reality headsets” (Wikipedia). In the metaverse, we are left to struggle with questions regarding how we will come to understand knowledge as more than just a game of seeking advantage and adherents.
My great thanks to the SERRC contributors and readers over the last decade. We have accomplished much by bucking academic convention. In so doing, we realize knowledge together … [please read below the rest of the article].
Fuller, Steve. 2021. “Is the Metaverse the New Metaphysics?” Social Epistemology Review and Reply Collective 10 (12): 68-72. https://wp.me/p1Bfg0-6oL.
The PDF of the article gives specific page numbers.
In a year with many twists and turns, the significance of one event may outlast its hype, namely, Mark Zuckerberg’s rebranding of Facebook as ‘Meta’, to coincide with the launch of the ‘Metaverse’ as the main platform for its future development. The event, which took place at the end of October 2021, was notable in two ways from a business standpoint. First, it mirrored Google’s 2015 self-transcendence into ‘Alphabet’, especially with the comparably more abstract name, which also captures Facebook’s intent to a tee. After all, ‘Alphabet’ made it plain what many had long suspected Google to be, namely, the grandest of General Artificial Intelligence projects, in which user data serve as inputs in search of the ultimate algorithm. I shall discuss the equivalent Facebook project below. Second, the christening of ‘Meta’ follows an extended period of bad publicity and lawsuits for Facebook, mainly due to the company’s inability to sort out whether it’s simply a publishing platform or a publisher in its own right. One imagines that The Zuck has a Hegelian in marketing who suggested that the ‘Metaverse’ could overcome the need to choose between the two options by obliterating the distinction. And it might work.
Publisher and Platform
Historically, it had been relatively easy to tell the difference between a publisher and a platform. It was modelled on the difference between an enterprise and the space it rents to set up business. That distinction was easy to draw because the space existed first—typically already owned by someone—and then an entrepreneur came along who, by the miracle of exchange, persuaded the owner that it was to their mutual advantage for the one’s enterprise to occupy the other’s space. It would be difficult to overestimate the significance of this scenario for the history of capitalism. Indeed, it may be classical political economy’s Covenant moment. However, the history of Facebook turns that scenario inside out. The platform came about because Harvard student Zuckerberg in 2003 wanted to turn various kinds of spontaneous judgements—e.g., who is ‘hotter’—into a market, subject to mutual and aggregate inspection to inform further judgements. The mutual inspection benefits the platform users, while the aggregate inspection informs the platform owners, who in turn sell that data for marketing purposes.
In short, Facebook’s founding moment came when it commodified judgements over a wide range of previously unrelated domains with a publicly available Like/Dislike toggle. It effectively invented an automatic price mechanism, thus creating the capacity to generate markets for virtually anything. The Holy Grail of free market capitalism, ‘spontaneous self-organization’, was converted from a quasi-mysterious ‘emergent’ collective property of exchange relations (aka ‘invisible hand’) into a reliable social technology, from which a company might profit—in Facebook’s case, handsomely. But Facebook has imposed a heavy toll on the free marketeers. They have been forced to give up their self-understanding to realize their dreams. Unlike the early computer modelers, who explicitly denied the premise of freedom in human affairs, the social media giants have always aimed to model what freedom looks like in practice. Put in the somewhat paradoxical language of German idealism, such a clear conceptualization of freedom’s utopia laid the groundwork for its full determination. More bluntly: You wanted ‘Freedom’; well, here’s what it looks like, made to order!
In this respect, Facebook had been anticipated by Wikipedia, launched in 2001 by Jimmy Wales, a former finance student and devotee of Friedrich Hayek. What Wales had seen, perhaps more fundamentally than Zuckerberg, was that as computer technology was becoming the infrastructure of the global economy, ordinary people took that environment as the natural platform for self-expression. This was largely due to that infrastructure suddenly delivering all sorts of information that cut against more ‘legacy’ channels of epistemic transmission, especially the state and academia—and the media they support. In such a world of relatively high levels of education, computer literacy and democracy, Wales adapted the participatory Wiki-style of platform to launch his compendium of human knowledge. It proved to be a quick and enduring success. No less than Cass Sunstein, the legal weathercock of our times, had already observed in 2007 that Wikipedia was being cited in US Supreme Court decisions more often than the Encyclopedia Britannica (Fuller 2018, 126).
Aggregators, Producers, and Post-Truth
Wikipedia paved the way for the epistemic levelling that characterizes what I have called our ‘post-truth condition’. However, Wikipedia’s own role has been complex, due mainly to the interaction effects of its three founding principles: ‘Neutral Point of View’ (NPOV), ‘No Original Research’ (NOR) and ‘Verifiability’. Taken together, they have served to popularize and exacerbate academia’s citation culture, one of its most regressive features. It’s regressive because, in the words of virtual reality pioneer Jaron Lanier (2013), it privileges ‘content aggregators’ over ‘content producers’. In other words, it is biased toward present uses, as if that were all that is worth exploiting from past effort. On this view, a citation is an authorized collection point for past efforts, regardless of source—and whatever else of value it might contain. Thus, academics routinely cite their colleagues as simply an insurance policy that allows them to offload responsibility if what they say turns out to be false. They rarely read the colleagues they’ve cited, let alone what those colleagues have cited. Nevertheless, this practice conforms to the pre-copyright use of ‘author’ to mean what we would now call ‘editor’ or even ‘compiler’. Indeed, copyright was developed in the late eighteenth century to extend the title ‘author’ to writers who had often been hired as anonymous workers by those called ‘authors’ (Fuller 2002, 98-105). In this respect, the US pulp fiction writer James Patterson, who openly admits farming out to others the writing of the works that bear his name, harks back to this earlier era.
Lanier’s concerns can be seen as replaying in a high-tech key these original struggles over the assignment of intellectual property. And they are far from being misplaced. Indeed, in their conceptual manifesto of what was soon realized as Google, Sergey Brin and Larry Page treated the academic citation culture (positively!) as a model for the future of the recently launched World Wide Web (Brin and Page 1998). Google then followed up on that initial vision by extending academia’s sense of ‘path dependency’ from what Robert Merton (1973) had first observed as the ‘principle of cumulative advantage’ (aka ‘Matthew Effect’) in the flow of organized knowledge to the ordering of search engine results. Thus, Google’s users are led to believe that the websites appearing at the top of search results are in some sense the most ‘authoritative’. To be sure, the Google search engine is programmed with an algorithm that contains some opaquely defined biases. This has led the US lawyer Frank Pasquale (2015) to speak of our living in a ‘Black Box Society’. Nevertheless, this ‘black box’ is no more opaque than the machinations of human ‘peer reviewers’ that serve to generate the highly cited academic texts first observed by Merton. You can begin to see why Karl Popper hated induction. It’s possible to exert power through enforced correlation even if the underlying causation remains obscure to those subject to its regime.
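The dynamic Merton described can be made vivid with a toy simulation—my own illustrative sketch of ‘cumulative advantage’ in general, not a model of Google’s actual ranking algorithm. Assume each new citation goes to a source with probability proportional to the citations it already has: small early differences then snowball into large final inequalities.

```python
import random

def simulate_matthew_effect(n_sources=50, n_citations=5000, seed=0):
    """Toy 'rich get richer' model: each citation is awarded with
    probability proportional to citations already held (plus 1,
    so unknown sources are not frozen out entirely)."""
    rng = random.Random(seed)
    counts = [0] * n_sources
    for _ in range(n_citations):
        weights = [c + 1 for c in counts]
        chosen = rng.choices(range(n_sources), weights=weights, k=1)[0]
        counts[chosen] += 1
    return sorted(counts, reverse=True)

counts = simulate_matthew_effect()
top_share = sum(counts[:5]) / sum(counts)
print(f"Top 5 of 50 sources hold {top_share:.0%} of all citations")
```

Under a uniform process the top five sources would hold roughly 10% of citations; under cumulative advantage they reliably hold far more, even though no source is intrinsically ‘better’ than any other—which is precisely the sense in which correlation-driven authority can come apart from underlying merit.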
To its credit, Wikipedia, which typically appears in the top three of Google searches, enables its users—at least in principle—to ‘hack’ the entries in a way that would address some of Lanier’s concerns. By consulting its ‘History’ and ‘Talk’ pages, Wikipedia’s users can retrace the work of all the ‘content producers’ behind an entry’s current form. Whether that knowledge will then enable users to alter the entry depends on Wikipedia’s own hierarchy of self-selecting editors, who prove their merit through the intensity, duration and efficacy of their efforts. (It looks like the angelic Ninth Sphere of Dante’s Paradise.) For better or worse, these ‘bottom-up’ gatekeepers define the normative bounds of acceptable knowledge on Wikipedia’s pages. In practice, the result has been a new kind of path dependency, one prescribed not by a lineage of similarly disciplined experts but by a mutually reinforced form of crowdsourcing. Yet the opacity of the process to ‘outsiders’ of the knowledge-producing system remains. Here one is reminded of The Who’s ‘Won’t Get Fooled Again’. To be sure, a consequence is that Wikipedia has grown and even systematized many topics—especially relating to science fiction, the gaming world and popular culture more generally—that have been ignored or minimized by more academically authorized knowledge sources. Indeed, it may even be providing the basis for what I have called ‘Protscience’, modelled on the great sixteenth century ‘reform’ of Christianity (Fuller 2010).
The Future of the Metaverse
All this bears on the prospects for the Metaverse because path dependency is effectively how ‘alternative realities’ are generated. A limited set of possibilities is actualized under a specific set of initial conditions that constrain everything that follows, relatively free of external interference—perhaps including any dissonance that might result from memory of the unactualized possibilities. It is a hermetically sealed world, over which complete ‘modal power’ is exercised (Fuller 2018). In principle, the path charted in this alternative reality may extend indefinitely—but it may also self-destruct. This last point helps to explain the moral force of the post-truth condition. It may be posed as a question: Does the power of the scientific establishment (and other authoritative forms of knowledge) depend simply on its ability to enroll members and sideline opponents as it relentlessly pursues its default trajectory? It was precisely this prospect that Popper detested in Kuhn’s worldview, whereby a scientific paradigm is allowed to monopolize a field of inquiry until the pile-up of unsolved puzzles makes further research unfeasible. Of course, a similar fate could befall Wikipedia’s own brand of epistemic expansionism, if it fails to meet its funding targets and needs to start charging editors and users.
My point here is that all dominant modes of reality start life as an alternative reality that acquires dominance over time by expanding its own terms of engagement, while ignoring—if not crowding out—those of its potential rivals and opponents. The Metaverse is the most ambitious project along these lines yet. It takes to the next level Google’s amplification of academia’s path-dependent approach to reality by rendering the metaphor of ‘gaming the system’ literal. More precisely, 2-D goes 3-D—and the implications are more serious. Instead of simply crowding out dissenting voices, the impending Metaverse—or ‘virtual real estate’, to put it more accurately and simply—might lead us to forget that we have historically possessed a distinct range of rights and duties as a result of our specifically physical nature. With that prospect in mind, it may be worth dusting off a copy of A Hacker Manifesto (Wark 2004) to read in the spirit of at least a vaccine—if not a cure—for the forces of ‘TechGnosis’ (Davis 1998) that are about to be unleashed.
Brin, Sergey and Lawrence Page. 1998. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” Computer Networks and ISDN Systems 30 (1-7): 107–17.
Davis, Erik. 1998. TechGnosis. San Francisco: Harmony Books.
Fuller, Steve. 2018. Post-Truth: Knowledge as a Power Game. London: Anthem Press.
Fuller, Steve. 2010. Science: The Art of Living. Durham UK: Acumen.
Fuller, Steve. 2002. Knowledge Management Foundations. Woburn MA: Butterworth-Heinemann.
Lanier, Jaron. 2013. Who Owns the Future? New York: Simon & Schuster.
Merton, Robert. 1973. The Sociology of Science. Chicago: University of Chicago Press.
Pasquale, Frank. 2015. The Black Box Society. Cambridge MA: Harvard University Press.
Wark, McKenzie. 2004. A Hacker Manifesto. Cambridge MA: Harvard University Press.