In October 2019, I was contacted by Michael Solana, Vice President of Peter Thiel’s Founders Fund, to participate in a new kind of counter-cultural festival called ‘Hereticon’. The COVID-19 pandemic held it up for more than two years, but in January 2022 it finally took place in Miami Beach. Originally, I offered the following seven heresies as potential topics for my talk, and all were received enthusiastically. However, only one could be chosen for actual presentation. It was first thought that I might debate the ‘Martyrs for Science’ heresy with an opponent. However, in the intervening period, the world changed, and I ended up doing ‘Must a Human Be an Ape?’, which was basically a stripped-down version of my recent SERRC piece. Nevertheless, the original seven heresies remain of interest, and I and/or others may take them up in the future. Enjoy!

Fuller, Steve 2022. “Seven Heresies.” Social Epistemology Review and Reply Collective 11 (2): 49-51. https://wp.me/p1Bfg0-6yF.
Do We Need Martyrs for Science?
Science made its biggest leaps in progress when it operated in an ‘ethics-free zone’. To be sure, many humans and animals were tortured and died in the process. But is the moral problem here that they suffered this fate or that they failed to be adequately recognized for their efforts? The universal presence of ‘institutional review boards’—at least in Western countries—assumes the former is the case. These boards can deem a research project too risky, notwithstanding its ability to attract consenting participants. But these strictures are an overreaction to the findings of the 1946 Nuremberg Trials: in seeking to avert the worst-case scenario reflected in Nazi scientific abuses, they fail to do justice to humanity’s spontaneous embrace of risk, which can be legitimately channelled for scientific purposes. Instead, what is needed is adequate recognition for people who knowledgeably and voluntarily agree to participate in risky experiments. This recognition requires that those who are unwilling to take such risks subsidize insurance policies for those willing to do so, since there’s a strong chance that the risk-averse will benefit from the outcomes of these risky experiments, even if they are negative and—more to the point—even if the risk-seekers themselves do not benefit. This could even be the basis for rewriting the social contract needed for the welfare state. And of course, this sort of policy would open the door to thinking in terms of ‘heroes of the (scientific) revolution’ and ‘martyrs for science’.
Is the World Younger Than We Think?
The surest route to overturning Darwin’s theory of evolution would be to undermine the methods used to measure the age of the Earth, which opens up the more general question of how we measure the age of the universe. Changes in both measuring techniques and measurements have resulted in estimates that differ by orders of magnitude. Moreover, there is currently considerable dispute among cosmologists over how the Hubble constant should be calibrated to measure the age of the universe. In this respect, ‘Young Earth Creationists’ may be on to something when they question the age of fossils. In any case, what is at issue is not the sequence of events in evolution, but simply the amount of time it took to achieve them. Darwinism is persuasive just as long as people believe that a long enough time has passed to allow a fundamentally undirected process to generate the level of stable complexity associated with forms of life. However, the shorter the timespan that is allowed for evolution, the less likely that the process could have happened without some ‘intelligent design’ guiding it to the ‘right’ outcomes.
Death as a Way of Life?
Mortality has been traditionally seen as definitive of the human condition, which in turn has informed moral ideas about the ‘sanctity’ and ‘preciousness’ of human life. But in a world where people are living longer—and science may even be on the verge of enabling us to live indefinitely—death will increasingly come to be seen as a lifestyle choice and perhaps even a moral obligation undertaken on behalf of future generations. At stake here is not only the environmental burden placed on a planet housing an indefinitely growing population but also the loss of opportunities to make fresh starts and move in new directions. Such are the great liabilities of allowing people to live very long healthy lives: Each successive generation will be burdened by the presence of fully competent elders who make it difficult—if not impossible—for the young to rewrite if not outright ignore the past for their own purposes. In this context, death-oriented practices such as suicide, euthanasia and even murder will come to be seen with greater moral leniency.
The End of the Social Sciences?
The social sciences enjoy a peculiar relationship to their subject matter, human beings. On the one hand, these disciplines are supposed to reflect the uniqueness of humanity, which includes a sense of freedom, autonomy and introspection that is not reducible to the categories of the natural sciences. On the other hand, the social sciences are also predicated on the idea that people cannot speak for themselves but require experts in these fields to collect and organize their experience in ways that form knowledge and inform policy. Both of these countervailing assumptions are under severe strain now and are bound to demolish the social sciences sometime in this century. On the one hand, humanity’s uniqueness is being challenged by a more sophisticated understanding of the natural world, which is also reflected in people’s greater attachment to animals and the environment. On the other hand, people generally feel that they have more access to information than ever before, and hence want more opportunities to exercise it: that is, direct democracy over representative democracy. This could end up undermining most social science methodologies, which presume that people are docile subjects waiting to be studied. We already see this happening in the case of people refusing to give honest answers in public opinion surveys and polls.
Must a Human be an Ape?
Until the mid-eighteenth century, when humans came to be regularly identified as Homo sapiens, humans were generally thought to be animals but not necessarily related so directly to apes. Evidence for this can be found in the prevalence of human-like attributions to all sorts of animals (nowadays derided as ‘anthropomorphism’). What this reflects is that ‘human’ has traditionally been a normative category—in other words, it has been about a set of morally and intellectually relevant qualities that in principle any expressive being could have. Usually one had to be educated to become a human; hence the ‘humanities’ was originally the training of the whole person: how to speak, how to behave, etc. In a world that is increasingly open to quite sophisticated ideas of ‘animal rights’ and even ‘android rights’, we will reach a point at which something comparable to the Turing Test will be needed to judge who does and does not count as human. Such a test would be blind to the material composition of the candidate in question. This means that in principle an animal or a machine may pass as a human whereas members of Homo sapiens may not. Thus, ‘dehumanization’ would acquire a whole new meaning—yet it would be in the context of ‘expanding the moral circle’.
Is Life a Blessing or a Curse?
Perhaps it is not surprising that the Bible doesn’t fixate on the carbon basis for life. However, it does say that the first humans were created by God blowing some air into clay, which suggests a silicon-based origins story. Cybernetics founder Norbert Wiener explored this idea in his final book, God and Golem, Inc., mainly to provide a moral framework for understanding artificial intelligence research. Although Wiener himself doesn’t spell out its implications for the normal carbon-based story of life, it may well be that life only became carbon-based after the Fall, which would explain all the carbon-based problems the world has faced, ranging from our own mortality to our devastation of the planet. Furthermore, Michael Behe, a major proponent of ‘intelligent design theory’ (aka scientific creationism), argues in his recent book, Darwin Devolves, that, genetically speaking, what evolutionists call ‘adaptation’ appears from a molecular-biological standpoint like the systematic disabling of the organism’s full potential. Perhaps biological science’s criteria of ‘species fitness’ mistake liabilities for virtues—as might be expected of ‘fallen’ creatures.
Anyone for Cognitive Agriculture?
Unless quantum computing makes a quantum leap into concrete reality, an inconvenient truth about the relationship between computer processing and brain processing is that the latter is vastly more energy-efficient as a medium for the conduct of thought—even taking into account the brain’s well-known liabilities. The best AI systems, such as IBM’s Watson, use vast amounts of energy to outperform the brain in a relatively narrow range of the domains in which the brain normally operates, albeit fallibly. In an ecologically conscious world, it follows that some significant funding should be diverted from the development of silicon-based AIs to the cultivation of embryonic stem cells so that they can develop the neural capacity that would enable them to merge into an artificial brain functioning as an energy-efficient organic computer, à la the ‘precogs’ in Philip K. Dick’s Minority Report. The harvesting of stem cells for this purpose would be called ‘cognitive agriculture’. It may be the best long-term solution to lessen humanity’s carbon footprint on the planet.
Steve Fuller, S.W.Fuller@warwick.ac.uk, Auguste Comte Chair in Social Epistemology, Department of Sociology, University of Warwick.