Fuller, Steve. 2013. “Against consensus — but to what end? Reply to Riggio.” Social Epistemology Review and Reply Collective 2 (3): 25-31.
Please refer to:
- Fuller, Steve. 2012. “Social Epistemology: A Quarter-Century Itinerary.” Social Epistemology 26 (3-4): 267-283.
- Riggio, Adam. 2012. “A process of truth: A reply to Steve Fuller’s essay on the silver anniversary of Social Epistemology.” Social Epistemology Review and Reply Collective 1 (9): 46-52.
Thanks to Adam Riggio (2012) for taking the time to develop his own position in relation to my own. While we agree on many specific matters, I am unsure that our conclusions point in the same direction. However, let me start by crediting Riggio for appreciating the long-standing anti-consensualism of my social epistemology. This has put me at odds with most people — excepting the Popperians, of course — whom one might consider social epistemologists. These include: the Habermasian deliberative democrats who strive for a non-coercive normative consensus, the Foucaultian disciplinarians who take the existence of any normative consensus as ipso facto coercive, and last (and perhaps least) the analytic philosophers whose preoccupations with trust, expertise, and authority are all in aid of forging some normatively appropriate consensus that corresponds to whatever passes for political correctness. Moreover, recently even rhetoricians, from whom one might expect a more instinctively agonistic approach to epistemic matters, have rallied around a version of analytic social epistemology that looks to expert consensus as a safe haven in the stormy debates over the status of, say, global climate change or Darwinian evolution (Ceccarelli 2011).
That having been said, I do not believe that Riggio and I endorse anti-consensualism for the same reasons or even in quite the same spirit. Riggio appears to equate anti-consensualism with a kind of scientific pluralism, if not epistemic relativism more generally. In his mind, it seems, consensualism is associated with universalism of the sort that, say, the logical positivists were trying to pursue as the ideal of “unified science”. He thinks that when I oppose consensualism, I am opposing this position as well. This is wrong. As a matter of fact, I am quite sympathetic to the positivists’ universalist project, especially before it provided the basis for the analytic philosophy of science establishment in the United States in the 1950s. However, the point I would stress — as the positivists themselves did — is that any ideals of unified science would need to be socially constructed because the sciences themselves tend to develop quite centrifugally. Indeed, logical positivism may be partly understood as an attempt to redress the proliferation of would-be expertises and authorities that threatened to turn the knowledge system in the 1920s into a Tower of Babel that could easily fool a public that, thanks to a combination of full adult suffrage and the advent of broadcast media, was quickly turning into a “mass democracy”. The positivists’ “social constructions” were, of course, the various cross-disciplinary linguistic regimes and publicly accountable testing standards by which they have come to be stereotyped, at least among professional philosophers. 
Perhaps it is not obvious why I would favour the spirit of universalism pursued by the logical positivists — and in the 19th century by the German idealists. In short, the drive to universalism addresses several social-epistemological problems at once: (a) it provides pushback to the anti-democratic “rule by experts” that analytic social epistemologists embrace as manifesting a naturalistically grounded self-organizing “division of cognitive labour” (Kitcher 1993); (b) it recovers the self-transcending character of the scientific project, the calling card of which is the regular challenge of inductive expectations, as in Popper’s Bacon-inspired experimentum crucis, a consequence of which is that one needs to be mentally prepared to take a few steps back to go many steps forward; (c) it puts the autonomous individual back in the epistemic frame by setting out “decisions” that must be regularly taken about one’s preferred research direction, decisions that expose the individual to varying degrees of benefit and loss of various sorts (more about which below); (d) it subverts what might be called “horizontal scientism”, which results when a scientific theory that is successful in one domain expands to colonise domains that are not equipped politically and/or epistemologically to resist it in the recommended Popperian manner.
A peculiar artefact of scientific epistemology, which betrays its roots in religious epistemology, is the tendency to suppose that a “theory choice” is always about selecting just one from several alternatives and that you should always select the one theory best supported by the available evidence, as if your life depended on it. What follows then are enormous discussions about how to parse the various theories and what to count properly as “evidence”, with each (analytic) philosopher jockeying for the results — canonised in the mid-20th century as “demarcation criteria” — to confirm her personal theory choices. Posed this way, which is also how science appears inside a Kuhnian paradigm, academic disciplines acquire special significance as custodians of “regional ontologies” with the power to parse theories and assign evidence in epistemologically approved ways. Analytic social epistemologists have dedicated considerable — and characteristically misplaced — ingenuity to arguing for why our epistemic responsibilities should be delegated to these local authorities.
However, as the provocative self-styled “risk engineer”, former Wall Street trader, and black swan detector Nassim Nicholas Taleb (2012) would put it, disciplines are “fragile”: Their reliability depends on the extent to which they control their environments. In other words, disciplines presume the existence of “known unknowns” but not “unknown unknowns”. The “known unknowns” are captured by the classic laboratory set-up, in which variables are clearly identified and experimental outcomes are explicable in terms of them, albeit in ways that may falsify the hypotheses under consideration. But then as science and technology studies researchers after Latour (1987) have shown, extending that level of control beyond the walls of the laboratory proves to be an ongoing struggle, part of the solution to which — as Foucaultians are quick to point out — has been to remake the world to reflect disciplinary standards.
But this is an incredibly inefficient way to live. It effectively converts what Kuhn called “anomalies” into enemies of some presumptive conception of progress. However, you cannot learn from your difficulties and errors if you insist on treating whatever resists your will in such negative terms. Rather you need to see errors as investments in a better future. This is Taleb’s trader-savvy way of capturing the Popperian ethic — an ethic, after all, which did no harm to George Soros, a Popper student from the 1950s who has never hidden that fact. (The difference is that Soros capitalises on the errors of others, whereas Taleb wishes to capitalise on his own errors; hence, his guru status.) In that case, the most appropriate epistemological attitude is that of the spread bettor who deliberately underestimates market leaders and overestimates market trailers. Concretely, this means that one should, say, accept Darwinism as the dominant account of natural history, but not with the vehemence of the scientific establishment, while remaining more than acceptably open-minded to the idea that life’s evolution has been the product of a directive intelligence. Taleb’s strategy goes against the grain of statistical significance testing, as canonised by the Christian eugenicist Ronald Fisher, who was simply concerned with ascertaining the likelihood that one’s hypothesis is true rather than with the costs — and benefits — of its possibly being false (Ziliak and McCloskey 2008).
In more familiar terms, what I am describing is how a non-believer might render himself, in Taleb’s terminology, “antifragile”, by taking Pascal’s Wager about God’s existence. But keeping within the analytic philosopher’s comfort zone, I am alluding to what Hilary Putnam (1978) christened as the “pessimistic meta-induction”; namely, the historical tendency for most general theories — though perhaps not the specific facts — to be superseded within a century. In some policy-relevant contexts, such as global climate change, such an observation carries an added poignancy. After all, what should you do if the leading theories of climate change say that the planet is doomed within fifty years, when we also know that such theories themselves have a life expectancy of only half that time? One time-honoured answer is simply to become a sceptic or, put more delicately, a Wittgensteinian who regards the very attempt to anticipate the future course of inquiry as a chimerical pursuit. However, I believe that such a response is to give up on rationality, especially the meta-level knowledge that can be acquired from studying the history, philosophy and social studies of science.
At the same time, going somewhat against the spirit of Riggio’s analysis, I do not believe that countenancing “a cry of a thousand subjugated voices” is how we prepare for a radically different or better future. My own strategy, which dovetails with Taleb’s advocacy of antifragility, is to study the array of past losers as the template from which the next unexpected winner will be found. I have always had a soft spot for that “return of the repressed” in the history of science that the followers of Imre Lakatos call “Kuhn Loss”; namely, features of the old paradigm that fail to survive in the new one but which nevertheless may end up coming to haunt the new paradigm down the road in the form of unresolved anomalies (Fuller 2003: chap. 9). Of course, the old paradigm does not simply reverse the new one, as in a cyclical view of history; rather, the old is reasserted in light of its having lived through the new. As in a Hegelian sublation, the original errors are cancelled and the core of truth is preserved.
Take the case of Darwinism’s gradual triumph as the leading account of life on Earth. Despite Darwin’s own best efforts — and the strained exertions of such latter-day Darwinizers as Michael Ruse — a question mark has always lingered over the “intelligibility” of natural selection as an essentially blind process that somehow manages to result in the array of species that characterise natural history. In practice, this concern has been mitigated by a seemingly indefinite extension of the age of the Earth — and the universe. The Darwinist reasons that the more time at nature’s disposal to work its magic (not to be confused with intelligent design!), the more opportunities there are for life’s complexities to be hammered out through a glorified trial-and-error process. However, there is no particular reason to think that the more we learn about nature, the older nature will turn out to be. On the contrary, we may have good reason to hope that we have overestimated the Earth’s age. As Thomas Henry Huxley had realized by the end of his life, an increasingly aged and blind Earth confounds whatever grounds we might have had for ascribing meaning to life. Young Earth Creationists (YEC) may suffer from many epistemic liabilities, but Huxley’s is certainly not one of them. What appears to secular eyes as the overblown significance that religious believers attach to particular events reflects the sense of urgency and purpose that comes from assuming that time’s end, like its beginning, is closer than we might wish to think.
I say all this because there is potentially a big existential payoff from betting against today’s elongated temporal horizons. One might say that the very idea of humanity is axiologically unsustainable if we believe that the universe originated in the very remote past and will continue into the very remote future. This orientation, combined with an awareness that humans are located on a planet in a galaxy that is in no palpable sense central to the known universe, invites the conclusion that our species existence, even if it ends up lasting several million years, is still a drop in the ocean of cosmic space-time. In that case, the great ancient thinker who came closest to this realization without ever having to test a hypothesis — Epicurus — would seem to be vindicated in his therapeutic indifference to the truth or falsity of universal statements. Any truth that seems to stand the test of time will eventually be overturned, so there is nothing worth getting very excited about epistemologically. To be sure, this is one way of understanding the pessimistic meta-induction mentioned earlier. In fact, it was how Montaigne, the great Renaissance Epicurean essayist, originally responded when considering the prospect that Copernicus, who appeared to be on the verge of overturning the 1300-year reign of Ptolemy, would himself be overturned in the future.
However, Montaigne’s resigned attitude is persuasive only if our second-order knowledge can never substantially correct our first-order knowledge. I mean learning not only that no dominant theory prevails forever, but also that no successor theory entirely eliminates the previously dominant theory. YEC science, for its part, is largely dedicated to testing — with the aim of falsifying — the methodologies currently in use by geologists, biologists and cosmologists to infer extreme antiquity in nature. Should YEC succeed, the order of natural historical events would remain largely as we know them but the timeframe within which they have transpired would be compressed. In other words, the YEC research programme specifically targets the exceptionally desultory character of natural selection. But what sort of science would lie on the other side of a YEC victory? Most of the day-to-day activities of the disciplines that constitute the current evolutionary synthesis would remain intact. However, the theory itself and its regulative ideals would be radically altered — but it is most unlikely that the resulting science would conform to the Bible-thumping “six day” (or even 6000-year) stereotype that would have test tubes replaced by prayer books.
Indeed, I would rate the chances of a successful YEC science realizing its opponents’ worst epistemic nightmares alongside the chances that current research into the genetic bases of humanly relevant traits will result in another Nazi-style genocide. In the latter case, if significant discoveries were to be made in behavioural genetics, the public probably would come to be more tolerant of abortion and infanticide — and that might look like genocide to someone with a homoeopathic approach to policy-making. But by the time that would happen, even “abortion” and “infanticide” would probably be travelling under more palatable rubrics. Similarly, a post-YEC biology would remove the philosophically coded, theologically hushed tones in which teleology in nature is currently discussed. And while “God” might remain a step too far for secular tastes, a more Deist-sounding “intelligent designer” might function quite well in this context. For this turn of events to happen, all that YEC supporters really need — and could reasonably expect — to achieve is the removal of, say, three zeroes from the current age of the Earth. That would be enough to shift how we interpret the miracle of reality’s comprehensive intelligibility from our luck to our privilege.
However unlikely one deems the prospects of YEC’s proposed shrinkage of geological time, history suggests that the probability is high enough for the antifragilist to invest in its pursuit, since it would cost relatively little to support a critical mass of YEC researchers and the payoff arising from their success would be enormous — if not for everyday science, certainly for the larger social-epistemological embedding of the scientific project of understanding the nature of life. To be sure, that payoff would hardly be to everyone’s liking, but then again one of the world-historic virtues of scientific breakthroughs is their capacity to set in motion a chain of effects that result in transformations that could otherwise only have been undertaken with considerable bloodshed. This I take to be the Realpolitik lesson of Latour (1988), which may turn out to be his most profound thesis.
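The antifragilist’s wager just described is, at bottom, a piece of expected-value arithmetic: a small, capped stake on a low-probability outcome is rational whenever the prospective payoff sufficiently dwarfs the cost. A minimal sketch of that logic, with purely hypothetical numbers chosen only for illustration (none are drawn from the essay):

```python
# Illustrative sketch of the "antifragile" bet: losses are capped at the
# stake, while the upside is open-ended, so even a long shot can be worth
# backing. All figures below are hypothetical.

def expected_value(p_success: float, payoff: float, cost: float) -> float:
    """Expected net return of backing a long-shot research programme."""
    return p_success * payoff - cost

stake = 1.0      # modest support for a critical mass of researchers
payoff = 1000.0  # transformative payoff if the programme succeeds
p = 0.01         # deliberately pessimistic probability of success

ev = expected_value(p, payoff, stake)
print(ev)       # prints 9.0: positive expected value despite long odds
```

On these (arbitrary) numbers, a 1% chance of a thousandfold payoff more than covers the stake; the bet fails to pay only if the probability or the payoff is judged far smaller still.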
I am completing this response on the day that the Observer has reported on the vitriol with which evolutionary biologists have received the finding — made by an international bioinformatics consortium based at Cambridge University — that much of so-called “junk DNA” may not be junk after all, as it turns out to be functional in cell regulation (McKie 2013). The reporter failed to note the intelligent design subtext that informs the vitriol, given that it has been a prediction long made by its proponents (Wells 2011). At the moment, biologists generally believe that most of DNA is benignly dysfunctional, as one might expect of a chance-driven conception of natural selection. To be sure, the Cambridge discovery — assuming it holds up to the higher than usual standards to which the Darwinists will now hold its discoverers — would not directly falsify Neo-Darwinism. But it would challenge researchers’ default expectations concerning a high degree of non-utility in the construction of life.  Whatever one thinks of intelligent design theory, or its proponents, the Cambridge finding can only be good news for those who see scientific inquiry as the hallmark of the human condition. I understand why people in institutionally vulnerable positions might not wish to champion such an easily misunderstood standpoint, but widespread failure to do so ends up undermining the very motivation that humanity has for pursuing our increasingly risky scientific way of being.
Ceccarelli, Leah. 2011. “Manufactured Scientific Controversy: Science, Rhetoric, and Public Debate.” Rhetoric and Public Affairs 14: 195-228.
Forman, Paul. 2012. “On the historical forms of knowledge production and curation.” Osiris 27: 56-97.
Fuller, Steve. 1985. Bounded Rationality in Law and Science. Ph.D. dissertation in History and Philosophy of Science, University of Pittsburgh.
–––––. 1988. Social Epistemology. Bloomington IN: Indiana University Press.
–––––. 2000. The Governance of Science. Milton Keynes UK: Open University Press.
–––––. 2003. Kuhn vs Popper. Cambridge UK: Icon.
Fuller, Steve and James Collier. 2004. Philosophy, Rhetoric and the End of Knowledge. 2nd ed. (1st ed.1993, Steve Fuller). Mahwah NJ: Lawrence Erlbaum Associates.
Kitcher, Philip. 1993. The Advancement of Science. Oxford: Oxford University Press.
Latour, Bruno. 1987. Science in Action. Milton Keynes UK: Open University Press.
–––––. 1988. The Pasteurization of France. Cambridge MA: Harvard University Press.
McKie, Robin. 2013. “Scientists attacked over claim that ‘junk DNA’ is vital to life.” Observer (London), 24 February. http://www.guardian.co.uk/science/2013/feb/24/scientists-attacked-over-junk-dna-claim
Putnam, Hilary. 1978. Meaning and the Moral Sciences. London: Routledge.
Riggio, Adam. 2012. “A process of truth: A reply to Steve Fuller’s essay on the silver anniversary of Social Epistemology.” Social Epistemology Review and Reply Collective 1 (9): 46-52.
Taleb, Nassim Nicholas. 2012. Antifragile: How to Live in a World We Don’t Understand. London: Allen Lane.
Wells, Jonathan. 2011. The Myth of Junk DNA. Seattle: Discovery Institute Press.
Ziliak, Stephen and Deirdre McCloskey. 2008. The Cult of Statistical Significance. Ann Arbor MI: University of Michigan Press.
 I specifically have in mind Neo-Darwinism, which Riggio (2012) presents purely as a theory of biology, without noting its own imperialist aspirations (e.g., the ‘pan-selectionism’ supported by, among others, Richard Dawkins, Daniel Dennett, sociologist Garry Runciman, and physicist Lee Smolin), which presumably is part of what bothers Jerry Fodor in his Bloggingheads debate with Elliott Sober, though Sober himself does not explore that prospect. While Fodor does not generally attack the dominant theories of regional ontologies, in this case, the theory might leverage its current popularity to become what he would regard as a pseudo-universal law.
 Kuhn’s renegade student, Paul Forman (2012), has excavated the lingering religious associations in the very idea of “discipline” (as in the monastic and mendicant orders who staffed the medieval universities) in modern discipline formation, whereby the public’s trust comes to be invested in professional scientists as a new “clerisy”, to use a term popular in the French Third Republic, the first avowedly secular regime in that country’s history.
 To be sure, given that at the moment we lack betting parlours for science (though I have advocated this apparently unpopular idea for quite some time), the exact nature of the bets themselves remains unclear — though, I suppose, in the case of academics, it could be a part of one’s salary regularly set aside until a specific date. See Fuller (2000), pp. 149-50, which was stridently resisted by Jerome Ravetz in a review in Public Understanding of Science.
 I note that Fisher was both a Christian and a eugenicist — Mormons come to mind as a comparator — because he presents the hypothesis-tester as someone who remains impervious to the outcomes of her experimental trials except for how they bear on the project of cognitively representing reality (a.k.a. the search for the truth). This would suggest that the normative standpoint of the scientist is that of a deity above and beyond temporal concerns.
 A telling bit of Kuhnish nastiness is that the evolutionists accuse the Cambridge team of being “badly trained technicians”, suggesting that they were not indoctrinated into the Neo-Darwinist paradigm before engaging in their work. Perhaps it should come as no surprise that the Cambridge team’s main attackers come from the American South, a touchy area for matters relating to evolution.