This exchange first appeared on Adam Riggio’s blog “Adam Riggio writes”. On 23 March 2015, Adam and Steve Fuller discussed Steve’s latest book Knowledge: The Philosophical Quest in History. Here, Part Four, “Honesty as Anarchy”, looks at Chapter 3, “Epistemology as Psychology of Science”, from the book.
We’ve talked about the epistemic implications of humanity’s divinity, how our scientific inquiries were conceived as bringing us closer to God, in touch with our divine nature. As I get into these other chapters, I find that the focus of your book is shifting to the epistemic implications of humanity’s profanity, how our distance from perfection is incurable.
This is a chapter about how psychological work into the minds of scientists was supposed to ground the power of science, but the field’s discoveries were ultimately disillusioning. If you think examining the minds and thoughts of scientists will show how special they are and the nature of their cognitive and methodological powers, you’re in for a rude awakening when you discover that scientists are humans too.
One of the most illuminating things I learned from reading some of your earlier works was the fight between Robert Boyle and Thomas Hobbes on how to pitch the new experimental techniques of scientific research that the Royal Society was developing. This was in your collaborative work with Jim Collier, Philosophy, Rhetoric, and the End of Knowledge.
Hobbes advocated total transparency: experiments were demonstrations, purposely artificial scenarios whose purpose was public education, not only about the principles and phenomena they helped discover, but also about how scientific investigation actually worked. Boyle wanted scientists to be a new order of mysterious authorities; the public would consider them a special class of people, a priestly order of material truths instead of divine ones. That way, science would be above politics.
Ironically for Hobbes’ modern reputation as an authoritarian, his conception of science was democratic, even anarchist. Science was a public enterprise in which anyone could participate as best they could or wanted. Boyle’s clericalism was a product of the English Civil War, just as Hobbes’ openness was.
Hobbes was honest about the political power of science. Boyle wanted scientists to stand above the orders of human politics so they’d be left alone while the kings and militiamen killed each other. You can’t have a sacred order if the magicians show everyone their strings.
Boyle’s idea worked well for a long time. But it was a matter of time before some discipline of knowledge developed that would give the game away. Psychology is the subject of this chapter, and later chapters explore history, sociology, and philosophy. Giving the game away is the anti-clerical move that shatters the myth of science as an order of men above humanity.
Scientists being ordinary people meant that their intuitions were just as flawed as everyone else’s. There was no special scientific intuition better than the rest of ours, just the same jumble of contingently-developing habits. Research on probability theory shows just how ill-suited human intuitive judgments are to the actual workings of the world. Our instincts are completely unable to perceive trends in large data sets. Humanity’s psychological science showed that human reasoning abilities are universally fallible.
Karl Popper seems like a key figure in the philosophy of science with an answer to how we can practice good science—scratch that word, really it’s how we can practice good knowledge. We intuitively think in terms of confirmation and the truth of the obvious, not the superior methods of disconfirmation and falsification.
He came up with prescriptions for how we should think to become better at understanding the world. We improve how we understand the world through a literal revolution in our thinking as individuals, which is how I conceive of the purest political power of philosophy in Ecology, Ethics, and the Future of Humanity, forthcoming from Palgrave Macmillan this August. It makes me realize that I should read more Popper, as I haven’t really gone near his work since my undergraduate years.
Modern skepticism in science is rooted in those discoveries of the last 100 years that scientists are “only human after all.” Most of the philosophy and literature with that basic message that I’ve encountered tends to be sadly pessimistic. To be only human associates humanity with our mistakes and stupidities, and postulates that our virtues and productive powers are grounded in the divine elements of our nature.
I think differently. We’re the creatures who dared to become sublime. We ended up being theatrical instead, a 7-billion-strong walking catastrophe of ridiculous, twisted ingenuity. And it probably won’t end well (at least, the next couple of centuries won’t be much fun). But it’s all our doing!
The mad self-destructive dance of a creature forged in the image of God is a sad disappointment. Our mistakes would drop our end of the bargain to approach God in our own creations and lives. But as creatures of blood, mud, and guts, we’ve made ourselves remarkable whether we collapse in accidental self-destruction or we somehow transcend the worst of our nature. There’s never been anything like us on Earth, and there never will be again. That’s something to be proud of.
❧ ❧ ❧
It turns out that as I begin to write this, I am about to speak at Stevens Institute of Technology, just across the Hudson from Manhattan. My host, the Scientific American writer John Horgan, is also taken by my fallibilist, anti-authoritarian stance on science. There is much to say about your post at several levels. I can only address a few of them in any depth.
The point I stress about the history of psychology, especially if we take seriously its early roots in introspection, is that the 19th century guys believed that people with scientific training were more ‘conscientious’ in the sense of realizing that when they have made mistakes they should endeavour to figure out just how bad they are – but in any case, acknowledge and correct them.
Non-scientists might try to hide their mistakes, in the spirit of Adam and Eve after eating the apple. In this context, shame is an expression of cognitive closure that prevents the collective growth of human knowledge. It is a feeling that arises from an exclusionary sense of self-regard (i.e. I am simply ‘me’ in a sense that I alone determine, and not part of a greater whole, so you don’t need to know if I don’t tell you – and if you do find out, I feel ashamed).
People keen on protecting privacy as the ultimate human right should perhaps think twice here. The main problem with others finding out things you instinctively want to hide—‘errors’ in that broad sense—is that often you don’t benefit from what is revealed as much as they do. In fact, error is usually held against you: you’re penalized, blacklisted or imprisoned.
Yet those inerrant beneficiaries might have committed a similar error under similar circumstances—and, at a more basic level, didn’t take the initial risk that generated the new knowledge related to the error! In effect, sheer inactivity accrues an unfair advantage. No discrimination is made as to whether those others who did not commit the error manifested wisdom, ignorance, cowardice, lethargy, etc.
This counterfactual consideration—that inerrancy itself masks an indeterminate and quite possibly dubious psychology—has led me to conclude that the protection of privacy is much less important than the right for everyone to benefit from whatever there is to know. Hence, I’ve endorsed Wired magazine founder Kevin Kelly’s concept of ‘co-veillance.’
But deeper metaphysical issues follow, which we can’t explore here – in particular, whether actions are accorded the value they really deserve. On this more general point, I believe that normally we both reward and penalize agents too much. Thus, the Einsteins are over-rewarded and the Hitlers are over-penalized. This is understandable—and perhaps even welcome as a first approximation in balancing the cosmic moral ledger—but it should not be left to stand as the final word.
In terms of modern ethics, our spontaneous tendency to both over-credit and over-blame people marks a deontological orientation, which is important to motivate action by presuming—certainly in extreme cases like Einstein and Hitler—a potentially godlike reach for the autonomous agent. However, a utilitarian orientation is then necessary to redress the balance, as we factor in the various contingencies that mitigate the efficacy of the agent’s actions. In this mode, we add in the costs of the successes and the benefits that we derive from the failures.
In terms of temporal horizons, our deontological judgements are very much like snapshots of the human psyche at the moment of decision. This helps to explain the specific criteria used to establish agency in juridical contexts. In contrast, our utilitarian judgements presume an indefinite time horizon, in which our understanding of the significance of human action improves, the longer the consequences play out – and all the mitigating factors come into view.
And so when your mentor Barry Allen portrays me as wishing philosophers to be sorting out the World Bank’s balance sheets, he’s got a point—and it’s too bad that he feels ashamed to take it with the seriousness it deserves. Would he prefer that philosophers leave the World Bank to its own devices or that we roll up our sleeves and try to improve their fallible attempts to deliver justice to the world?
Science as an institution deals with error ambivalently. On the one hand, it is very difficult to publish a simple refutation of an already published knowledge claim. You’ve got to refute in aid of promoting some alternative position, what Karl Popper—after Francis Bacon—called the ‘crucial experiment.’ It is here perhaps that the norms of science and those of philosophy diverge the most, since many philosophers make their entire careers out of refuting others without adding substantially to the positive body of philosophical knowledge. In principle, I am on the side of the scientists here.
However, in practice, this means that science creates an incentive simply to presume that whatever has passed peer review is a stable block of knowledge on which one can build without question. At least, that is the path of least resistance if you wish to make a mark in science.
So errors, including outright frauds, can go undetected for quite a long time. Nowadays we seem to be discovering more of these errors than in the past. My guess is that this is simply because money is involved and so there are now non-epistemic incentives to look for error – and also the presence of money incentivizes scientists to cut corners in the face of imagined competition.
To be sure, none of this is wonderful—but just how bad is it in the great scheme of things? It depends on the particular cases. Many of the cases that we would now call ‘frauds’ and take as a basis for humiliating scientists are really about scientists fabricating the set-ups and results of ideal experiments. In other words, they did what historians nowadays think that Galileo and Mendel did, but these icons of modern science escaped detection during their lifetimes.
Yet it doesn’t seem to matter much that Galileo and Mendel fabricated their science because their ‘intellectual intuition’ was basically correct and so others built on their work safely. Indeed, the fact that it took a while before their fabrications were suspected may have had a quite salutary effect on the overall history of science, given the radicalness of the claims that they were trying to make. In any case, we haven’t demoted Galileo or Mendel from the pantheon of great scientists.
Finally, I want to comment on the Hobbes-Boyle controversy as presented originally in Shapin and Schaffer’s Leviathan and the Air-Pump, and which, as you rightly point out, has influenced my thinking strongly. This book really led me to shift my view on Hobbes in a more positive direction—as it did for my student Bill Lynch, whose book Solomon’s Child is the best account of how the early Royal Society translated Bacon’s vision.
The key thing is that when Hobbes talks about the ‘state of nature,’ he’s not talking about primitive humans. It’s an allegory of his own times, in which religious wars are blighting Europe. So Hobbes is actually talking about Christians who take the Bible into their own hands and need to decide for themselves what to believe and do.
In the first instance, established authorities are de-legitimated, but then what steps in to fill the vacuum—other than endless conflict? The absolute monarch: the Leviathan. The monarch is ‘absolute’ by virtue of his ability to resolve the conflicting claims that people are making on the basis of their readings of the Bible (i.e. he absolves them of error and preserves what’s good). So the rational decision that people need to make in the state of nature is to trust this monarch as they would God to provide a peaceful ground for them to flourish despite their differences.
Easier said than done, of course, but it seems to me this is the point at which the state replaces the church as the corporate principle for ensuring and promoting humanity. Thus, when Hobbes refuses to accept that experimental demonstrations can settle metaphysical disputes, he is not being an anarchist.
On the contrary, he is objecting to the way that human power is being hidden through a kind of ventriloquism, whereby a ‘natural reading’ is projected on the experimental setting without anyone having to take personal responsibility. (It’s just nature speaking for itself!)
In short, in Hobbes’ eyes, the Royal Society was trying to erase the socially constructed character of its knowledge claims and hence render itself unaccountable as a body. Hobbes would have probably dealt with Boyle more easily, had Boyle been more upfront about the power issues involved in resolving knowledge claims.