This exchange first appeared in two parts on Adam Riggio’s blog “Adam Riggio writes”: “Knowing Knowledge IX: Knowing Necessary Possibilities” on 30 April 2015, and “Knowing Knowledge X: The End, But the Moment Has Been Prepared For” on 2 May 2015, in which Adam and Steve Fuller discussed Steve’s latest book Knowledge: The Philosophical Quest in History. Here, Part Eight (combining Riggio’s two entries) looks at the book’s conclusion, “Redeeming Epistemology from the Postmodern Condition”, and offers some final insights.
Although I loved our explicitly political discussion of the last couple of dialogues, I want to dive into the final installment of our exchange with some headier philosophy. I particularly want to discuss the power of counter-factual reasoning. Even though you consider this a foundational method for a progressive philosophy of science, I think it eclipses even your own vision. Counter-factual knowledge, I’d go so far as to say, makes a lot of your own vision obsolete.
The conclusion of Knowledge: The Philosophical Quest in History returns to the vision on which your early chapters focused, the unity of science in humanity’s conception of itself in the image of God. Your advocacy of this idea remains a point on which you and I will, I think, always disagree. But once I reached the end of your book, I had many more reasons for my disagreement.
The first such reason I want to discuss is the pragmatics of consilience. Scientific research, discoveries, institutions, and knowledge range over, as you describe in Chapter 6, all things for all people. Science is about the investigation of the world, of the systems, relationships, and bodies that constitute the world all together. Its mode of knowing the world is through investigating what must happen and what can change.
Counter-factual knowledge, in other words. This model of knowledge and reasoning achieves, on our own power alone, the “middle knowledge” that you say in your conclusion is the foundation of modern science as it arose in the West, in the Medieval cultural milieu.
The highest level of necessity in human reasoning is the strict necessity of syllogistic reasoning. A is B; B is C; therefore A is also C. There is another mode of knowledge that is far more contingent, counting as knowledge, but without any necessity at all. When I first met you in real life at the University of Toronto, I made note of many facts: your height, that (unlike many of the photos I had previously seen) you were now clean-shaven, and the tone of your speaking voice.
Between logical necessity and contingent fact is what you call Middle Knowledge, the knowledge of physical necessity, the knowledge of the laws of nature. Emerging from the Medieval milieu, Christian self-conceptions were required to ground our claim to genuine knowledge of physical necessity.
If humanity were entirely profane, then we would never be able to grasp any necessity in the universe at all. We’d be mere animals, drifting and distracted from moment to moment and event to event. Yet we aren’t so entirely divine as to have perfect knowledge of all the facts, changes, and relationships that constitute the universe. This would be the knowledge of the divine plan of being, where every fact is understood in its necessity.
I think of this along Spinozist lines, partially to needle you a little, given your previous comments to me that you could never be a Spinozist. But it also makes sense to me that the existence of the divine is necessity. It’s the space where I see some of our deeply held conceptions of divinity converging, one of the few spaces where they really do. As we understand the relations among all the bodies and processes of the universe, we become more divine ourselves.
But here’s where you and I differ. I don’t see why we need to be partially divine already to develop knowledge of physical necessities. Or rather, humanity’s status as the divine animal isn’t necessary to ground our capacity to know physical necessities. The reasoning structure of counter-factual knowledge, which you describe in delightful detail in Chapter 6, is a guide to such development. And we don’t need any underlying divine nature (at least no more divine than the rest of material existence) to have such a power. We figured it out ourselves.
A question related to the concept of humanity’s nature as the image of God still nags at me, and I think it always will. Why must the fact that the universe is intelligible imply that there is an intelligent designer? There is nothing about the existence of order that implies that such order is the product of design. Existence alone constitutes order through the dynamic relations of processes. Order figures itself out through time.
This is what I ultimately find so frustrating about your book and your larger philosophical projects as a public intellectual right now. I simply don’t understand why a simple notion like the logic of counter-factual knowledge can’t be a unifying principle for science, and why you think only the strong, deep concept of humanity being made in the image of God can do the job.
It isn’t just my own non-Christian sensibility that remains skeptical of whether this concept can hack it; the concept would likely meet resistance from the non-Christian sensibilities of many practicing scientists around the world. I’m not just referring to the atheist or agnostic scientists who were still raised in a broadly Christian culture. I’m referring to scientists in Africa, Asia, and the Arab World who were raised in a Muslim religious tradition, or the large number of scientists throughout the West who come from long-standing Orthodox Jewish communities. The same goes for scientists whose cultural theologies are Hindu, Buddhist, or the a-religious cultural philosophies of China.
All these people throughout the world would be hesitant to throw in on a unifying concept for science that is rooted so firmly in a Western, European, and Christian theologico-cultural milieu and tradition. What comes naturally to you with your Jesuit education would be culturally alien to someone with an upbringing in Hindu or Confucian institutions.
What may have been the unifying principle of science at its origin is now a concept that will only sow division. In the words of a wise man, who I think was either Thomas Wolfe or B. A. Baracus, you can’t go home again.
❧ ❧ ❧
I’m glad that you picked up on the importance I place on what the later Scholastics called ‘middle knowledge’; namely, our capacity to reason to counterfactual states of the world based on our empirical knowledge. In this way, we might be able to bootstrap our way up to God’s universal knowledge.
Put in more modest and secular terms, we might come to know the laws of nature without having to experience every moment that is subsumed under those laws. Thus, experiments allow us to vary the conditions of the world—in our minds, in the lab and, increasingly, on the computer—so that we can simulate the requisite universality. Of course, experimental outcomes are notoriously fallible, and I want to say something about the significance of our fallibility later. But first I want to address the need for an intelligent designer who sets the gold standard against which to judge our efforts in this direction.
The first point to observe is that unless you believe that there is a being who could know all things in all space and time, it doesn’t even make sense to attempt to get at ‘laws of nature’ in the modern scientific sense—unless, of course, you were writing fiction. But the quest for laws of nature requires more than simply belief in such a being. It also requires that we are already sufficiently like that being that it is reasonable to think that the quest might just succeed. This link between us and the intelligent designer is especially important because the search for laws of nature, while building on ordinary empirical knowledge, quickly takes us away from it.
This is why I just suggested that the default human attitude to this project is to regard it as a genre of fiction. But experiments are not big video games or theatrical sets. They are models of physical reality. Without the theological scaffolding, such a conclusion would seem sheer lunacy—and this is how, I imagine, Aristotle (but not Plato) would have understood today’s science.
I believe that people fail to see this point because they haven’t considered what other rational grounds they might have to search for laws of nature prior to knowing any of the consequences that have made the project so empirically worthwhile over, say, the last four hundred years.
The answer would probably be none. And that was precisely the state of mind in which the original Scientific Revolutionaries found themselves. For them, the metaphysics of Christianity (especially the imago dei doctrine) led them to conclude that only specific Christian institutions—especially the Church—held them back from realizing something that their religion already told them was in principle within their reach, namely, absolution of Original Sin and reunion with God.
But this conclusion was ultimately a leap of faith on their part—and perhaps it is no accident that Pascal’s Wager as an argument for the existence of God emerges at this time. Albert Hirschman tells a similar story in The Passions and the Interests about the early acceptance of capitalism as an economic ideology before it had proven itself as a reliable wealth-producing engine.
You ask why Christian theology needs to be dragged into the logic of counterfactuals, and the answer is that there is no obvious ‘logic of counterfactuals’ independent of specific metaphysical assumptions. You’re wrong to think that Christians are alone in possessing the relevant basic metaphysics, though Christians have done the most to develop it.
As Abrahamic religions, Judaism and Islam also grant pride of place to humanity in divine creation based on the Garden of Eden episode and God’s uniquely direct address to humans. True, Christians have honed this point into a strong imago dei doctrine, which Judaism and Islam regard as controversial if not outright heretical, especially when it hints at the apotheosis of humanity, the point at which Christianity potentially slides into transhumanism.
This is light-years away from the metaphysical starting point of the great non-Abrahamic religions, which do not grant any cosmological privilege to the human condition whatsoever. Of course, there have been great scientists from India and China who have stuck to their native beliefs and not been converted to something more Abrahamic.
However, I would argue that these scientists got the relevant metaphysics through their ‘Western’ scientific training, which in turn has modified how their non-Western religiosity functions in their overall world-view. Sometimes anthropologists speak of this phenomenon in terms of ‘compartmentalization’, but it may be more subtle.
A thoroughly secular debate over the exact metaphysics that underwrites counterfactuals stole much of the limelight in analytic philosophy in the 1970s, courtesy of Saul Kripke and David Lewis, two sharp-shooting Princeton logicians who overshadowed colleague Richard Rorty as he was putting the finishing touches on Philosophy and the Mirror of Nature. This debate left a strong impression on me, especially as it was presented in one of Jon Elster’s early books, the brilliant Logic and Society. In terms of how we’ve been discussing matters, the sense of ‘necessity’ that concerned Lewis was ‘logical,’ whereas Kripke’s was ‘physical.’
Basically, Lewis saw counterfactuals as self-consistent non-actualized states of the world, full stop. He wasn’t particularly concerned with how to get to such states of the world from the actual one. In fact, whenever Lewis discussed the ‘closeness’ of some possible world to the actual one, he would be simply referring to the number of properties that they shared.
His theory was not attuned to what economists call ‘the theory of the second best,’ whereby the second best policy may be radically different from the first best because what makes the first policy the best is how all its parts hang together. And if you’re missing some of the parts (or they’re not in the right proportion), then something completely different is better. This point, alien to Lewis’ purely logical analysis of counterfactuals, explains why the middle in politics is so often squeezed by the extremes.
Kripke was more interesting. His theory of possible worlds fits Bismarck’s famous definition of politics as ‘the art of the possible’. Kripke insisted that unrealized possible worlds had to be based in the actual world. This means that an explanatory narrative of some sort needs to be spun. In particular, we might recount how a possibility had been prevented but perhaps could be reactivated in the future.
All of this would involve looking at the resources available at various times for making things other than as they turned out to be. On that basis one could say how ‘far’ or ‘near,’ say, a desirable world was from the actual world at a given point in history—and this distance may vary, not necessarily always getting closer or farther away. We may well reach a point in the future where we can effectively recover a lost opportunity in the past.
To be sure, Kripke didn’t concern himself with any of the above details. He was simply interested in defining the sense in which it is reasonable to talk about ‘possible worlds’ as something other than pure fiction. And for him the bottom line was that a possible world is a possible version of the actual world, and hence a ‘counterfactual’ bears a stronger relationship to the ‘factual’ than the word ‘fictional’ normally implies.
But for me Kripke made only the first moves. Navigating between possible worlds has been central to my own thinking over the past quarter-century or more, and it is developed in some detail in Knowledge: The Philosophical Quest in History.
However, to take this line of thought seriously is to commit to the idea that it makes sense to imagine an intellect who can scope out all the various contingencies, based on trying to realize some ideal plan within the budgetary constraints that matter imposes, i.e. variable but limited resources. In other words, the intellect is an optimizer, who prioritizes goals, identifies appropriate trade-offs and adjusts to vicissitudes. God would have all these possibilities programmed into his intelligent design algorithm, but we humans normally experience it as history, in which case the point of philosophy and science is to discover the algorithm and, in the process, realize our own divinity.
This is a brutally theological way of justifying our relationship with God—even by 17th century standards! But I think in secular form, it is also what Bismarck had in the back of his mind when he declared politics to be ‘the art of the possible’. He got it from Hegel, and Hegel reached back through Leibniz to Plato’s original conception of the philosopher-king, a member of a class of handpicked individuals who are trained to think like gods in case the day comes when they must function in that capacity.
To foreshadow my response to your final salvo on what you regard as my ‘political naïveté,’ consider two senses in which politicians may be said to ‘respond to events’. One reading of this phrase makes it appear that politicians simply adapt to circumstances, one after the other, without any sense of principle whatsoever. When we say that politicians are just in the business of staying in office, that’s what we mean. They just do what it takes to get the right number of votes.
However, an alternative reading suggests that, when responding to events, politicians have already anticipated the possibility of those events and hence are already prepared to do the appropriate thing to keep the forward momentum going on the ideals which they ultimately wish to promote. And this may involve what, on the surface, looks like a change in course of action.
Now, politicians may do all this more or less successfully because, in the end, they’re just politicians and not gods. But this is the aspiration. It also gets us back to a point I raised earlier, namely, that the Machiavellian maxim ‘the end justifies the means’, often used to damn politicians’ lack of principle, is in fact the modus operandi of how political principle is implemented in the world. In this respect, we might wish to give politicians a bit more credit for intelligence when they say that their plans are working even though it looks like they’ve made a U-turn at a crucial juncture.
One person who I think understood all this very well was the great US theorist of journalistic ‘objectivity,’ Walter Lippmann. He saw the journalist as someone whose presentation of the news should reassure the public, in order to allow politicians the private space to race through various hypothetical scenarios as they decide what to do next: a calm exterior masking a dynamic interior.
This was the process that at the height of the Cold War Stanley Kubrick’s Dr Strangelove immortalized in satire and Erving Goffman generalized into a sociology of the ‘front’ and ‘back’ regions of everyday life. In public relations, it’s called ‘impression management’ and when done properly it is a means to an end, not an end in itself.
Lippmann’s divided self for political conduct may be seen as the mirror image of God’s dual self-presentation through nature: Instead of a calm exterior, nature inspires authority through its surface volatility as something ‘beyond our control’. However, beneath that volatility is a set of laws which science is in the business of discovering—perhaps in a less frantic and more rigorous way, yet nevertheless along the same experimental lines as the juggling of contingencies that transpire behind the political scenes so jealously guarded by Lippmann.
Put it this way. Both the politician’s appearance of calm and nature’s appearance of volatility are deceptions of a sort. The politician is really less placid than he appears, while nature is really less unruly than it seems. The frantic activity behind the appearances in the first case is in search of the secret to the underlying order in the second case.
It may be that the various controversies surrounding ‘climate change’ are in the process of unravelling this delicate balance of knowledge and ignorance that has enabled something like Goffman’s front/back stage distinction to manage our understanding of both politics and nature in the modern era.
However, I don’t wish to dwell on this point here, but turn instead to something that drives the prominence of counterfactual thinking in my work. It’s what I take to be Hegel’s great counter-intuitive point about history. To have a rational account of history, you need to assume the arbitrariness of the decision points after which someone has won or lost—and as a result history goes in one direction rather than another. Your rationality lies in how you cope with the arbitrariness, either as winner or loser.
After all, the winners aren’t guaranteed indefinite success simply by repeating their winning actions, and the losers might have eventually won, given a different moment of decision, different resources, different evidence, different institutional arrangements, etc. Indeed, descendants of the losers might well overturn the winners in the future. But it all depends on how these parties learn from their world-historic success or failure.
And Popper would agree with all this too. After all, Popper never said that losers had to roll over and play dead! Rather, they had to re-organize themselves so as to overcome the original criticism and do things of value that their opponents cannot. This is not as hard as it might first sound, if you consider the arbitrariness of the original moment of decision.
Perhaps the biggest misunderstanding that people have about Hegel—and here I’m thinking of Thomas Carlyle’s ‘pop Hegelian’ view of the ‘hero’ in history—is that there is some luminous relationship between a world-historic agent and the ends of history itself. To be sure, the young Hegel regarded Napoleon as ‘the man of the hour’. But from the standpoint of world-history, Napoleon was simply a signal, a marker, a way station, not necessarily an exemplar of things to come.
This point is especially controversial in a Christian context, where Christians have tended to think that Jesus wanted his followers to live as he did, with his overriding sense of social justice on the basis of which he placed his own life at risk, which eventuated in his Crucifixion. Thus, church history and dogmatic theology largely consist of stylisations—and, dare I say, dilutions—of the life of Jesus, as recounted in the Gospels, designed for easy mass consumption.
Given this rather flat-footed but institutionally effective strategy for ‘following in Jesus’ footsteps,’ it is easy to see why Hegel’s theological followers—the ‘Young Hegelians’ of Marx and Engels’ German Ideology fame—were considered so politically subversive in the 1830s and 1840s, when they proposed ‘naturalistic,’ ‘symbolic,’ and otherwise ‘demystified’ readings of the life of Jesus.
However, my point is somewhat different from theirs. I am not so worried about what it would mean for the legitimacy of Christianity if the Gospel accounts of Jesus’ life turn out to be substantially false, thereby undermining the epistemic foundation of, say, the Petrine papacy. Rather, I am more concerned with the meta-level question of what exactly about Jesus’ life (even granting our accurate knowledge of it) might be worth carrying forward as ‘exemplary.’
This is a question that the Franciscan order has struggled with throughout its history. As a result, it has often found itself on the heretical side of things. After all, everyone’s life is a product of its time, and as time goes on it becomes intuitively harder to draw clear lessons from what Jesus did in his day to what we should do in ours. Call it the ‘problem of existential induction.’
Finally, let me close this round on something you and I may agree on: Academic training normally blinds one to the problem of existential induction, as it effectively gives one a vested interest in the future imitating the past. This leads academics to overestimate their own powers of judgement, which encourages them to dismiss empirical anomalies and other disruptions to the status quo as simply ignorable local disturbances, perhaps to be blamed on idiosyncratic personalities.
Here academics confuse being well-informed (i.e. knowing the trends and having the right views) with understanding the full potential of the fields that they’re in. To understand the full potential, one needs to think more ‘counterfactually’ about how earlier initiatives managed to fail. Normal academics presume that it was because they were shown to be conclusively false.
But they may have failed simply because the proponents did not try to mount a 2.0 in light of the first wave of criticism. And by 2.0, I don’t just mean ad hoc hypotheses, but a reasonably substantial reconfiguration that enables the supposedly defeated theory to say something new that the opponents cannot. This is why I believe that the only way to rationally mount a future-oriented programme is by learning from the past.
❧ ❧ ❧
Even though my old supervisor Barry Allen’s incredibly harsh review of your book put me in a bit of an awkward position at the SERRC, I do agree with him about one important point. I am genuinely impressed by your analysis and conceptual creativity in Knowledge. But I am equally frustrated with a naïveté that I see throughout the book and our correspondence.
You dismiss the relevance of climate change denial because it isn’t scientifically rigorous. But such casual dismissal gives free rein to politicians who turn denial of climate change into concrete policy, even in states like Florida that are in danger of being swallowed by a rising ocean.
The same goes for the imbecilic Biblical literalism that smears the intelligent design science in which you see such merit. You dismiss its proponents as uninformed cranks who don’t even understand their own religion. But activist evangelical Christian politicians are leading a successful campaign to replace high school biology textbooks throughout the United States with dogmatic tripe that devalues evolutionary theory as the baseless speculation of a godless amoralist and contains illustrations of Neanderthals riding a triceratops.
These radicals who pervert your religion are not weirdos in the wilderness. They are people like Rick Perry, Bobby Jindal, Nikki Haley, Mike Huckabee, and other governors of American states, some of whom have a decent chance of becoming that country’s President next year.
They have so much money and media in their corner that the more sensible intelligent design theorists we’ve discussed—critics of orthodox Darwinism who see the practical merit of design principles in understanding biological order—have no real power to advocate for their ideas. Their voices are completely drowned out by the organizational and financial power of the radical evangelical ignorati of the United States.
When I read Knowledge, I know I’m reading the work of a contemporary genius to whom few in a comparable position today are equal. But I suspect that I’m also reading the work of a man who is growing disconnected from the real conflicts that define our planet’s politics. I don’t know if it’s an effect of the academic ivory tower, or there is some more complicated cause, but your naïveté is growing dangerous.
There’s one last point that I want to raise, appropriately about what we actually do when we write philosophy, and what our philosophical texts are for. Your conclusion goes back through the conflict that exploded every discipline where philosophy and science intersect: the Sokal Hoax.
Your book’s conclusion makes one of the most genuinely delightful critiques of Sokal that I’ve ever come across, because not only does it make him look like the fool that he was, but it also articulates a productive creative vision for what purpose philosophy’s engagement with scientific disciplines and knowledge should have.
You identify that Sokal took humanities practitioners to task for not being familiar enough with the high-end, technical concepts and details of contemporary quantum physics to write about it with genuine expertise. Those who defended the humanities’ claims to articulate scientific concepts through their perspectives said, unfortunately falsely, that they did have such expertise.
But the humanities practitioner is not writing for the technical expert audience in the scientific discipline about which she writes. At least, this is how you describe what she should have told Sokal. Philosophy should try to craft the new metaphors and cultural frameworks through which the broad populace should receive the technical insights of a scientific discipline. Those technical ideas become general metaphors with political and social relevance.
I am extremely doubtful that any theoretical physicist of the 1700s literally thought that the universe was a clockwork mechanism. This was a metaphor, a culturally powerful image crafted in philosophical discourse, that translated a key insight of Newtonian physics into the popular conception of the universe.
This is, in essence, part of what my own upcoming book, Ecology, Ethics, and the Future of Humanity, available this August from Palgrave Macmillan, does. I guide my readers through ideas that are important to ecological science about the interdependence of living bodies, species, and communities, about the nature of existence as processes, and about the integration of those processes in a dynamic whole whose individual parts retain and enhance their own power through that connection.
Those ideas, when we hold onto them while stepping back from the technical specifics of ecological and biological sciences themselves, are politically transformative. We all depend on each other to survive and thrive in a harsh world. All things are subject to flux and change, making a mockery of conservatism for the sake of heritage alone. There is nothing in our lives that escapes real connection with others, no matter how much we try to hide in our bubble of ignorance as discrete individuals.
These concepts aren’t for the technical discussion of investigations and mathematical models in ecological science. They’re philosophical, social, and political insights about the ecological character of the world. This is what philosophy is for.
I hope our last two months of dialogue have achieved some of that same iconoclasm.
❧ ❧ ❧
I find your concern about the state of my political awareness touching. If it’s any consolation, I also find your sense of politics naïve. In fact, from our exchange I am not even sure that you have a clear understanding of politics, but maybe that will all be revealed in your heavily trailed new book.
For me politics is ultimately about re-making the world in the image of one’s ideals. This raises two sorts of empirical challenges: One involves operationalizing those ideals—i.e. would you be able to recognize utopia if you saw it? The old saying, ‘Be careful what you wish for’, captures the nature of this challenge. Ideals that look great when cast at an abstract distance may turn out to be nightmares when observed in practice.
The political theorist Steven Lukes wrote a novel of ideas nearly twenty years ago, called The Curious Enlightenment of Professor Caritat, that illustrated this point beautifully in a style that was half-Candide and half-Gulliver’s Travels. Basically, Caritat (the real name of the Marquis de Condorcet, as it turns out) has been sent by his war-torn society to scour the world to discover the political order that works best. The usual suspects are canvassed—utilitarianism, communitarianism, libertarianism, proletarianism. However, in his travels Caritat keeps discovering that what sounds great as an ideological pronouncement results in all sorts of ironic consequences, which continually get Caritat into trouble with the local authorities.
The other empirical challenge posed by politics involves calibrating means to ends. After all, the direct route is not necessarily the most effective. This is perhaps where we most seriously disagree. You seem to advocate what Joseph Schumpeter cynically called a ‘safety valve’ view of politics, in which if people state what is wrong with the current order openly and articulately enough, change will be forced to happen.
Originally this point applied to the emerging intelligentsia in 18th century Europe. But in the 19th and 20th centuries, it was gradually extended to freedom of press and assembly, and—notoriously inscribed in the Weimar Constitution—the right to public demonstration, the legal crucible in which today’s identity politics was forged.
The problem with this view is that it reduces politics to the art of complaining, occasionally spilling over into disruptive behaviour. It is a largely self-consuming activity, ‘politics as pure performance’, if you will. It never gets to the next stage of constructing a desirable new order. In fact, if history is our guide, the resulting mayhem simply de-stabilizes the existing order, allowing a well-placed third party—that is, someone previously marginal to the proceedings—to capitalize on the situation. Georg Simmel dubbed it the tertius gaudens, the beneficiary of others’ misery.
This lesson was learned in the 19th and 20th centuries by ambitious foreign powers, who began to export ‘change agents’ to places where they hoped to exert lasting influence. The Cold War marked the high watermark of this activity, when both the Americans and the Soviets routinely ‘parachuted’ specially trained change agents. The 1956 B-movie Invasion of the Body Snatchers captured this sentiment in all its paranoid glory.
Thus, there is nothing subversive in allowing a Slavoj Zizek or, for that matter, a Thomas Piketty to address hundreds of university people at a time about the continuing relevance of Marx—especially if it’s the original 19th century Marx. These people are effectively like Rolling Stones tribute acts.
They enable the audience to simulate a sense of radical freedom that dissipates as soon as they leave the venue because there is nothing outside the venue to sustain the sentiment cultivated there. There is no plan, no candidate, no strategy: just more or less pleasant noise. Nevertheless, it is true that they generate a minor level of social instability that others may be in a position to capitalize on.
Given that you’ve detached yourself from academia and hence can write in a ‘free’ capacity, you too are contributing—albeit in a quite minor way—to this instability. One of the unforeseen consequences of making politics ‘public’ is that it becomes easily virtualized. Thus, people can do ‘gameboy’ politics in their home as they talk back to their television screens—a familiar trope in my childhood—or sound off on Facebook, Twitter and blogs.
And this is what you seem to spend much of your time doing. The activity does little in terms of the goals it purports to be advancing, but it does provide the opportunity for others to appropriate that activity for their own purposes. Public opinion polling and market research are just the most mundane examples.
Put in more world-historic terms, the beneficiary of such opportunities is more likely to be Google automatically scooping up all your data than a Marxist organizer lurking outside your workplace ready to make a pitch.
Let me put the point bluntly: If you don’t have a reasonably clear agenda—that is, clear enough to tell whether or not you’re getting closer to realizing your major goals—then you’re doing nothing more than providing data for others to harvest: Use it or lose it!
From about 1850 to 1950, it was common for social theorists to bemoan the ‘herd mentality’ of the masses newly allowed to vote, enter into private contracts, etc. It was thought that such people, unleashed from prior paternalistic structures, would simply end up gravitating to the lowest common denominator. But really, all it meant was that people were biddable. It was the (perhaps reluctant) recognition that people have a dynamic sense of themselves, which leads them to be open-minded about the future.
While anyone wishing to lead people into the future needs to know how people think about their past (be it positive or negative), the bottom line is whether they can be led to somewhere better. Originally demagogues stepped into the breach, often brutally. However, in the 20th century their way was increasingly paved by pollsters and market researchers.
Thus, the demagogue’s idiosyncratic charisma has yielded to a media-friendly personality who can show how people’s lives haven’t been a waste of time while opening up the prospect of a full realization of what they’re already trying to achieve. In this respect, social scientists have sublimated demagoguery into the ready market for politicians that we see today.
My point here is that those social scientists who conduct the polls and surveys will probably know before you do what sort of candidate will appeal to you at an election where people who think like you have reached a ‘critical mass’ to affect the electoral outcome. If no such candidate looms on your personal horizon, then you probably don’t matter—yet. Everyone—including your good self—has a vested interest in how everyone else thinks, regardless of how they go on to use that information.
In terms of my own politics, as I’ve said before, I play a long game. The UK Fabians provide a good model, one most effectively emulated by the Mont Pelerin Society, which incubated for four decades before neo-liberalism became the world’s dominant ideology.
It is quite common in intellectual history for good ideas to be initially promoted by ‘bad people’. I have made it clear what I do and do not support. So while I am on friendly terms with members of, say, Seattle’s Discovery Institute, which promotes intelligent design, I’ve never been invited to become a fellow—nor would I accept an invitation. These lines are pretty clear in my mind, and I’ve drawn them when required to do so.
Of course, my actions have not been rhetorically successful—at least, insofar as some ostensibly intelligent people continue to condemn me simply for associating with people with whom I share an intellectual affinity though not their larger political agenda. But for me, their response simply provides evidence of the default level of hypocrisy in academia, especially when it comes to doing anything that might verge on politics. Two aspects of this phenomenon are worth noting.
First, philosophers always make a big deal about the ‘genetic fallacy,’ namely, that the origins of an idea bear no necessary relationship to its validity. Fair enough. Too bad this principle is enforced in such an obviously self-serving way.
The fallacy itself is normally credited to Ernest Nagel and Morris Cohen’s An Introduction to Logic, a famous US textbook first published in the early 1930s, though it is related to the distinction between the contexts of scientific discovery and justification that Popper and Reichenbach were promoting at the same time. And all of them were ultimately drawing on William Whewell, the 19th century cleric who coined the word ‘scientist’ in English for someone who not only discovered things but also knew the general principles required to justify them to the widest possible audience.
The classic example of the fallacy was the Nazi dismissal of relativity theory on the grounds that its founder and main promoters were Jews. But after the Nazis were defeated, the fallacy was equally committed by those who wished to stop genetics research because most of its leading researchers (regardless of their views on Nazism) were advocates of some form of human eugenics. Luckily that didn’t come to pass.
Unfortunately the current interpretation of the US Constitution with regard to the separation of Church and State enshrines the genetic fallacy in law. In effect, if a theory is shown to be religiously motivated, it cannot be taught as science. And it is certainly true that intelligent design has been—and remains—a religiously motivated view.
But what if as a matter of empirical fact, it is unlikely that we would have modern science, were it not for the global imposition of the anti-authoritarian version of the Christian world-view which Knowledge: The Philosophical Quest in History foregrounds?
This brings up the second point concerning the hypocrisy of academics. What you say in a seminar or a journal should be good enough for the courtroom and the media.
Anyone who has studied the history and philosophy of science in any detail knows that the 17th century Scientific Revolution in Europe was the product of a very particular theological configuration that was pro-Christian yet anti-establishment. They also know that China was by far the wealthiest and most technologically advanced nation at the time—and arguably remained so until the end of the 18th century. Clearly something in Christian theology made the difference between the fates of these two great regions of the world.
Moreover, the theology made a bigger difference as it was detached from its original institutional expression, which is the ‘Enlightenment project’ in a nutshell. Thus, the privileging of humanity remained but without the hyperbolic reminders of our ‘fallen’ character that had been designed to shore up clerical power. And I’m happy to admit that this trajectory very quickly moved in successive waves of secularisation and democratisation from Enlightenment to Imperialism to the Cold War, etc.
I would take my detractors more seriously if they came up with alternative scenarios—‘normatively preferable counterfactuals’, if you will—by which we would have arrived at something at least as attractive as where we find ourselves now. Just to be clear: I’m happy to grant that people today could thrive in worlds that significantly vary from the actual one.
The question is whether starting with, say, Chinese assumptions would ever get us to such a world. If not, then all the China-talk is simply about wanting people to be other than they have demonstrated themselves to be. In that case, a serious (aka coercive) political programme is required to redress the situation—not a lot of metaphysical gas that smells like Spinoza.
‘Curiosity,’ the snake oil of naturalized epistemologies, doesn’t explain the intergenerational persistence of science in the face of its own long-standing unsolved problems and collateral damage to the world.