
Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsasswe@uccs.edu.

Sassower, Raphael. “Post-Truths and Inconvenient Facts.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 47-60.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40g

Can one truly refuse to believe facts?
Image by Oxfam International via Flickr / Creative Commons


If nothing else, Steve Fuller has his finger on the pulse of popular culture and of the academics who engage in its twists and turns. Starting with Brexit and continuing into the Trump-era abyss, “post-truth” was dubbed by the OED as its word of the year in 2016. Fuller has mustered his collected publications to recast the debate over post-truth and frame it within STS in general and his own contributions to social epistemology in particular.

This could have been a public mea culpa of sorts: we, the community of sociologists (and some straggling philosophers and anthropologists and perhaps some poststructuralists) may seem to someone who isn’t reading our critiques carefully to be partially responsible for legitimating the dismissal of empirical data, evidence-based statements, and the means by which scientific claims can be deemed not only credible but true. Instead, we are dazzled by a range of topics (historically anchored) that explain how we got to Brexit and Trump—yet Fuller’s analyses of them don’t ring alarm bells. There is almost a hidden glee that indeed the privileged scientific establishment, insular scientific discourse, and some of its experts who pontificate authoritative consensus claims are all bound to be undone by the rebellion of mavericks and iconoclasts that include intelligent design promoters and neoliberal freedom fighters.

In what follows, I do not intend to summarize the book, as it is short and entertaining enough for anyone to read on their own. Instead, I wish to outline three interrelated points that one might argue need not be argued but, apparently, do: 1) certain critiques of science have contributed to the Trumpist mindset; 2) the politics of Trumpism is too dangerous to be sanguine about; 3) the post-truth condition is troublesome and insidious. Though Fuller deals with some of these issues, I hope to add some constructive clarification to them.

Part One: Critiques of Science

As Theodor Adorno reminds us, critique is essential not only for philosophy, but also for democracy. He is aware that the “critic becomes a divisive influence, with a totalitarian phrase, a subversive” (1998/1963, 283) insofar as the status quo is being challenged and sacred political institutions might have to change. The price of critique, then, can be high, and therefore critique should be managed carefully and only cautiously deployed. Should we refrain from critique, then? Not at all, continues Adorno.

But if you think that a broad, useful distinction can be offered among different critiques, think again: “[In] the division between responsible critique, namely, that practiced by those who bear public responsibility, and irresponsible critique, namely, that practiced by those who cannot be held accountable for the consequences, critique is already neutralized.” (Ibid. 285) Adorno’s worry is not only that one forgets that “the truth content of critique alone should be that authority [that decides if it’s responsible],” but that when such a criterion is “unilaterally invoked,” critique itself can lose its power and be at the service “of those who oppose the critical spirit of a democratic society.” (Ibid)

In a political setting, the charge of irresponsible critique shuts the conversation down and ensures political hegemony without disruptions. Modifying Adorno’s distinction between (politically) responsible and irresponsible critiques, responsible scientific critiques are constructive insofar as they attempt to improve methods of inquiry, data collection and analysis, and contribute to the accumulated knowledge of a community; irresponsible scientific critiques are those whose goal is to undermine the very quest for objective knowledge and the means by which such knowledge can be ascertained. Questions about the legitimacy of scientific authority are related to but not of exclusive importance for these critiques.

Have those of us committed to the critique of science missed the mark of the distinction between responsible and irresponsible critiques? Have we become so subversive and perhaps self-righteous that science itself has been threatened? Though Fuller is primarily concerned with the hegemony of the sociology of science studies and the movement he has championed under the banner of “social epistemology” since the 1980s, he does acknowledge the Popperians and their critique of scientific progress and even admires the Popperian contribution to the scientific enterprise.

But he is reluctant to recognize the contributions of Marxists, poststructuralists, and postmodernists who have been critically engaging the power of science since the 19th century. Among them we find Jean-François Lyotard who, in The Postmodern Condition (1984/1979), follows Marxists and neo-Marxists who have regularly lumped science and scientific discourse together with capitalism and power. This critical trajectory has been well rehearsed, so suffice it to say here that SSK, SE, and the Edinburgh “Strong Programme” are part of a long and rich critical tradition (whose origins are Marxist). Adorno’s Frankfurt School is part of this tradition, and as we think about science, which had come to dominate Western culture by the 20th century (in the place of religion, whose power as the arbiter of truth had by then waned), it was its privileged power and interlocking financial benefits that drew the ire of critics.

Were these critics “responsible” in Adorno’s political sense? Can they be held accountable for offering (scientific and not political) critiques that improve the scientific process of adjudication between criteria of empirical validity and logical consistency? Not always. Did they realize that their success could throw the baby out with the bathwater? Not always. While Fuller grants Karl Popper the upper hand (as compared to Thomas Kuhn) when indirectly addressing such questions, we must keep an eye on Fuller’s “baby.” It’s easy to overlook the slippage from the political to the scientific and vice versa: Popper’s claim that we never know the Truth doesn’t mean that his (and our) quest for discovering the Truth as such is given up; it is only made more difficult, as whatever is scientifically apprehended as truth remains putative.

Limits to Skepticism

What is precious about the baby—science in general, and scientific discourse and its community more particularly—is that it offered safeguards against frivolous skepticism. Robert Merton (1973/1942) famously outlined the four features of the scientific ethos, principles that characterized the ideal workings of the scientific community: universalism, communism (communalism, as per the Cold War terror), disinterestedness, and organized skepticism. It is the last principle that is relevant here, since it unequivocally demands an institutionalized mindset of merely putative acceptance of any hypothesis or theory articulated by any community member, pending critical scrutiny.

One detects the slippery slope that would move one from being on guard when engaging with any proposal to being so skeptical as to never accept any proposal, no matter how well documented or empirically supported. Al Gore, in his An Inconvenient Truth (2006), sounded the alarm about climate change. A dozen years later we are still plagued by climate-change deniers who refuse to look at the evidence, suggesting instead that the standards of science themselves—from the collection of data at the North Pole to computer simulations—have not been sufficiently fulfilled (“questions remain”) to accept human responsibility for the increase in the earth’s temperature. Incidentally, here is Fuller’s explanation of his own apparent doubt about climate change:

Consider someone like myself who was born in the midst of the Cold War. In my lifetime, scientific predictions surrounding global climate change has [sic.] veered from a deep frozen to an overheated version of the apocalypse, based on a combination of improved data, models and, not least, a geopolitical paradigm shift that has come to downplay the likelihood of a total nuclear war. Why, then, should I not expect a significant, if not comparable, alteration of collective scientific judgement in the rest of my lifetime? (86)

Expecting changes in the model does not entail a) that no improved model can be offered; b) that methodological changes in themselves are a bad thing (they might be, rather, improvements); or c) that one should not take action at all based on the current model because in the future the model might change.

The Royal Society of London (1660) set the benchmark of scientific credibility low when it accepted as scientific evidence any report by two independent witnesses. As the years went by, testability (“confirmation” for the Vienna Circle, “falsification” for Popper) and repeatability were added as requirements for a report to be considered scientific, and by now various other conditions have been proposed. Skepticism, organized or personal, remains at the very heart of the scientific march towards certainty (or at least high probability), but when used perniciously, it has derailed reasonable attempts to use science as a means by which to protect, for example, public health.

Both Michael Bowker (2003) and Robert Proctor (1995) chronicle cases where asbestos and cigarette lobbyists and lawyers alike were able to sow enough doubt, in the name of attenuated scientific data collection, to ward off regulators, legislators, and the courts for decades. Instead of finding sufficient empirical evidence to attribute the failing health (and death) of workers and consumers to asbestos and nicotine, “organized skepticism” was weaponized to fight the sick and protect the interests of large corporations and their insurers.

Instead of buttressing scientific claims (that have passed the tests—in refereed professional conferences and publications, for example—of most institutional scientific skeptics), organized skepticism has been manipulated to ensure that no claim is ever scientific enough or has the legitimacy of the scientific community. In other words, what should have remained the reasonable cautionary tale of a disinterested and communal activity (that could then be deemed universally credible) has turned into a circus of fire-breathing clowns ready to burn down the tent. The public remains confused, not realizing that just because the stakes have risen over the decades does not mean there are no standards that can ever be met. Despite lobbyists’ and lawyers’ best efforts at derailment, courts have eventually found cigarette companies and asbestos manufacturers guilty of exposing workers and consumers to deadly hazards.

Limits to Belief

If we add to this logic of doubt, which has been responsible for discrediting science and the conditions for proposing credible claims, a bit of U.S. cultural history, we may enjoy a more comprehensive picture of the unintended consequences of certain critiques of science. Citing Kurt Andersen (2017), Robert Darnton suggests that the Enlightenment’s “rational individualism interacted with the older Puritan faith in the individual’s inner knowledge of the ways of Providence, and the result was a peculiarly American conviction about everyone’s unmediated access to reality, whether in the natural world or the spiritual world. If we believe it, it must be true.” (2018, 68)

This way of thinking—unmediated experiences and beliefs, unconfirmed observations, and disregard of others’ experiences and beliefs—continues what Richard Hofstadter (1962) dubbed “anti-intellectualism.” For Americans, this predates the republic and is characterized by a hostility towards the life of the mind (admittedly, at the time, religious texts), critical thinking (self-reflection and the rules of logic), and even literacy. The heart (our emotions) can more honestly lead us to the Promised Land, whether it is heaven on earth in the Americas or the Christian afterlife; any textual interference or reflective pondering is necessarily an impediment, one to be suspicious of and avoided.

This lethal combination of the life of the heart and righteous individualism brings about general ignorance and what psychologists call “confirmation bias” (the tendency to endorse what we already believe to be true regardless of countervailing evidence). The critique of science, along this trajectory, can be but one of many so-called critiques of anything said or proven by anyone whose ideology we do not endorse. But is this even critique?

Adorno would find this a charade, a pretense that poses as critique but is in reality a simple dismissal without intellectual engagement, a dogmatic refusal to listen and observe. He would definitely be horrified by Stephen Colbert’s oft-quoted quip on “truthiness” as “the conviction that what you feel to be true must be true.” Even those who resurrect Daniel Patrick Moynihan’s phrase, “You are entitled to your own opinion, but not to your own facts,” quietly admit that his admonishment is ignored by media outlets that are more popular than informed.

On Responsible Critique

But surely there is merit to responsible critiques of science. Weren’t many of these critiques meant to dethrone the unparalleled authority claimed in the name of science, as Fuller admits all along? Wasn’t Lyotard (and Marx before him), for example, correct in pointing out the conflation of power and money in the scientific vortex that could legitimate whatever profit-maximizers desire? In other words, should scientific discourse be put on par with other discourses? Whose credibility ought to be challenged, and whose truth claims deserve scrutiny? Can we privilege or distinguish science if it is true, as Monya Baker has reported, that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments” (2016, 1)?

Fuller remains silent about these important and responsible questions about the problematics (methodologically and financially) of reproducing scientific experiments. Baker’s report cites Nature’s survey of 1,576 researchers and reveals “sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Ibid.) So, if science relies on reproducibility as a cornerstone of its legitimacy (and superiority over other discourses), and if the results are so dismal, should it not be discredited?

One answer, given by Hans E. Plesser, suggests that there is a confusion between the notions of repeatability (“same team, same experimental setup”), replicability (“different team, same experimental setup”), and reproducibility (“different team, different experimental setup”). If understood in these terms, it stands to reason that one may not get the same results all the time and that this fact alone does not discredit the scientific enterprise as a whole. Nuanced distinctions take us down a scientific rabbit-hole most post-truth advocates refuse to follow. These nuances are lost on a public that demands to know the “bottom line” in brief sound bites: Is science scientific enough, or is it bunk? When can we trust it?

Trump excels at this kind of rhetorical device: repeat a falsehood often enough and people will believe it; and because individual critical faculties are not a prerequisite for citizenship, post-truth means no truth, or whatever the president says is true. Adorno’s distinction of the responsible from the irresponsible political critics comes into play here; but he innocently failed to anticipate the Trumpian move to conflate the political and scientific and pretend as if there is no distinction—methodologically and institutionally—between political and scientific discourses.

With this cultural backdrop, many critiques of science have undermined its authority and thereby lent credence to any dismissal of science (legitimately by insiders and perhaps illegitimately at times by outsiders). Sociologists and postmodernists alike forgot to put warning signs on their academic and intellectual texts: Beware of hasty generalizations! Watch out for wolves in sheep’s clothing! Don’t throw the baby out with the bathwater!

One would think such advisories unnecessary. Yet without such safeguards, internal disputes and critical investigations appear to have unintentionally discredited the entire scientific enterprise in the eyes of post-truth promoters, the Trumpists whose neoliberal spectacles filter in dollar signs and filter out pollution on the horizon. The discrediting of science has become a welcome distraction that opens the way to a radical free-market mentality, spanning from the exploitation of free speech to resource extraction to the debasement of political institutions, from courts of law to unfettered globalization. In this sense, internal (responsible) critiques of the scientific community and its internal politics, for example, unfortunately license external (irresponsible) critiques of science, the kind that obscure the original intent of responsible critiques. Post-truth claims at the behest of corporate interests sanction a free-for-all where the concentrated power of the few silences the concerns of the many.

Indigenous-allied protestors block the entrance to an oil facility related to the Kinder-Morgan oil pipeline in Alberta.
Image by Peg Hunter via Flickr / Creative Commons


Part Two: The Politics of Post-Truth

Fuller begins his book about the post-truth condition that permeates the British and American landscapes with a look at our ancient Greek predecessors. According to him, “Philosophers claim to be seekers of the truth but the matter is not quite so straightforward. Another way to see philosophers is as the ultimate experts in a post-truth world” (19). This means that those historically entrusted to be the guardians of truth in fact “see ‘truth’ for what it is: the name of a brand ever in need of a product which everyone is compelled to buy. This helps to explain why philosophers are most confident appealing to ‘The Truth’ when they are trying to persuade non-philosophers, be they in courtrooms or classrooms.” (Ibid.)

Instead of being the seekers of the truth, thinkers who care not about what but how we think, philosophers are ridiculed by Fuller (himself a philosopher turned sociologist turned popularizer and public relations expert) as marketing hacks in a public relations company that promotes brands. Their serious dedication to finding the criteria by which truth is ascertained is used against them: “[I]t is not simply that philosophers disagree on which propositions are ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’.” (Ibid.)

Some would argue that the criteria by which propositions are judged to be true or false are worthy of debate, rather than the cavalier dismissal of Trumpists. With criteria in place (even if only by convention), at least we know what we are arguing about, as these criteria (even if contested) offer a starting point for critical scrutiny. And this, I maintain, is a task worth performing, especially in the age of pluralism when multiple perspectives constitute our public stage.

In addition to debasing philosophers, it seems that Fuller reserves a special place in purgatory for Socrates (and Plato) for labeling the rhetorical expertise of the sophists—“the local post-truth merchants in fourth century BC Athens”—negatively. (21) It becomes obvious that Fuller is “on their side” and that the presumed debate over truth and its practices is in fact nothing but “whether its access should be free or restricted.” (Ibid.) In this neoliberal reading, it is all about money: are sophists evil because they charge for their expertise? Is Socrates a martyr and saint because he refused payment for his teaching?

Fuller admits, “Indeed, I would have us see both Plato and the Sophists as post-truth merchants, concerned more with the mix of chance and skill in the construction of truth than with the truth as such.” (Ibid.) One wonders not only if Plato receives fair treatment (reminiscent of Popper’s denigration of Plato as supporting totalitarian regimes, while sparing Socrates as a promoter of democracy), but whether calling all parties to a dispute “post-truth merchants” obliterates relevant differences. In other words, have we indeed lost the desire to find the truth, even if it can never be the whole truth and nothing but the truth?

Political Indifference to Truth

One wonders how far this goes: political discourse without any claim to truth conditions would become nothing but a marketing campaign where money and power dictate the acceptance of the message. Perhaps the intended message here is that contemporary cynicism towards political discourse has its roots in ancient Greece. Regardless, one should worry that such cynicism indirectly sanctions fascism.

Can the poor and marginalized in our society afford this kind of cynicism? For them, unlike their privileged counterparts in the political arena, claims about discrimination and exploitation, about unfair treatment and barriers to voting are true and evidence based; they are not rhetorical flourishes by clever interlocutors.

Yet Fuller would have none of this. For him, political disputes are games:

[B]oth the Sophists and Plato saw politics as a game, which is to say, a field of play involving some measure of both chance and skill. However, the Sophists saw politics primarily as a game of chance whereas Plato saw it as a game of skill. Thus, the sophistically trained client deploys skill in [the] aid of maximizing chance occurrences, which may then be converted into opportunities, while the philosopher-king uses much the same skills to minimize or counteract the workings of chance. (23)

Fuller could be channeling here twentieth-century game theory and its application in the political arena, or the notion offered by Lyotard when describing the minimal contribution we can make to scientific knowledge (where we cannot change the rules of the game but perhaps find a novel “move” to make). Indeed, if politics is deemed a game of chance, then anything goes, and it really should not matter if an incompetent candidate like Trump ends up winning the American presidency.

But is it really a question of skill and chance? Or, as some political philosophers would argue, is it not a question of the best means by which to bring to fruition the best results for the general wellbeing of a community? The point of suggesting the figure of a philosopher-king, to be sure, was not his rhetorical skill in this connection, but instead his deep commitment to rule justly, to think critically about policies, and to treat constituents with respect and fairness. Plato’s Republic, however criticized, was supposed to be about justice, not about expediency; it is an exploration of the rule of law and wisdom, not a manual about manipulation. If the recent presidential election in the US taught us anything, it’s that we should be wary of political gamesmanship and focus on experience and knowledge, vision and wisdom.

Out-Gaming Expertise Itself

Fuller would have none of this, either. It seems that there is virtue in being a “post-truther,” someone who can easily switch between knowledge games, unlike the “truther” whose aim is to “strengthen the distinction by making it harder to switch between knowledge games.” (34) In the post-truth realm, then, knowledge claims are lumped into games that can be played at will, that can be substituted when convenient, without a hint of the danger such capricious game-switching might engender.

It’s one thing to challenge a scientific hypothesis about astronomy because the evidence is still unclear (as Stephen Hawking has done in regard to black holes) and quite another to compare it to astrology (and give horoscope and Tarot card readers an equal hearing with physicists). Though we are far from the Demarcation Problem (between science and pseudo-science) of the last century, this does not mean that there is no difference at all between different discourses and their empirical bases (or that the problem itself isn’t worthy of reconsideration in the age of Fuller and Trump).

On the contrary, it’s because we assume difference between discourses (gray as they may be) that we can move on to figure out on what basis our claims can and should rest. The danger, as we see in the political logic of the Trump administration, is that friends become foes (European Union) and foes are admired (North Korea and Russia). Game-switching in this context can lead to a nuclear war.

In Fuller’s hands, though, something else is at work. Speaking of contemporary political circumstances in the UK and the US, he says: “After all, the people who tend to be demonized as ‘post-truth’ – from Brexiteers to Trumpists – have largely managed to outflank the experts at their own game, even if they have yet to succeed in dominating the entire field of play.” (39) Fuller’s celebratory tone here may signal either a slight warning, in his use of “yet” before the success “in dominating the entire field of play,” or a prediction that this is indeed what is about to happen soon enough.

The neoliberal bottom-line surfaces in this assessment: he who wins must be right, the rich must be smart, and more perniciously, the appeal to truth is beside the point. More specifically, Fuller continues:

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins. They are talking about worlds that could have been and still could be—the stuff of modal power. (Ibid.)

By now one should be terrified. This is a strong endorsement of lying as a matter of course, as a way to distract from the details (and empirical bases) of one “knowledge game”—because it may not be to one’s ideological liking—in favor of another that might be deemed more suitable (for financial or other purposes).

The political stakes here are too high to ignore, especially because there are good reasons why “certain parties are advantaged over others” (say, climate scientists “relative to” climate deniers who have no scientific background or expertise). One wonders what it means to talk about “alternative facts” and “alternative science” in this context: is it a means of obfuscation? Is it yet another license granted by the “social constructivist position” not to acknowledge the legal liability of cigarette companies for the addictive power of nicotine? Or the pollution of water sources in Flint, Michigan?

What Is the Mark of an Open Society?

If we corral the broader political logic at hand to the governance of the scientific community, as Fuller wishes us to do, then we hear the following:

In the past, under the inspiration of Karl Popper, I have argued that fundamental to the governance of science as an ‘open society’ is the right to be wrong (Fuller 2000a: chap. 1). This is an extension of the classical republican ideal that one is truly free to speak their mind only if they can speak with impunity. In the Athenian and the Roman republics, this was made possible by the speakers–that is, the citizens–possessing independent means which allowed them to continue with their private lives even if they are voted down in a public meeting. The underlying intuition of this social arrangement, which is the epistemological basis of Mill’s On Liberty, is that people who are free to speak their minds as individuals are most likely to reach the truth collectively. The entangled histories of politics, economics and knowledge reveal the difficulties in trying to implement this ideal. Nevertheless, in a post-truth world, this general line of thought is not merely endorsed but intensified. (109)

To be clear, Fuller not only asks for the “right to be wrong,” but also for the legitimacy of the claim that “people who are free to speak their minds as individuals are most likely to reach the truth collectively.” The first plea is reasonable enough, as humans are fallible (yes, Popper here), and the history of ideas has proven that killing heretics is counterproductive (and immoral). If the Brexit/Trump post-truth age would only usher in greater encouragement of speculation or conjecture (Popper again), then Fuller’s book would be well placed in the pantheon of intellectual pluralism; but if this endorsement obliterates the distinction between the silly and the informed conjecture, then we are in trouble, and the ensuing cacophony will turn us all deaf.

The second claim is at best supported by the likes of James Surowiecki (2004) who has argued that no matter how uninformed a crowd of people is, collectively it can guess the correct weight of a cow on stage (his TED talk). As folk wisdom, this is charming; as public policy, this is dangerous. Would you like a random group of people deciding how to store nuclear waste, and where? Would you subject yourself to the judgment of just any collection of people to decide on taking out your appendix or performing triple-bypass surgery?

When we turn to Trump, his supporters certainly like that he speaks his mind, just as Fuller says individuals should be granted the right to speak their minds (even if in error). But speaking one’s mind can also be a proxy for saying whatever, without filters, without critical thinking, or without thinking at all (let alone consulting experts whose very existence seems to upset Fuller). Since when did “speaking your mind” turn into scientific discourse? It’s one thing to encourage dissent and offer reasoned doubt and explore second opinions (as health care professionals and insurers expect), but it’s quite another to share your feelings and demand that they count as scientific authority.

Finally, even if we endorse the view that we “collectively” reach the truth, should we not ask: by what criteria? according to what procedure? under what guidelines? Herd mentality, as Nietzsche already warned us, is problematic at best and immoral at worst. Trump rallies harken back to the fascist ones we recall from Europe prior to and during WWII. Few today would entrust the collective judgment of those enthusiasts of the Thirties to carry the day.

Unlike Fuller’s sanguine posture, I shudder at the possibility that “in a post-truth world, this general line of thought is not merely endorsed but intensified.” This is neither because I worship experts and scorn folk knowledge nor because I have low regard for individuals and their (potentially informative) opinions. Just as we warn our students that simply having an opinion is not enough, that they need to substantiate it, offer data or logical evidence for it, and even know its origins and who promoted it before they made it their own, so I worry about uninformed (even if well-meaning) individuals (and presidents) whose gut will dictate public policy.

This way of unreasonably empowering individuals is dangerous for their own well-being (no paternalism here, just common sense) as well as for the community at large (too many untrained cooks will definitely spoil the broth). For those who doubt my concern, Trump offers ample evidence: trade wars with allies and foes that cost domestic jobs (while promising to bring jobs home), nuclear-war threats that resemble a game of chicken (as if no president before him ever faced such an option), and the throwing of public policy procedures into disarray, from immigration regulations to the relaxation of emission controls (ignoring the history of these policies and their failures).

Drought and suffering in Arbajahan, Kenya in 2006.
Photo by Brendan Cox and Oxfam International via Flickr / Creative Commons

Part Three: Post-Truth Revisited

There is something appealing, even seductive, in the provocation to doubt the truth as rendered by the (scientific) establishment, even as we worry about sowing the seeds of falsehood in the political domain. The history of science is the story of authoritative theories debunked, cherished ideas proven wrong, and claims of certainty falsified. Why not, then, jump on the “post-truth” wagon? Would we not unleash the collective imagination to improve our knowledge and the future of humanity?

One of the lessons of postmodernism (at least as told by Lyotard) is that “post-“ does not mean “after,” but rather, “concurrently,” as another way of thinking all along: just because something is labeled “post-“, as in the case of postsecularism, it doesn’t mean that one way of thinking or practicing has replaced another; it has only displaced it, and both alternatives are still there in broad daylight. Under the rubric of postsecularism, for example, we find religious practices thriving (80% of Americans believe in God, according to a 2018 Pew Research survey), while the number of unaffiliated, atheists, and agnostics is on the rise. Religionists and secularists live side by side, as they always have, more or less agonistically.

In the case of "post-truth," it seems that one must choose one orientation or the other, at least for Fuller, who claims to prefer the "post-truth world" to the allegedly hierarchical and submissive world of "truth," where the dominant establishment shoves its truths down the throats of ignorant and repressed individuals. If post-truth meant, like postsecularism, the realization that truth and provisional or putative truth coexist and are continuously being re-examined, then no conflict would be at play. If Trump's claims were juxtaposed to those of experts in their respective domains, we would have a lively, and hopefully intelligent, debate. False claims would be debunked, reasonable doubts could be raised, and legitimate concerns might be addressed. But Trump doesn't consult anyone except his (post-truth) gut, and that is troublesome.

A Problematic Science and Technology Studies

Fuller admits that "STS can be fairly credited with having both routinized in its own research practice and set loose on the general public – if not outright invented – at least four common post-truth tropes":

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties. (43)

In that sense, then, Fuller agrees that the positive lessons STS wished for the practice of the scientific community may have inadvertently found their way into a post-truth world that may abuse or exploit them in unintended ways. That is, a notion like "consensus" is challenged by STS because the scientific community pretends to reach it while knowing that no such thing can ever be fully achieved, and that when it is reached it may have been reached for the wrong reasons (leadership pressure, pharmaceutical funding of conferences and journals). But this can also go too far.

Just because consensus (which does not mean unanimity) is difficult to reach and is susceptible to corruption or bias doesn't mean that anything goes. Some experimental results are more acceptable than others, some data are more informative than others, and the struggle for agreement may take its political toll on the scientific community; but none of this need result in silly ideas, such as that cigarettes are good for our health or that obesity should be encouraged from early childhood.

It seems important to focus on Fuller's conclusion because it encapsulates my concern with his version of post-truth, a condition he endorses not only as the epistemological plight of humanity but as an elixir with which to cure humanity's ills:

While some have decried recent post-truth campaigns that resulted in victory for Brexit and Trump as ‘anti-intellectual’ populism, they are better seen as the growth pains of a maturing democratic intelligence, to which the experts will need to adjust over time. Emphasis in this book has been given to the prospect that the lines of intellectual descent that have characterised disciplinary knowledge formation in the academy might come to be seen as the last stand of a political economy based on rent-seeking. (130)

Here, we are not only afforded a moralizing sermon about (and, it must be said, from) the privileged academic position, from whose heights all other positions are dismissed as anti-intellectual populism, but we are also entreated to consider the rantings of the know-nothings of the post-truth world as the "growth pains of a maturing democratic intelligence." Only an apologist would characterize the Trump administration as mature, democratic, or intelligent. Where's the evidence? What would possibly warrant such generosity?

It’s one thing to challenge “disciplinary knowledge formation” within the academy, and there are no doubt cases deserving reconsideration as to the conditions under which experts should be paid and by whom (“rent-seeking”); but how can these questions about higher education and the troubled relations between the university system and the state (and with the military-industrial complex) give cover to the Trump administration? Here is Fuller’s justification:

One need not pronounce on the specific fates of, say, Brexit or Trump to see that the post-truth condition is here to stay. The post-truth disrespect for established authority is ultimately offset by its conceptual openness to previously ignored people and their ideas. They are encouraged to come to the fore and prove themselves on this expanded field of play. (Ibid)

This, too, is a logical stretch: is disrespect for the authority of the establishment the same as, or does it logically lead to, the “conceptual” openness to previously “ignored people and their ideas”? This is not a claim on behalf of the disenfranchised. Perhaps their ideas were simply bad or outright racist or misogynist (as we see with Trump). Perhaps they were ignored because there was hope that they would change for the better, become more enlightened, not act on their white supremacist prejudices. Should we have “encouraged” explicit anti-Semitism while we were at it?

Limits to Tolerance

We tolerate ignorance because we believe in education and hope to overcome some of it; we tolerate falsehood in the name of eventual correction. But we should never tolerate offensive ideas and beliefs that are harmful to others. Once again, it is one thing to argue about black holes, and quite another to argue about whether black lives matter. It seems reasonable, as Fuller concludes, to say that “In a post-truth utopia, both truth and error are democratised.” It is also reasonable to say that “You will neither be allowed to rest on your laurels nor rest in peace. You will always be forced to have another chance.”

But the conclusion that “Perhaps this is why some people still prefer to play the game of truth, no matter who sets the rules” (130) does not follow. Those who “play the game of truth” are always vigilant about falsehoods and post-truth claims, and to say that they are simply dupes of those in power is both incorrect and dismissive. On the contrary: Socrates was searching for the truth and fought with the sophists, as Popper fought with the logical positivists and the Kuhnians, and as scientists today are searching for the truth and continue to fight superstitions and debunked pseudoscience about vaccination causing autism in young kids.

If post-truth is like postsecularism, scientific and political discourses can inform each other. When the power plays of an ignorant leader like Trump are obvious, they can shed light on less obvious cases, such as the leaders of big pharma or those in charge of the EPA today. In these contexts, inconvenient facts and truths should prevail, and the gamesmanship of post-truthers should be exposed for what motivates it.

Contact details: rsassowe@uccs.edu

* Special thanks to Dr. Denise Davis of Brown University, whose contribution to my critical thinking about this topic has been profound.

References

Theodor W. Adorno (1998/1963), Critical Models: Interventions and Catchwords. Translated by Henry W. Pickford. New York: Columbia University Press

Kurt Andersen (2017), Fantasyland: How America Went Haywire: A 500-Year History. New York: Random House.

Monya Baker, “1,500 scientists lift the lid on reproducibility,” Nature Vol. 533, Issue 7604, 5/26/16 (corrected 7/28/16)

Michael Bowker (2003), Fatal Deception: The Untold Story of Asbestos. New York: Rodale.

Robert Darnton, "The Greatest Show on Earth," New York Review of Books Vol. LXV, No. 11, 6/28/18, pp. 68-72.

Al Gore (2006), An Inconvenient Truth: The Planetary Emergency of Global Warming and What Can Be Done About It. New York: Rodale.

Richard Hofstadter (1962), Anti-Intellectualism in American Life. New York: Vintage Books.

Jean-François Lyotard (1984), The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press.

Robert K. Merton (1973/1942), “The Normative Structure of Science,” The Sociology of Science: Theoretical and Empirical Investigations. Chicago and London: The University of Chicago Press, pp. 267-278.

Hans E. Plesser, “Reproducibility vs. Replicability: A Brief History of Confused Terminology,” Frontiers in Neuroinformatics, 2017; 11: 76; online: 1/18/18.

Robert N. Proctor (1995), Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer. New York: Basic Books.

James Surowiecki (2004), The Wisdom of Crowds. New York: Anchor Books.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called 'climate-gate' controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2017, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to 'well-lead' (the benevolent counterpart of mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing over complications, pretending to more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of 'well-leading' speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2017, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005: 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds), involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than those monitored only by their results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted 'good science' in the conduct of clinical trials. In this engagement they 'were not rejecting medical science,' but were rather "denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency" (Epstein 1996: 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort: a group with the skills and resources to do this kind of monitoring, which developed competence while its interests remained closely aligned with those of the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to engage more deeply with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is that, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren't directly judging the claims of climate scientists, and they're not even judging the functioning of scientific institutions; they're simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, rooted in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from 'scrutiny' to 'better behaviour.' But he responds by asserting a model of 'well-leading' speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review, 111, no. 2 (2017) pp. 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2017) 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. "The Wrong Kind of Transparency." The American Economic Review 95, no. 3 (2005), 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author Information: Jeroen Van Bouwel, Ghent University, Belgium, jeroen.vanbouwel@ugent.be; Michiel Van Oudheusden, SCK-CEN Belgian Nuclear Research Centre and University of Leuven, Belgium.

Van Bouwel, Jeroen and Michiel Van Oudheusden. "Beyond Consensus? A Reply to Alan Irwin." Social Epistemology Review and Reply Collective 6, no. 10 (2017): 48-53.

The pdf of the article includes specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Pq

Image from Alex Brown via Flickr

We are grateful to Alan Irwin for his constructive response, “Agreeing to Differ?” to our paper and, notwithstanding differences between his view and ours, we agree with many of his comments. In this short rejoinder, we zoom in on the three main issues Irwin raises. We also use this opportunity to highlight and further develop some of our ideas.

The three issues Irwin brings up are:

How to understand consensus? Rather than, or along with, a thin ‘Anglo-Saxon’ sense of consensus as mutual agreement, one could adopt a thick conception of consensus, implying “faith in the common good and commitment to building a shared culture” (Irwin 2017). The thick sense (as enacted in Danish culture) suggests that disagreement is an integral part of consensus. Therefore, we would do well to pay more attention to conflict handling and disagreement within consensus-oriented discourse.

Why are so many public participation activities consensus-driven? We should question the institutional and political contexts within which consensus-seeking arises and how these contexts urge us to turn away from conflict and disagreement. And, why do public participation activities persist at all, given all the criticism they receive from various sides?

Should we not value the art of closure, of finding ways to make agreements, particularly in view of the dire state of world politics today?

These are legitimate questions and concerns, and Irwin is right to point them out. However, we believe some of the concepts discussed in our paper are helpful in addressing them. Let us start with the first issue Irwin raises, which we will link to the concept of meta-consensus.

Meta-Consensus

It is indeed helpful to draw a distinction between the thinner Anglo-Saxon sense of consensus and the thicker sense of consensus as faith in the common good, as Irwin suggests. In the latter sense, disagreement and dissensus can be seen as part of the consensus. We fully agree with Irwin that consensus and dissensus should be thought together rather than presented only in terms of contradiction and opposition. This is why we analytically distinguish a (simple) consensus from a meta-consensus.

As we sketch in our article, at the simple level, we might encounter disagreement and value pluralism, whereas at the meta-level, the meta-consensus provides a common ground for participation by explicitly or implicitly laying out the rules of engagement, the collective ways to handle conflict, and how to close or disclose discussion. The meta-consensus also bears on which issues are opened to discussion, who may or may not participate, the stopping rules, the structure of interaction, and the rationales and procedures that guide participation in general.

We have sought to put this meta-consensus center stage by comparing and contrasting how it is enacted in, or through, two participation cases (participatory TA and the NIH consensus conference). In this way, we seek to give due attention to the common ground that enables and constrains consensus and dissensus formation, and to different institutional designs impinging on participation, without insisting on the necessity of a simple consensus or the need for closure.

Drawing attention to the meta-consensus that governs participation may help to facilitate more reflexive modes of engagement that can be opened to joint discussion rather than imposed on participants. It should also help participants to better understand when and why they are in disagreement and determine courses of action when this is the case. As such, it may contribute to “building a shared culture” by facilitating and establishing shared adherence to the principles of inclusion, mutual listening, and respect (cf. Horst and Irwin 2010; Irwin 2017). However, we believe it is equally important to emphasize that there is always the possibility of dissensus, irreconcilable difference, and further conflict.

As we see it, entertaining this possibility is an important prerequisite or condition for genuine participation, as it creates an open and contested space in which participants can think, and engage, as adversaries. Thus, we concur with Irwin that consensus and dissensus both have a place in public participatory exercises (and in the public sphere more generally). However, when we are faced with a choice between them (as with fundamental disputes, such as those over abortion or human enhancement), we must carefully consider how, whether, and why we seek (dis)agreement. This is not to argue against consensus-seeking, but to insist on the importance of constructing and sustaining an agonistic, contestable order within participation.

Different Democratic Models of Participation

Irwin appropriately proposes to reflect more on the institutional and political contexts in which participation is organized. The questions of why we aim for consensus in public participation activities and of why such activities persist at all do indeed deserve more attention. We have not addressed these questions in our paper, but we do think being more explicit about the aims of participation is an integral part of the approach that we are advocating. In order to discuss and choose among the different democratic models of participation (aggregative, deliberative, participatory, and agonistic), it is imperative that we understand their political, economic, and social purposes or roles and make these explicit.

Similarly, we may ask how the models serve different aims within specific institutional and political contexts. Here, the notion of political culture springs to mind, as in our region (Flanders) and country (Belgium), conflicts and divisions between groups are often managed through social concertation between trade unions, employers’ organizations, and governments. This collective bargaining approach both challenges and complements more participatory modes of decision making (Van Oudheusden et al. 2015). As mentioned earlier, we do not consider this issue in our paper but it is well worthy of further reflection and consideration.

Irwin also wonders whether policy makers might think our concepts and models of participation miss the point as many of them see it. It is an interesting question (we wonder whether Irwin has any particular cases in mind), but one thing we can do is to insist that there is no one-size-fits-all approach to participation. Different options are available, as each participation model has strengths and weaknesses. It seems important to us to attend to these strengths and weaknesses, as the models designate roles and responsibilities (e.g. by specifying who is included in participation and how), foresee how the collective should interact, and indicate what kinds of results may ensue from participatory practice. By juxtaposing them, we get a better picture of how problems, contexts, and challenges are framed and handled differently within each participatory setting. As making trade-offs between approaches is at the heart of policymaking, we invite policy makers (and decision makers more broadly) to explore these settings with us, and carefully consider how they embed multiple social and techno-scientific values and orientations.

Disclosure

As Irwin rightly notes in his reply, we do not propose one final alternative to existing practice but entertain the possibility of mobilizing more than one model of democracy in participation. This implies that we also allow for a consensual approach when it is warranted. However, in developing ideals that contrast with consensus, we open onto disclosure and a more agonistic appraisal of participation, thereby abandoning the ideal, and appeal, of final closure. In response to this move, Irwin wonders whether we should not value the art of closure, especially in these times. While we agree on the dire state of world politics, we are not convinced that replacing closure with disclosure would aggravate the present situation. Perhaps the contrary is true. What if the quest for consensus brought us to this situation in the first place?

As the political theorist Chantal Mouffe argues, in a world of consensual politics (also characterized as neoliberal, de-politicized or post-political, or in Mouffe’s words as a “politics of the center”), many voters turn to populists to voice their dissatisfaction (Mouffe 2005: 228). Populists build on this dissatisfaction, publicly presenting themselves as the only real alternative to the status quo. Thus, consensual politics contributes to hardening the opposition between those who are in (the establishment) and those who are out (the outsiders). In this antagonistic relation, the insiders carry the blame for the present state of affairs.

This tension is exacerbated through the blurring of the boundaries between the political left and right, as conflicts can no longer be expressed through the traditional democratic channels hitherto provided by party politics. Thus, well-intentioned attempts by “Third Way” thinkers, among others, to transcend left/right oppositions eventually give rise to antagonism, with populists (and other outsiders) denouncing the search for common ground. Instead, these outsiders seek to conquer more ground, to annex or colonize, typically at the expense of others.

Whether one agrees with Mouffe’s analysis of recent political developments or not, it is instructive to consider her vision of radical, agonistic (rather than antagonistic) politics. Contrary to antagonists, agonistic pluralists do seek some form of common ground, albeit a contested one that is negotiated politically. In this way, agonists “domesticate” antagonism, so that opposing parties confront each other as adversaries who respect the right to differ, rather than as enemies who seek to obliterate one another. Thus, an agonistic democracy enables a confrontation among adversaries – for instance, among liberal-conservative, social-democratic, neo-liberal, and radical-democratic factions. A common ground (or meta-consensus) is established between these adversaries through party identification around clearly differentiated positions and by offering citizens a choice between political alternatives.

To reiterate, antagonistic democracy is characterized by the lack of a shared contested symbolic space (in other words, a meta-consensus) and the lack of agonistic channels through which grievances can be legitimately expressed. This lack emerges when there is too much consensus and consensus-seeking, as is arguably now the case in many (but not all) Western democracies. We therefore need to be explicit about the many aspects and different possible democratic models of participation. Rather than emphasize the need for more consensus and for closure, we would do well to engage with the notions of dissensus and disclosure.

This, to our mind, seems a more fruitful avenue for sorting out various political problems in the long run than clinging to the ideal of consensus and consensus-seeking. Disclosure keeps the channels open. It is a form of opening joint discussion on the various models of participation, not with the aim of inciting endless debate but of making the most of them by reflectively probing their strengths and weaknesses in specific situations and contexts. Rather than aiming for closure beyond plurality, it urges us to articulate what is at stake, for whom, and why, and what types of learning emerge in and through participation. It should also increase our understanding of what “game” – participatory model – we are enacting.

Beyond Consensus?

At the end of his response, Irwin raises the very pertinent question as to whether we need more disclosure now around climate change. For Irwin, “certain consensual ideals seem more important” (Irwin 2017). There are many aspects to Irwin’s big question, but let us pick out a couple and start sketching an answer.

First, calling for a consensual approach or a consensus regarding climate change risks backfiring. A demand for consensus in science may lead to more doubt mongering (cf. Oreskes & Conway 2010), not so much because of disagreement among scientists, but due to external pressures from various lobby or pressure groups that gain from manufacturing controversy (e.g. industry players and environmental NGOs). A lack of scientific consensus (within a framework that emphasizes the importance of achieving a scientific consensus) might, in turn, be used by politicians to undercut or criticize science or policies based on scientific evidence and consensus. Even the slightest doubt about a claimed consensus may erode public trust in climate science and scientists, as was the case in 2009 with Climategate.

Second, the demand for consensus in science might also set too high expectations for scientists (neglecting constraints on all sides, such as lack of time, scientific pluralism, and so on) and suggest that dissent in science is a marker of science failing to deliver.

Third, granting too much importance to scientific consensus risks silencing legitimate dissent (e.g. controversial alternative theories), whereas dissent and controversies also drive science and innovation. (There are, as we all know, many important scientific discoveries, paradigms, and theories that were for a long time ignored or suppressed because they went against the prevailing consensus.)

We are thus led to say that seeking a consensus on climate change does not necessarily result in effective policies and policymaking. Taking to heart Irwin’s plea “to imagine the kinds of closure which might be fruitfully established,” we think it is important to ask if closure here necessarily unfolds with consensus seeking, and if so, how consensus is best understood. Finding ways to break the antagonism invoked by a (depoliticized) scientific consensus on climate change may ultimately be more fruitful in forging long-term, durable solutions among particular groups of actors, something that might be done by publicly disclosing the divergent agendas, stakes, and power mechanisms at play in “climate change.” (A scientific consensus does not tell us what to do about climate change anyway.)

Seen in this way, and again drawing on Mouffe, an agonistic constellation might have to be put in place, where disclosure challenges, or even breaks, the sterile opposition between outsiders and insiders. This is because disclosure requires that insiders clearly distinguish and differentiate their policies from one another, which urges them to develop real alternatives to existing problems. Ideally, these alternatives would embed a diversity of values around climate change and engender solutions that make use of the best available science without threatening a group’s core values (cf. Bolsen, Druckman & Cook 2015).

To give some quick examples, a first alternative could center on reducing emissions of carbon dioxide and other greenhouse gases by adjusting consumption patterns; a second could insist on private, enterprise-driven geo-engineering to mitigate global warming and its effects (e.g. technology to deflect heat away from the earth’s surface); a third on making money from carbon through emissions-trading systems; a fourth on moving to Mars; and so on.

Whichever political options are decided on, we again emphasize the importance of questioning the rationales and processes of consensus-seeking, which to our mind, are too often taken for granted. Creating a more agonistic setting might change the current stalemate around climate change (and related wicked problems), by re-imagining the relationships between insider and outsider groups, by insisting that different alternatives are articulated and heard, and by publicly disclosing the divergent agendas, stakes, and power mechanisms in the construction of problems and their solutions.

Conclusion

Thanks in large part to Alan Irwin’s thoughtful and carefully written response to our article, we are led to reflect on, and develop, the concepts of meta-consensus, disclosure, and democratic models of participation. We are also led to question the ideals of consensus and dissensus, as well as the processes that drive and sustain them, and to find meaningful and productive ways to disclose our similarities and differences. By highlighting different models of democracy and how these models are enacted in participation, we want to encourage reflection upon the different implications of participatory consensus-seeking. We hope our article and our conversation with Irwin facilitate further reflection of this kind, to the benefit of participation scholars, practitioners, and decision makers.

References

Bolsen, Toby, James Druckman, and Fay Lomax Cook. “Citizens’, Scientists’ and Policy Advisors’ Beliefs about Global Warming.” Annals of the AAPSS 658 (2015): 271-295.

Horst, Maja, and Alan Irwin. “Nations at Ease with Radical Knowledge: On Consensus, Consensusing and False Consensusness.” Social Studies of Science 40, no. 1 (2010): 105-126.

Irwin, Alan. “Agreeing to Differ? A Response to Van Bouwel and Van Oudheusden.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 11-14.

Mouffe, Chantal. “The Limits of John Rawls’ Pluralism.” Politics, Philosophy and Economics 4, no. 2 (2005): 221-231.

Oreskes, Naomi and Erik Conway. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. London: Bloomsbury Press, 2010.

Van Bouwel, Jeroen and Michiel Van Oudheusden. “Participation Beyond Consensus? Technology Assessments, Consensus Conferences and Democratic Modulation.” Social Epistemology 31, no. 6 (2017): 497-513.

Van Oudheusden, Michiel, Nathan Charlier, Benedikt Rosskamp, and Pierre Delvenne. “Broadening, Deepening, and Governing Innovation: Flemish Technology Assessment in Historical and Socio-Political Perspective.” Research Policy 44, no. 10 (2015): 1877-1886.

“Knowledge of Climates and Climates of Knowledge”, Amanda Machin, Zeppelin University


Image credit: Chris Cheung (Ping Foo), via flickr

The changing climate has attracted attention from numerous fields and disciplines. Part of its intrigue lies in the impossibility of boxing it into one area of knowledge and treating it with conventional methods. The entangled complex of issues that comprises climate change has disrupted and to some extent transfigured traditional linear conceptions of the connection between science and society. Queries regarding what expertise consists of, how it is communicated and the ways in which it might be incorporated into democratic processes have found no easy answers. What these questions have done is to undermine the simplistic assumption that scientists can straightforwardly impart instructions regarding not only what should be done, but also regarding what can be done to mitigate and alleviate massive environmental upheaval. Please read more …

Author Information: Adam Riggio, McMaster University, adamriggio@gmail.com; Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Shortlink: http://wp.me/p1Bfg0-22f

Editor’s Note:

Adam Riggio

I’d like to talk with you about two things. One is to ask you a practical political question, and the second is to have a wider discussion about how philosophy of science and scientific practice influence each other. I’ll start with the practical political question first, because one of the first lessons in writing for the web is to headline your most sensationalistic point. Continue Reading…