Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “On Political Culpability: The Unconscious?” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 26-29.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-45p

Image by Morning Calm Weekly Newspaper, U.S. Army via Flickr / Creative Commons

 

In the post-truth age where Trump’s presidency looms large because of its irresponsible conduct, domestically and abroad, it’s refreshing to have another helping in the epistemic buffet of well-meaning philosophical texts. What can academics do? How can they help, if at all?

Anna Elisabetta Galeotti, in her Political Self-Deception (2018), is convinced that her (analytic) philosophical approach to political self-deception (SD) is crucial for three reasons: first, because of the importance of conceptual clarity about the topic; second, because it allows one to attribute responsibility to those engaged in SD; and third, because it helps identify circumstances that are conducive to SD. (6-7)

For her, “SD is the distortion of reality against the available evidence and according to one’s wishes.” (1) The distortion, according to Galeotti, is motivated by wishful thinking, the kind that licenses someone to ignore facts or distort them in a fashion suitable to one’s (political) needs and interests. The question of “one’s wishes,” be they conscious or not, remains open.

What Is Deception?

Galeotti surveys the different views of deception that “range from the realist position, holding that deception, secrecy, and manipulation are intrinsic to politics, to the ‘dirty hands’ position, justifying certain political lies under well-defined circumstances, to the deontological stance denouncing political deception as a serious pathology of democratic systems.” (2)

But she follows none of these views; instead, her contribution to the philosophical and psychological debates over deception, lies, self-deception, and mistakes is to argue that “political deception might partly be induced unintentionally by SD” and that it is also sometimes “the by-product of government officials’ (honest) mistakes.” (2) The consequences, though, of SD can be monumental since “the deception of the public goes hand in hand with faulty decision,” (3) and those eventually affect the country.

Her three examples are President Kennedy and Cuba (Ch. 4), President Johnson and Vietnam (Ch. 5), and President Bush and Iraq (Ch. 6). In all cases, the devastating consequences of “political deception” (and for Galeotti it is based on SD) were obviously due to “faulty” decision making processes. Why else would presidents end up in untenable political binds? Who would deliberately make mistakes whose political and human price is high?

Why Self-Deception?

So, why SD? What is it about self-deception, especially the unintended kind presented here, that differentiates it from garden-variety deceptions and mistakes? Galeotti’s preference for SD is explained in this way: SD “enables the analyst to account for (a) why the decision was bad, given that it was grounded on self-deceptive, hence false beliefs; (b) why the beliefs were not just false but self-serving, as in the result of the motivated processing of data; and (c) why the people were deceived, as the by-product of the leaders’ SD.” (4)

But how would one know that a “bad” decision is “grounded on self-decepti[on]” rather than on false information given by intelligence agents, for example, who were misled by local informants who in turn were misinformed by others, deliberately or innocently? With this question in mind, a “false belief” can be based on false information, false interpretation of true information, wishful thinking, an unconscious self-destructive streak, or SD.

In short, one’s SD can be either externally or internally induced, and in each case, there are multiple explanations that could be deployed. Why stick with SD? What is the attraction it holds for analytical purposes?

Different answers are given to these questions at different times. In one case, Galeotti suggests the following:

“Only self-deceptive beliefs are, however, false by definition, being counterevidential [sic], prompted by an emotional reaction to data that contradicts one’s desires. If this is the specific nature of SD . . . then self-deceptive beliefs are distinctly dangerous, for no false belief can ground a wise decision.” (5)

In this answer, Galeotti claims that an “emotional reaction” to “one’s desires” is what characterizes SD and makes it “dangerous.” It is unclear why this is more dangerous a ground for false beliefs than a deliberate deceptive scheme that is self-serving; likewise, how does one truly know one’s true desires? Perhaps the logician is at a loss to counter emotive reaction with cold deduction, or perhaps there is a presumption here that logical and empirical arguments are by definition open to critiques but emotions are immune to such strategies, and therefore analytic philosophy is superior to other methods of analysis.

Defending Your Own Beliefs

If the first argument for seeing SD as an emotional “reaction” that conflicts with “one’s desires” is a form of self-defense, the second argument is more focused on the threat of the evidence one wishes to ignore or subvert. In Galeotti’s words, SD is:

“the unintended outcome of intentional steps of the agent. . . according to my invisible hand model, SD is the emotionally loaded response of a subject confronting threatening evidence relative to some crucial wish that P. . . Unable to counteract the threat, the subject . . . become prey to cognitive biases. . . unintentionally com[ing] to believe that P which is false.” (79; 234ff)

To be clear, the “invisible hand” model invoked here is related to the infamous one associated with Adam Smith and his unregulated markets where order is maintained, fairness upheld, and freedom of choice guaranteed. Just like Smith, Galeotti appeals to individual agents, in her case the political leaders, as if SD happens to them, as if their conduct leads to “unintended outcome.”

But the whole point of SD is to ward off the threat of unwelcome evidence, so that some intention is always afoot. Since agents undertake “intentional steps,” is it unreasonable for them to anticipate the consequences of their conduct? Are they still unconscious of their “cognitive biases” and their management of their reactions?

Galeotti confronts this question head on when she says: “This work is confined to analyzing the working of SD in crucial instances of governmental decision making and to drawing the normative implications related both to responsibility ascription and to devising prophylactic measures.” (14) So, the moral dimension, the question of responsibility, does come into play here, unlike the neoliberal argument that pretends to follow Smith’s invisible-hand model but ends with no one being responsible for any exogenous liabilities to the environment, for example.

Moreover, Galeotti’s most intriguing claim is that her approach is intertwined with a strategic hope for “prophylactic measures” to ensure dangerous consequences are not repeated. She believes that, given “(a) the typical circumstances in which SD may take place; (b) the ability of external observers to identify other people’s SD, a strategy of precommitment [sic] can be devised. Precommitment is a precautionary strategy, aimed at creating constraints to prevent people from falling prey to SD.” (5)

But this strategy, as promising as it sounds, has a weakness: if people could be prevented from “falling prey to SD,” then SD is preventable or at least it seems to be less of an emotional threat than earlier suggested. In other words, either humans cannot help themselves from falling prey to SD or they can; if they cannot, then highlighting SD’s danger is important; if they can, then the ubiquity of SD is no threat at all as simply pointing out their SD would make them realize how to overcome it.

A Limited Hypothesis

Perhaps one clue to Galeotti’s own self-doubt (or perhaps it is a form of self-deception as well) is in the following statement: “my interpretation is a purely speculative hypothesis, as I will never be in the position to prove that SD was the case.” (82) If this is the case, why bother with SD at all? For Galeotti, the advantage of using SD as the “analytic tool” with which to view political conduct and policy decisions is twofold: allowing “proper attribution of responsibility to self-deceivers” and “the possibility of preventive measures against SD.” (234)

In her concluding chapter, she offers a caveat, even a self-critique that undermines the very use of SD as an analytic tool (no self-doubt or self-deception here, after all): “Usually, the circumstances of political decision making, when momentous foreign policy choices are at issue, are blurred and confused both epistemically and motivationally. Sorting out simple miscalculations from genuine uncertainty, and dishonesty and duplicity from SD is often a difficult task, for, as I have shown when analyzing the cases, all these elements are present and entangled.” (240) So, SD is one of many relevant variables, but being both emotional and in one’s subconscious, it remains opaque at best, and unidentifiable at worst.

In case you are confused about SD and one’s ability to isolate it as an explanatory model with which to approach post-hoc bad political choices with grave consequences, this statement might help clarify the usefulness of SD: “if SD is to play its role as a fundamental explanation, as I contend, it cannot be conceived of as deceiving oneself, but it must be understood as an unintended outcome of mental steps elsewhere directed.” (240)

So, logically speaking, SD (self-deception) is not “deceiving oneself.” So, what is it? What are “mental steps elsewhere directed”? Of course, it is quite true, as Galeotti says, that “if lessons are to be learned from past failures, the question of SD must in any case be raised. . . Political SD is a collective product,” which is even more difficult to analyze (given its “opacity”), and so how would responsibility be attributed? (244-5)

Perhaps what is missing from this careful analysis is a cold calculation of who is responsible for what and under what circumstances, regardless of SD or any other kind of subconscious desires. Would a psychoanalyst help usher in such an analysis?

Contact details: rsassowe@uccs.edu

References

Galeotti, Anna Elisabetta. Political Self-Deception. Cambridge: Cambridge University Press, 2018.

Heidegger Today, Paolo Palladino

SERRC — August 23, 2018

Author Information: Paolo Palladino, Lancaster University, p.palladino@lancaster.ac.uk

Palladino, Paolo. “Heidegger Today: On Jeff Kochan’s Science and Social Existence.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 41-46.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40b

Art by Philip Beasley
Image by Sean Salmon via Flickr / Creative Commons

 

I have been invited to participate in the present symposium on Jeff Kochan’s Science as Social Existence: Heidegger and the Sociology of Scientific Knowledge. I would like to preface my response by expressing my gratitude to the editors of Social Epistemology for the opportunity to comment on this provocative intervention and by noting the following about my response’s intellectual provenance.

I have long worked at the intersection of historical, philosophical and sociological modes of inquiry into the making of scientific accounts and technological interventions in the material world, but at an increasing distance from the field of science and technology studies, widely defined. As a result, I am neither invested in disciplinary purity, nor party in the longstanding arguments over the sociology of scientific knowledge and its presuppositions about the relationship between the social and natural orders.

I must also admit, however, to being increasingly attracted to the ontological questions which the wider field of science and technology studies has posed in recent years. All this is important to how I come to think about both Science as Social Existence and the argument between Kochan and Raphael Sassower over the merits of Science as Social Existence.

Kochan’s Problems of the Strong Programme

As the full title of Science as Social Existence evinces, Kochan’s principal matter of concern is the sociology of scientific knowledge. He regards this as the field of study that is dedicated to explaining the production of knowledge about the material world in sociological terms, as these terms are understood among proponents of the so-called “strong programme”. As Kochan’s response to Sassower conveys pointedly, he is concerned with two problems in particular.

The first of these is that the sociology of scientific knowledge is hostage to a distinction between the inquiring subject and the objective world such that it is difficult to understand exactly how this subject is ever able to say anything meaningful about the objective world. The second, closely related problem is that the sociology of scientific knowledge cannot then respond to the recurrent charge that it holds to an unsustainable relationship between the social and natural orders.

Kochan proposes that Martin Heidegger’s existential phenomenology provides the wherewithal to answer these two problems. This, he suggests, is to the benefit of science and technology studies, the wider, interdisciplinary field that the sociology of scientific knowledge could justifiably be said to have inaugurated, but that has since grown increasingly detached from it. Incidentally, while Kochan himself refers to this wider field as “science studies”, “science and technology studies” seems preferable because it not only enjoys greater currency, but also conveys more accurately the focus on practices and materiality from which stems the divergence between the enterprises Kochan seeks to distinguish.

Anyway, as becomes evident in the course of reading Science as Social Existence, Kochan’s proposal calls first for the correction of Joseph Rouse’s and Bruno Latour’s arguably mistaken reading of Heidegger, particularly in regard to Heidegger’s pivotal distinction between essence and existence, and to Heidegger’s further insistence upon the historicity of Being. This is followed by the obligatory illustration of what is to be gained from such a philosophical excursus.

Kochan thus goes on to revisit what has become a classic of science and technology studies, namely the arguments between Robert Boyle and Thomas Hobbes over the former’s signal invention, the air-pump. Kochan shows here how Heidegger’s thought enables a more symmetric account of the relationship between the social and natural order at issue in the arguments between Boyle and Hobbes, so disarming Latour’s otherwise incisive objection that the sociology of scientific knowledge is a neo-Kantian enterprise that affords matter no agency in the making of the world we inhabit. From this point of view, Science as Social Existence would not only seem to answer important conceptual problems, but also offer a helpful explication and clarification of the notoriously difficult Heideggerian corpus.

It should also be noted, however, that this corpus has actually played a marginal role in the development of science and technology studies and that leading figures in the field have nonetheless occasionally felt compelled to interrogate texts such as Heidegger’s The Question Concerning Technology. Such incongruity about the place of Heidegger within the evolution of science and technology studies is perhaps important to understanding Sassower’s caustic line of questioning about what exactly is to be gained from the turn to Heidegger, which Science as Social Existence seeks to advance.

Real Love or a Shotgun Marriage?

Bluntly, Sassower asks why anyone should be interested in marrying Heideggerian existential phenomenology and the sociology of scientific knowledge, ultimately characterising this misbegotten conjunction as a “shotgun marriage.” My immediate answer is that Science as Social Existence offers more than just a detailed and very interesting, if unconventional, examination of the conceptual problems besetting the sociology of scientific knowledge.

As someone schooled in the traditions of history and philosophy of science who has grown increasingly concerned about the importance of history, I particularly welcome the clarification of the role that history plays in our understanding of scientific knowledge and technological practice. Kochan, following Heidegger to the letter, explains how the inquiring subject and the objective world are to be understood as coming into being simultaneously and how the relationship between the two varies in a manner such that what is and what can be said about the nature of that which is are a matter of historical circumstance.

As a result, history weighs upon us not just discursively, but also materially, and so much so that the world we inhabit must be understood as irreducibly historical. As Kochan puts it while contrasting Kant’s and Heidegger’s understanding of finitude:

For Heidegger … the essence of a thing is not something we receive from it, but something it possesses as a result of the socio-historically conditioned metaphysical projection within which it is let be what it is. On Heidegger’s account, not even an infinitely powerful intellect could grasp the intrinsic, independently existing essence of a thing, because no such essence exists. Hence, the finitude of our receptivity is not the issue; the issue is, instead, the finitude of our projectivity. The range of possible conceptualisations of a thing is conditioned by the historical tradition of the subject attempting to make sense of that thing. Only within the finite scope of possibilities enabled by the subject’s tradition can it experience a thing as intelligible, not to mention develop a clearly defined understanding of what it is (258-9).

Literally, tradition matters. Relatedly, I also welcome how Science as Social Existence helps me to clarify the ambiguities of Heidegger’s comportment toward scientific inquiry, which would have been very useful some time ago, as I tried to forge a bridge between the history of biology and a different set of philosophers to those usually considered within the history and philosophy of science, not just Heidegger, but also Michel Foucault and Gilles Deleuze.

As I sought to reflect upon the wider implications of Heidegger’s engagement with the biological sciences of his day, Science as Social Existence would have enabled me to fend off the charge that I misunderstood Heidegger’s distinction between ontic and ontological orders, between the existence of something and the meaning attributed to it. Thus, Kochan points out that:

Metaphysical knowledge is, according to Heidegger, a direct consequence of our finitude, our inescapable mortality, rather than of our presumed ability to transcend that finitude, to reach, infinitely, for heaven. Because the finitude of our constructive power makes impossible a transcendent grasp of the thing in-itself — leaving us to be only affected by it in its brute, independent existence — our attention is instead pushed away from the thing-in-itself and towards the constructive categories we must employ in order to make sense of it as a thing present-at-hand within-the-world.

For Heidegger, metaphysics is nothing other than the study of these categories and their relations to one another. Orthodox metaphysics, in contrast, treats these existential categories as ontic, that is, as extant mental things referring to the intrinsic properties of the things we seek to know, rather than as ontological, that is, as the existential structures of being-in-the-world which enable us to know those things (133-4).

The clarification would have helped me to articulate how the ontic and ontological orders are so inextricably related to one another and, today, so entangled with scientific knowledge and technological practice that Heidegger’s reading of Eugen Korschelt’s lectures on ageing and death matters to our understanding of the fissures within Heidegger’s argument. All this seems to me a wholly satisfactory answer to Sassower’s question about the legitimacy of the conjunction Kochan proposes. This said, Heidegger and sociology are not obvious companions and I remain unpersuaded by what Science as Social Existence might have to offer the more sociologically inclined field of science and technology studies. This, I think, is where the cracks within the edifice that is Science as Social Existence begin to show.

An Incompleteness

There is something unsettling about Science as Social Existence and the distinctions it draws between the sociology of scientific knowledge and the wider field of science and technology studies. For one thing, Science as Social Existence offers an impoverished reading of science and technology studies whereby the field’s contribution to understanding the production of scientific knowledge and related technological practices is equated with Latour’s criticism of the sociology of scientific knowledge, as the latter was articulated in arguments with David Bloor nearly two decades ago.

Science as Social Existence is not nearly as interested in the complexity of the arguments shaping this wider field as it is in the heterogeneity of philosophical positions taken within the sociology of scientific knowledge with respect to the relationship between knowledge and the material world. It bears repeating at this point that Kochan defines the latter enterprise in the narrowest terms, which also seem far more attuned to philosophical, than sociological considerations. Such narrowness should perhaps come as no surprise given the importance that the sociology of scientific knowledge has attached to the correspondence theory of truth, but there also is much more to the history of philosophy than just the Cartesian and Kantian confrontations with Plato and Aristotle, which Heidegger privileges and Kochan revisits to answer the questions Rouse and Latour have asked of the sociology of scientific knowledge.

Sassower’s possibly accidental reference to a “Spinozist approach” is a useful reminder of both alternative philosophical traditions with respect to materiality, relationality and cognitive construction, and how a properly sociological inquiry into the production of scientific knowledge and technological practices might call for greater openness to the heterogeneity of contemporary social theory. This might even include actor-network theory and its own distinctive reformulation of Spinozist monadology. However, Science as Social Existence is not about any of this, and, as Kochan’s response to Sassower reminds us, we need to respond to its argument on its own terms. Let me then say something about Kochan’s configuration of phenomenology and sociological thought, which is just as unsettling as the relationship Kochan posits between the sociology of scientific knowledge and the wider field of science and technology studies.

Ethnomethodology is the most obvious inheritor of the phenomenological tradition which Kochan invokes to address the problems confronting the sociology of scientific knowledge, and it has also played a very important role in the evolution of science and technology studies. Key ethnomethodological interventions are ambivalent about Heideggerian constructions of phenomenology, but Kochan does not appear to have any great interest in either this sociological tradition or, relatedly, what might be the implications of Heidegger’s divergence from Edmund Husserl’s understanding of the phenomenological project for the relationship between subjects and knowledge.

Instead, Kochan prefers to weld together existential phenomenology and interactionist social theory, because, as he puts it, “interactionist social theory puts the individual subject at the methodological centre of explanations of social, and thus also of cognitive, order” (372). This, however, raises troubling questions about Kochan’s reading and mobilisation of Heidegger. Kochan equates the subject and Being, but Heidegger himself felt the need to develop the term beyond its more conventional connotations of “existence” as he came to understand the subject and Being as closely related, but not one and the same. As Kochan himself notes, Being “is not a thing, substance, or object” (39). This form of existence is to be understood instead as a performative operation, if not a becoming.

Furthermore, Kochan would seem to underestimate the importance of Heidegger’s understanding of the relationship between social existence and the fullest realisation of this form of existence. While Heidegger undoubtedly regards Being as emerging from within the fabric of intersubjective relations, Heidegger also maintains that authentic Being realises itself by extricating itself from other beings and so confronting the full meaning of its finitude. As a result, one is compelled to ask what exactly is Kochan’s understanding of the subject and its subjectivity, particularly in relation to the location of “knowledge”.

Possible Predecessors Gone Unacknowledged

Strikingly, these are the kinds of questions that Foucault asks about phenomenology, an enterprise which he regards as contributing to the consolidation of the modern subject. Yet, Kochan would appear to dismiss Foucault’s work, even though Foucault has much to say about not just the historicity of the subject, but also about its entanglement with mathēsis, a concept central to Kochan’s analysis of the encounter between Boyle and Hobbes. Despite the richness and symmetry of the account Kochan offers, it seems quite unsatisfactory to simply observe in a footnote that “Heidegger’s usage of mathēsis differs from that of Michel Foucault, who defines it as ‘the science of calculable order’” (234 n20).

Put simply, there is something amiss about all the slippage around questions of subjectivity, as well as the relationship between the historical and ontological ordering of the world, which calls into question the sociological foundations of the account of the sociology of scientific knowledge which Science as Social Existence seeks to articulate.

Clearly, Kochan mistrusts sociological critiques of the subject, and one of the reasons Kochan provides for the aversion is articulated most pithily in the following passage from his response to Sassower, in relation to the sociological perspectives that have increasingly come to dominate science and technology studies. Kochan writes:

What interests these critics … are fields of practice. Within these fields, the subject is constituted. But the fundamental unit of analysis is the field – or system – not the subject. Subjectivity is, on this theory, a derivative phenomenon, at best, a secondary resource for sociological analysis. From my perspective, because subjectivity is fundamental to human existence, it cannot be eliminated in this way.

In other words, if the subject is constructed, then its subjectivity and structures of feeling can provide no insight into our present condition. This, however, is a very familiar conundrum, one that, in another guise, has long confronted science and technology studies: That something is constructed does not necessarily amount to its “elimination”. The dividing issue at the heart of Science as Social Existence would then seem to be less the relationship between scientific knowledge and the material constitution of the world about us, and more whether one is interested in the clarity of transcendental analytics or charting the topological complexities of immanent transformation.

My preference, however, is to place such weighty and probably irresolvable issues in suspension. It seems to me that it might be more productive to reconsider instead how the subject is constituted and wherein lie its distinctive capacities to determine what is and what can be done, here and now. Anthropological perspectives on the questions science and technology studies seek to pose today suggest that this might be how to build most productively upon the Heideggerian understanding of the subject and the objective world as coming into being simultaneously.

Perhaps, however, I am just another of those readers destined to be “unhappy” about Science as Social Existence. But I am not sure that this is quite right, because I hope to have conveyed how much I enjoyed thinking about the questions Science as Social Existence poses, and I would just like to hear more about what Kochan thinks of such alternative approaches to reading Heidegger today.

Contact details: p.palladino@lancaster.ac.uk

References

Kochan, Jeff. Science as Social Existence: Heidegger and the Sociology of Scientific Knowledge. Cambridge: Open Book Publishers, 2017.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

 

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might, to the uninitiated, look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing over complications, claiming more precision than the evidence warrants, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds), involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than those monitored only by their results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002, 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science’ but were rather ‘denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency’ (Epstein 1996, 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT UP functioned for many as a trust proxy of this sort: it had the skills and resources to do this sort of monitoring, and it developed competence while keeping its interests closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to engage more deeply with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: the recordings seem to show you what is going on, but they are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review 111, no. 2 (2017): 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).
