
Author Information: Stephen John, Cambridge University, sdj22@cam.ac.uk

John, Stephen. “Transparency, Well-Ordered Science, and Paternalism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 30-33.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Zf


Should a physician tell you that you have cancer, even if she thinks this would cause you needless distress? Of course she should! How, though, should she convey that news? Imagine three stylised options. Dr Knowsbest is certain you should have your cancer operated on, so tells you the news in a way which vividly highlights the horrors of cancer, but downplays the risk of an operation.

Dr Neutral, by contrast, simply lists all of the facts about your cancer, your prognosis, your possible treatment options, their likely benefits and risks and so on. Finally, Dr Sensitive reports only those aspects of your condition and those risks of surgery which she judges that you, given your values and interests, would want to know about.

Many Methods to Reveal

We can, I hope, all agree that Dr Knowsbest’s communicative strategies and choices are ethically problematic, because she acts in a paternalistic manner. By contrast, Dr Neutral does not act paternalistically. In this regard, at least, Dr Neutral’s strategies are ethically preferable to Dr Knowsbest’s. What about the choice between Knowsbest and Sensitive? In one sense, Dr Sensitive acts paternalistically, because she controls and structures the flow of information with the aim of improving your well-being.

However, there is an important difference between Dr Sensitive and Dr Knowsbest; the former aims solely to improve your epistemic well-being, such that you can better make a choice which aligns with your own values, whereas the latter aims to influence or override your judgment. Knowsbest’s “moral paternalism” is wrong for reasons which are absent in the case of Sensitive’s “epistemic paternalism” (Ahlstrom-Vij, 2013).

Therefore, plausibly, both the Neutral and Sensitive strategies are ethically preferable to Knowsbest’s. What, though, of the choice between these two communicative strategies? First, I am not certain that it is even possible to report all the facts in a neutral way (for more, see below). Second, even if it is possible, Dr Sensitive’s strategy seems preferable; her strategy, if successful, positively promotes – as opposed to merely failing to interfere with – your ability to make autonomous choices.

At least at an abstract, ideal level, then, we have good reason to want informants who do more than merely list facts, but who are sensitive to their audiences’ epistemic situation and abilities and their evaluative commitments; we want experts who “well-lead” us. In my recent paper in Social Epistemology, I argued that certain widely-endorsed norms for science communication are, at best, irrelevant, and, at worst, dangerous (John 2018). We should be against transparency, openness, sincerity and honesty.

It’s a Bit Provocative

One way of understanding that paper is as following from the abstract ideal of sensitive communication, combined with various broadly sociological facts (for example, about how audiences identify experts). I understand why my article put Moore in mind of a paradigm case of paternalism. However, reflection on the hypothetical example suggests we should also be against “anti-paternalism” as a norm for science communication; not because Knowsbest’s strategy is fine, but, rather, because the term “paternalism” tends to bundle together a wide range of practices, not all of which are ethically problematic, and some of which promote – rather than hinder – audiences’ autonomy.

Beyond the accusation of paternalism, Moore’s rich and provocative response focuses on my scepticism about transparency. While I argued that a “folk philosophy of science” can lead audiences to distrust experts who are, in fact, trustworthy, he uses the example of HIV-AIDS activism to point to the epistemic benefits of holding scientists to account, suggesting that “it is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science”. I agree entirely that such a dynamic is possible; indeed, his example shows it does happen!

However, conceding this possibility does not show that we must endorse a norm of transparency, because, ultimately, the costs may still be greater than the benefits. Much here depends on the mechanisms by which transparency and engagement are enacted. Moore suggests one model for such engagement, via the work of “trust proxies”, such as ACT-UP. As he acknowledges, however, although proxies may be better-placed than lay-people to identify when science is flawed, they create a new problem for the non-expert: to adapt a distinction from Goldman’s work, we must decide which “putative proxies” are “true proxies” (Goldman, 2001).

Plausibly, this problem is even harder than Goldman’s problem of distinguishing the “true experts” among the “putative experts”, because in the latter case we have some sense of the credentials and other markers which signal experthood. Again, I am tempted to say, then, that it is unclear that transparency, openness or engagement will necessarily lead to better, rather than worse, socio-epistemic outcomes.

Knowledge From Observation and Practice

Does that mean my arguments against transparency are in the clear? No. First, many of the issues here turn on the empirical details; maybe careful institutional design can allow us to identify trustworthy trust-proxies, whose work promotes good science. Second, and more importantly, the abstract model of sensitive communication is an ideal. In practice, it is easy to fail to meet this ideal, in ways which undermine, rather than respect or promote, hearers’ autonomy.

For example, rather than tailor her communication to what her audiences do care about, Dr Sensitive might tailor what she says to what she thinks they ought to care about; as a result, she might leave out information which is relevant to their choices given their values, while including information which is irrelevant. An influential strain in recent philosophy of science suggests that non-epistemic value judgments do and must run deep in practices of justification; as such, even a bald report of what a study showed may, implicitly, encode or endorse value judgments which are not shared by the audience (Douglas, 2000).

Reporting claims when, and only when, they meet a certain confidence level may, for example, implicitly rely on assumptions about the relative disvalue of false positives and false negatives; in turn, it may be difficult to justify such assumptions without appeal to non-epistemic values (John, 2015). As such, even Dr Neutral may be unable to communicate in ways which are truly sensitive to her audience’s values. In short, it may be hard to hand over our epistemic autonomy to experts without also handing over our moral autonomy.

This problem means that trustworthy research requires more than that the researchers’ claims are true; it requires claims which are, at least, neutral and, at best, aligned with audiences’ values. Plausibly, greater engagement and transparency may help ensure such value alignment. One might understand the example of ACT-UP along these lines: activist engagement ensured that scientists did “good science” not only in a narrow, epistemic sense of “good” – more, or more accurate, data and hypotheses were generated – but in a broader sense of being “well-ordered”, producing knowledge that better reflected the concerns and interests of the broader community (Kitcher, 2003).

Whether engagement improves epistemic outcomes narrowly construed is a contingent matter, heavily dependent on the details of the case. By contrast, engagement may be necessary for science to be “well-ordered”. In turn, transparency may be necessary for such engagement. At least, that is the possibility I would push were I to criticise my own conclusions in line with Moore’s concerns.

A Final Sting

Unfortunately, there is a sting in the tail. Developing effective frameworks for engagement and contestation may require us to accept that scientific research is not, and cannot be, fully “value free”. To the extent that such an assumption is a commitment of our “folk philosophy of science”, then developing the kind of rigorous engagement which Moore wants may do as much to undermine, as promote, our trust in true experts. Moore is surely right that the dynamics of trust and distrust are even more complex than my paper suggested; unfortunately, they might be even more complex again than he suggests.

Contact details: sdj22@cam.ac.uk

References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. Springer.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of Science, 67(4), 559-579.

Goldman, A. (2001). “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research, 63(1), 85-110.

John, S. (2015). Inductive risk and the contexts of communication. Synthese, 192(1), 79-96.

John, S. (2018). Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75-87.

Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climatic Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, claiming more precision than one knows to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds), involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than those monitored only by their results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they “were not rejecting medical science,” but were rather “denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency” (Epstein 1996: 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort, with the skills and resources to do this sort of monitoring, developing competence but with interests more closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to more deeply engage with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review 111, no. 2 (2017): 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author Information: Kjartan Koch Mikalsen, Norwegian University of Science and Technology, kjartan.mikalsen@ntnu.no.

Mikalsen, Kjartan Koch. “An Ideal Case for Accountability Mechanisms, the Unity of Epistemic and Democratic Concerns, and Skepticism About Moral Expertise.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 1-5.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3S2


How do we square democracy with pervasive dependency on experts and expert arrangements? This is the basic question of Cathrine Holst and Anders Molander’s article “Public deliberation and the fact of expertise: making experts accountable.” Holst and Molander approach the question as a challenge internal to a democratic political order. Their concern is not whether expert rule might be an alternative to democratic government.

Rather than ask if the existence of expertise raises an “epistocratic challenge” to democracy, they “ask how science could be integrated into politics in a way that is consistent with democratic requirements as well as epistemic standards” (236).[1] Given commitment to a normative conception of deliberative democracy, what qualifies as a legitimate expert arrangement?

Against the backdrop of epistemic asymmetry between experts and laypersons, Holst and Molander present this question as a problem of accountability. When experts play a political role, we need to ensure that they really are experts and that they practice their expert role properly. I believe this is a compelling challenge, not least in view of expert disagreement and contestation. In a context where we lack sufficient knowledge and training to assess directly the reasoning behind contested advice, we face a non-trivial problem of deciding which expert to trust. I also agree that the problem calls for institutional measures.

However, I do not think such measures simply answer to a non-ideal problem related to untrustworthy experts. The need for institutionalized accountability mechanisms runs deeper. Nor am I convinced by the idea that introducing such measures involves balancing “the potential rewards from expertise against potential deliberative costs” (236). Finally, I find it problematic to place moral expertise side-by-side with scientific expertise in the way Holst and Molander do.

Accountability Mechanisms: More than Non-ideal Remedies

To meet the challenge of epistemic asymmetry combined with expert disagreement, Holst and Molander propose three sets of institutional mechanisms for scrutinizing the work of expert bodies (242-43). First, in order to secure compliance with basic epistemic norms, they propose laws and guidelines that specify investigation procedures in some detail, procedures for reviewing expert performance and for excluding experts with a bad track record, as well as sanctions against sloppy work.

Second, in order to review expert judgements, they propose checks in the form of fora comprising peers, experts in other fields, bureaucrats and stakeholders, legislators, or the public sphere. Third, in order to assure that expert groups work under good conditions for inquiry and judgment, they propose organizing the work of such groups in a way that fosters cognitive diversity.

According to Holst and Molander, these measures have a remedial function. Their purpose is to counter the misbehavior of non-ideal experts, that is, experts whose behavior and judgements are biased or influenced by private interests. The measures concern unreasonable disagreement rooted in experts’ over-confidence or partiality, as opposed to reasonable disagreement rooted in “burdens of judgement” (Rawls 1993, 54). By targeting objectionable conduct and reasoning, they reduce the risk of fallacies and the “intrusion of non-epistemic interests and preferences” (242). In this way, they increase the trustworthiness of experts.

As I see it, this attributes too limited a role to the proposed accountability mechanisms. While they might certainly work in the way Holst and Molander suggest, it is doubtful whether they would be superfluous if all experts were ideal experts without biases or conflicting interests.

Even ideal experts are fallible and have partial perspectives on reality. The ideal expert is not omniscient, but a finite being who perceives the world from a certain perspective, depending on a range of contingent factors, such as training in a particular scientific field, basic theoretical assumptions, methodological ideals, subjective expectations, and so on. The ideal expert is aware that she is fallible and that her own point of view is just one among many others. We might therefore expect that she does not easily become a victim of overconfidence or confirmation bias. Yet, given the unavoidable limits of an individual’s knowledge and intellectual capacity, no expert can know what the world looks like from all other perspectives and no expert can be safe from misjudgments.

Accordingly, subjecting expert judgements to review and organizing diverse expert groups is important no matter how ideal the expert. There seems to be no other way to test the soundness of expert opinions than to check them against the judgements of other experts, other forms of expertise, or the public at large. Similarly, organizing diverse expert groups seems like a sensible way of bringing out all relevant facts about an issue even in the case of ideal experts. We do not have to suspect anyone of bias or pursuance of self-serving interests in order to justify these kinds of institutional measures.


No Trade-off Between Democratic and Epistemic Concerns

An important aspect of Holst and Molander’s discussion of how to make experts accountable is the idea that we need to balance the epistemic value of expert arrangements against democratic concerns about inclusive deliberation. While they point out that the mechanisms for holding experts to account can democratize expertise in ways that lead to epistemic enrichment, they also warn that inclusion of lay testimony or knowledge “can result in undue and disproportional consideration of arguments that are irrelevant, obviously invalid or fleshed out more precisely in expert contributions” (244).

There is of course always the danger that things go wrong, and that the wrong voices win through. Yet, the question is whether this risk forces us to make trade-offs between epistemic soundness and democratic participation. Holst and Molander quote Stephen Turner (2003, 5) on the supposed dilemma that “something has to give: either the idea of government by generally intelligible discussion, or the idea that there is genuine knowledge that is known to few, but not generally intelligible” (236). To my mind, this formulation rests on an ideal picture of public deliberation that is not only excessively demanding, but also normatively problematic.

It is a mistake to assume that political deliberation cannot include “esoteric” expert knowledge if it is to be inclusive and open to everyone. If democracy is rule by public discussion, then every citizen should have an equal chance to contribute to political deliberation and will-formation, but this is not to say that all aspects of every contribution should be comprehensible to everyone. Integration of expert opinions based on knowledge fully accessible only to a few does not clash with democratic ideals of equal respect and inclusion of all voices.

Because of specialization and differentiation, all experts are laypersons with respect to many areas where others are experts. Disregarding individual variation of minor importance, we are all equals in ignorance, lacking sufficient knowledge and training to assess the relevant evidence in most fields.[2] Besides, and more fundamentally, deferring to expert advice in a political context does not imply some form of political status hierarchy between persons.

To acknowledge expert judgments as authoritative in an epistemic sense is simply to acknowledge that there is evidence supporting certain views, and that this evidence is accessible to everyone who has time and skill to investigate the matter. For this reason, it is unclear how the observation that political expert arrangements do not always harmonize with democratic ideals warrants talk of a need for trade-offs or a balancing of diverging concerns. In principle, there seems to be no reason why there has to be divergence between epistemic and democratic concerns.

To put the point more sharply, I would like to suggest that allowing alleged democratic concerns to trump sound expert advice is democratic in name only. With Jacob Weinrib (2016, 57-65), I consider democratic lawmaking as essential to a just legal system because all non-democratic forms of legislation are defective arrangements that arbitrarily exclude someone from contributing to the enactment of the laws that regulate their interaction with others. Yet, an inclusive legislative procedure that disregards the best available reasons is hardly a case of democratic self-legislation.

It is more like raving blind drunk. Legislators that ignore state-of-the-art knowledge are not only deeply irrational, but also disrespectful of those bound by the laws that they enact. Need I mention the climate crisis? Understanding democracy as a process of discursive rationalization (Habermas 1996), the question is not what trade-offs we have to make, but how inclusive legislative procedures can be made sufficiently truth sensitive (Christiano 2012). We can only approximate a defensible democratic order by making democratic and epistemic concerns pull in the same direction.

Moral vs Scientific and Technical Expertise

Before introducing the accountability problem, Holst and Molander consider two ideal objections against giving experts an important political role: ‘(1) that one cannot know decisively who the knowers or experts are’ and ‘(2) that all political decisions have moral dimensions and that there is no moral expertise’ (237). They reject both objections. With respect to (1), they convincingly argue that there are indirect ways of identifying experts without oneself being an expert. With respect to (2), they pursue two strategies.

First, they argue that even if facts and values are intertwined in policy-making, descriptive and normative aspects of an issue are still distinguishable. Second, they argue that unless strong moral non-cognitivism is correct, it is possible to speak of moral expertise in the form of ‘competence to state and clarify moral questions and to provide justified answers’ (241). To my mind, the first of these two strategies is promising, whereas the second seems to play down important differences between distinct forms of expertise.

There are of course various types of democratic expert arrangements. Sometimes experts are embedded in public bodies making collectively binding decisions. On other occasions, experts serve an advisory function. Holst and Molander tend to use “expertise” and “expert” as unspecified, generic terms, and they refer to both categories side-by-side (235, 237). However, by framing their argument as an argument concerning epistemic asymmetry and the novice/expert-problem, they indicate that they have in mind moral experts in advisory capacities, in possession of insights known to a few, yet of importance for political decision-making.

I agree that some people are better informed about moral theory and more skilled in moral argumentation than others are, but such expertise still seems different in kind from technical expertise or expertise within empirical sciences. Although moral experts, like other experts, provide action-guiding advice, their public role is not analogous to the public role of technical or scientific experts.

For the public, the value of scientific and technical expertise lies in information about empirical restraints and the (lack of) effectiveness of alternative solutions to problems. If someone is an expert in good standing within a certain field, then it is reasonable to regard her claims related to this field as authoritative, and to consider them when making political decisions. As argued in the previous section, it would be disrespectful and contrary to basic democratic norms to ignore or bracket such claims, even if one does not fully grasp the evidence and reasoning supporting them.

Things look quite different when it comes to moral expertise. While there can be good reasons for paying attention to what specialists in moral theory and practical reasoning have to say, we rarely, if ever, accept their claims about justified norms, values and ends as authoritative or valid without considering the reasoning supporting the claims, and rightly so. Unlike Holst and Molander, I do not think we should accept the arguments of moral experts as defined here simply based on indirect evidence that they are trustworthy (cf. 241).

For one thing, the value of moral expertise seems to lie in the practical reasoning itself just as much as in the moral ideals underpinned by reasons. An important part of what the moral expert has to offer is thoroughly worked out arguments worth considering before making a decision on an issue. However, an argument is not something we can take at face value, because an argument is of value to us only insofar as we think it through ourselves. Moreover, the appeal to moral cognitivism is of limited value for elevating someone to the status of moral expert. Even if we might reach agreement on basic principles to govern society, there will still be reasonable disagreement as to how we should translate the principles into general rules and how we should apply the rules to particular cases.

Accordingly, we should not expect acceptance of the conclusions of moral experts in the same way we should expect acceptance of the conclusions of scientific and technical expertise. To the contrary, we should scrutinize such conclusions critically and try to make up our own mind. This is, after all, more in line with the enlightenment motto at the core of modern democracy, understood as government by discussion: “Have courage to make use of your own understanding!” (Kant 1996 [1784], 17).

Contact details: kjartan.mikalsen@ntnu.no

References

Christiano, Thomas. “Rational Deliberation among Experts and Citizens.” In Deliberative Systems: Deliberative Democracy at the Large Scale, ed. John Parkinson and Jane Mansbridge. Cambridge: Cambridge University Press, 2012.

Habermas, Jürgen. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, trans. William Rehg. Cambridge, MA: MIT Press, 1996.

Holst, Cathrine, and Anders Molander. “Public deliberation and the fact of expertise: making experts accountable.” Social Epistemology 31, no. 3 (2017): 235-250.

Kant, Immanuel. Practical Philosophy, ed. Mary Gregor. Cambridge: Cambridge University Press, 1996.

Kant, Immanuel. Anthropology, History, and Education, ed. Günther Zöller and Robert B. Louden. Cambridge: Cambridge University Press, 2007.

Rawls, John. Political Liberalism. New York: Columbia University Press, 1993.

Turner, Stephen. Liberal Democracy 3.0: Civil Society in an Age of Experts. London: Sage Publications Ltd, 2003.

Weinrib, Jacob. Dimensions of Dignity. Cambridge: Cambridge University Press, 2016.

[1] All bracketed numbers without reference to author in the main text refer to Holst and Molander (2017).

[2] This also seems to be Kant’s point when he writes that human predispositions for the use of reason “develop completely only in the species, but not in the individual” (2007 [1784], 109).

Author Information: Jennifer Jill Fellows, University of British Columbia, jill.fellows@ubc.ca

Fellows, Jennifer Jill. 2013. “Eddies and Currents: A Reply to Sassower.” Social Epistemology Review and Reply Collective 2 (11): 29-37.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-146


I am grateful for the critical review of my article “Downstream of the Experts: Trust-building and the Case of MPA’s” recently written by Raphael Sassower. His detailed review, as well as the invitation by Social Epistemology to reply to the review, has afforded me the opportunity to carefully reexamine and reiterate some points in my own work that will, I hope, clarify the overall intentions of my argument, and the places where Sassower and I disagree. Sassower and I agree that the challenges facing communication between scientific and lay communities are real, serious and messy, to say the least. And Sassower seems to support my call to amend Grasswick’s argument on the importance of knowledge-sharing in order to stress the need for this knowledge-sharing to be reciprocal. However, Sassower raises seven observations with regard to my arguments. Some of these observations are just that, observations. Some take the form of questions or suggestions. And some are critical of claims made in my paper.

At the heart of many of Sassower’s observations is a call for more homogenized democratic communities, and for more transparency in access to data. Sassower seems to suggest that communities are (or should be) homogenized. That is, he argues that everyone in a community has an equal ability to become knowledgeable about the facts on their own. Everyone, then, begins from the same standpoint. He further suggests that, once we recognize communities as homogenized, we no longer need to engage in investigations of trust. In effect, he seems to claim that, once everyone has access to the same data, everyone can draw their own conclusions, and no one need trust the expertise of others. I, by contrast, argue that communities are not, at present, homogenized. I argue that power imbalances do exist and that trust cannot be removed from the discussion.