
Author Information: Michel Croce, University of Edinburgh, michel.croce@ed.ac.uk.

Croce, Michel. “Objective Expertise and Functionalist Constraints.” Social Epistemology Review and Reply Collective 8, no. 5 (2019): 25-35.

The pdf of the article gives specific page references. This essay is published in two separate posts, the first of which is available at this link. Shortlink: https://wp.me/p1Bfg0-49a

Pictured here, an artist’s rendering of Christian Quast and his uncle who can do plumbing. Some liberties in representation have been taken.
Image by Natalie Bowers via Flickr / Creative Commons

 

. . . . In a word, Quast cannot have his cake and eat it too.

On the Fundamental Ingredients of Expertise

The balanced account suffers from a second problem pertaining to the aforementioned ingredients of expertise, namely primary competence, secondary competence, and intellectual virtues (see also Hardwig 1994, 92). According to ExpertF-C-M, expertise requires an undefeated disposition to fulfill a particular service function adequately at the moment of assessment.

In turn, all three ingredients feature in the undefeated-disposition requirement, in that lacking any of them defeats the attribution of expertise. In this section, I demonstrate that this account is too strong, as it poses unduly restrictive requirements for one to be an expert. In particular, I worry about secondary competence and what Quast calls “intellectually virtuous character” as necessary components of expertise.

Let us consider secondary, or explanatory, competence first: the ability to give an account of one’s performances. We have already seen that on the balanced account, failing to display secondary competence defeats expertise because, as the wine-consultant case shows, we expect from experts that they can give us explanations regarding their services.

However, the plausibility of this understanding of expertise entirely rests on the specifics of the example Quast introduces. In the proposed case, a subject challenges the wine consultant’s evaluation because of its inconsistency with the subject’s expectations and the testimony of the former owner of the cellar. Notice, though, that the disposition to account for one’s performances—in particular, to laypeople—requires an entirely different set of abilities than the ones necessary to fulfill one’s service function successfully. The former set includes such intellectual virtues as a sensitivity to a layperson’s epistemic resources, communicative clarity, intellectual generosity, and possibly other abilities.[1]

A civil engineer could well possess primary competence in demolishing or rebuilding a bridge and the ability to discuss it with other experts yet lack the competence to provide effective explanations of their techniques, strategies, and related risks to a lay audience. Something similar happens in sports. A lot of amazing athletes can do extremely complicated things that are generally out of reach for most human beings, yet they may not be able properly to account for what they do.

They can show you these actions hundreds of times, but if you ask them to tell you how they do it, you might be extremely disappointed or confused by their explanations. This is why not all the greatest sports heroes are good coaches and not all the best civil engineers can effectively account for what they do to a lay audience. For, as I will stress in the final section, primary competence and secondary competence are, in a sense, different kinds of expertise.

A Private or People’s Expertise

The required combination of these competences for a proper understanding of expertise on Quast’s view is somewhat surprising if we bear in mind that he wants to confer objective expertise on “private experts,” who can offer us quite specific services such as fixing some leaky drain pipes. In Quast’s private-expert case, it seems odd to require that Christian’s father-in-law be able to give an account of how he is going to fix the pipe in order to fulfill the function of a private expert.

For, on the one hand, the relevant contrast class includes two individuals, namely Christian and his wife, who—as we are told—are both inexperienced in these kinds of handicraft matters. On the other, Christian’s father-in-law might even lack the necessary abilities properly to explain how he will repair the pipes.

These considerations make it hard to see why giving an appropriate account of the provided service should be necessary for Christian’s father-in-law to be an expert on a functionalist view that aims to be in a position to grant private experts objective expertise.

Consider now the other ingredient of expertise on Quast’s view, namely one’s intellectual character in the sense of their willingness to manifest primary and secondary competence when appropriate. The above considerations about the intellectual abilities required for one to deliver proper explanations of one’s service should provide sufficient reason to consider possession of an intellectually virtuous character as a relevant component of the competences required for one to be an expert rather than as mere willingness to manifest such competences.[2] Thus, in the remainder of this critical notice I shall simply tackle the willingness component and, in particular, the willingness to manifest secondary competence when appropriate.

Suppose a physicist, call him Ivory Tower, is completely reluctant to share anything related to his work with other people, especially laypersons. Ivory’s social interactions are limited to what’s required for him to keep his position at his institution. Ivory works in optics; in particular, he is developing reliable ways to see through walls by using special cameras. More specifically, he is working on a project that would allow rescue teams to locate people when the terrain is dangerous and would allow cars to avoid accidents by identifying obstacles or vehicles around the corner.

Quast’s view commits us to concluding that Ivory lacks expertise in optics, or whatever more specific subfield he is working in, because he fails to display the required willingness to give an account of his performances when appropriate. This verdict is unsatisfying in general, as it strikes us as evident that Ivory’s extremely sophisticated work in optics should suffice to grant him the status of an expert.

Furthermore, the verdict is unsatisfying even from the perspective of a functionalist account of expertise, as Quast’s purports to be. For despite his unwillingness to explain his work to others, Ivory is surely serving laypeople’s needs. He does so by attempting to solve problems in optics and providing the community with new resources rather than by making himself accountable for his work to a lay audience; but this is simply another relevant way in which an expert can serve their community, as I will argue in the next section. Thus, since there seem to be no good reasons to deny Ivory the expertise he has acquired through years of intense work, we can conclude that the willingness to manifest secondary competence is not a necessary condition for one to possess expertise.

Two allegedly key ingredients in Quast’s account of expertise, namely secondary competence and the willingness to manifest that competence when appropriate, are less fundamental than one might have initially thought. In fact, they should not be considered necessary requirements for one to be an expert in some domain. In the final section, I shall explore some implications of the considerations offered so far, with the aim of contributing to reaching a better understanding of the notion of, and the role of, an expert in the context of the society we currently live in.

Expertise Today: Toward an Objective Approach

Many reputable scholars characterize the age we live in as a post-truth era (Fuller 2018) in which the very idea of expertise is dead (Nichols 2017), as it has been replaced by a free market in information and self-attributed competences that takes place in the blogosphere (Coady 2012), the internet (Lynch 2016), and more recently social media, where fake news easily proliferates (Vosoughi et al. 2018). As Nichols thoroughly describes (see §7), we are surrounded by an enormous amount of news and by experts who are more and more specialized in every domain, and yet we know less than before and distrust expertise.

If there is one thing epistemologists can surely do—in fact, must do—to counteract the advance of post-truth thinking in our society, it is attempting to reach a better understanding of the notion of expertise. Such a service would not solve all the problems, yet it would at least contribute to indicating where genuine competence lies and who has it and therefore to marking a neater distinction between experts and charlatans. This is why I am largely sympathetic to Quast’s efforts, as it is clear that we need experts now more than ever.

It is for the same reasons, though, that I believe Quast’s balanced account of expertise is on the wrong track. In this final section, I make two points to suggest how we should redirect our search for a better account of expertise. First, I explain why we need a more objective account of expertise. Second, I suggest an alternative way to look at the service experts are supposed to fulfill in our communities.

The first consideration is called for by the peculiar situation we’re currently in. As I showed in §1, the functionalist spirit of the balanced account of expertise ends up undermining the very notion of objective expertise that Goldman has in mind when he argues that “being an expert is not simply a matter of veritistic superiority to most of the community. Some non-comparative threshold of veritistic attainment must be reached” (2001, 91).

Since Goldman admits that it might be difficult to determine where the bar has to be set, one might suspect the balanced account has a clear advantage over a purely objective approach to expertise, as on Quast’s view being suitably disposed and willing to serve the needs of a relevant contrast class is all it takes for one to achieve the status of an expert.

This is a mistake, though, because it is far from obvious that a novice or group of novices can reliably ascribe expertise to someone who is supposed to be more competent than they are in a domain. In other words, the more context-sensitive and subject-sensitive the process of expertise attribution, the higher the risk of misplacing trust in non-experts. This is an unwelcome consequence of the balanced account, one that makes the account lose its alleged positional advantage over objective approaches to expertise.

Against the Balanced Account

My proposed epistemic consideration against the balanced account of expertise can be supported by a further reason for favoring an objective account of expertise—namely, the fact that this latter account provides a community with robust criteria for assessing who is to be trusted to deliver a service in any field. This translates into a practical advantage for the entire community, which can create ways to signal who and where experts are[3] and therefore help lay members navigate the current ocean of self-attributed competences and epistemic egalitarian ideals. Needless to say, this consideration does not suffice as a remedy against the detrimental effects of post-truth thinking; yet it should at least offer motivation for directing our efforts toward an objective approach to expertise rather than a “balanced” one.

The second consideration brings the distinction between primary and secondary competence back on stage. On a realist, or objective, approach, the attribution of expertise cannot depend on the specific function one is required to fulfill relative to some contrast class in a particular context. A handy craftsperson who has learned over the years how to repair the very same leaky drain pipe at one’s home does not count as an expert, because their competence is too limited and too unreliable across the range of similar situations in which a proficient plumber is expected to succeed.

Yet, an objective account is in a position to distinguish at least two broad kinds of expertise, namely the expertise of those who can reliably provide some sort of service in a domain and those who can explain what’s going on in a domain to others, especially laypeople. Call the former domain-oriented expertise and the latter novice-oriented expertise.

The set of domain-oriented experts includes reliable plumbers, scuba divers, wine tasters, lawyers, doctors, musicians, and scholars, among others. Their expertise consists of an ability to serve the needs of a community in their respective domains—that is, what Quast calls primary competence. In particular, the function of domain-oriented experts encompasses two main roles:

(i) that of expert practitioners, who address specific needs of the community members—for example, repairing leaky drain pipes, maintaining or restoring health, and performing jazz music; and

(ii) that of expert innovators, whose job is to improve the community’s capacity to serve the needs of their members by developing new resources, advancing the techniques, or carrying out groundbreaking research in a domain—for example, creating more-robust drain pipes, developing new therapies against cancer, or composing jazz music.

As should be evident, both functions demand that the subject have intellectual or practical dispositions to reliably deliver the required services. However, the roles are quite different: not all expert practitioners are also expert innovators, and vice versa. Thus, any individual who fulfills either role possesses domain-oriented expertise.

In contrast, the set of novice-oriented experts includes those individuals who have secondary competence, namely the capacity to help laypeople understand the services domain-oriented experts provide to the community. This set typically includes teachers and science popularizers, but all domain-oriented experts who possess a sufficient amount of secondary competence may have novice-oriented expertise too.

However, possessing domain-oriented expertise does not ensure that one also has novice-oriented expertise, as the wine-consultant and civil-engineer cases discussed in §2 demonstrate. For this service activates a different set of dispositions—namely, novice-oriented abilities, which are not strictly necessary for one to possess domain-oriented expertise.

Conclusion

The proposed categorization of the two main services experts fulfill in a community allows us to give due consideration to the functionalist element of expertise without giving up on an objective perspective that grants conceptual primacy to the dispositional component of expertise. We all wish to be surrounded by subjects who can offer clear explanations of how they are going to satisfy our needs, but we had better also have an account that explains why some experts greatly serve the domain-oriented needs of our community without being able to serve the novice-oriented ones.

This is not only important for us to improve the explanatory power of our definition of expertise, but also for a community to evaluate how to deploy its resources to ensure that both kinds of experts are in a suitable position to fulfill their respective service function.

This reply to Quast’s insightful paper has aimed to shed light on some limits of his account and to sketch a strategy for accommodating Quast’s suggestions about the necessary balance between the dispositional and functionalist dimensions of expertise within an objective approach. Far from offering a comprehensive alternative account, I hope this reply can encourage others to address the important issues Quast has raised in his paper and can contribute to improving our understanding of the notion of expertise.

Contact details: michel.croce@ed.ac.uk

References

Coady, David. 2012. What to Believe Now: Applying Epistemology to Contemporary Issues. Malden (MA): Wiley-Blackwell.

Croce, Michel. 2019. “On What It Takes to Be An Expert.” The Philosophical Quarterly 69(274): 1-21.

Fuller, Steve. 2018. Post-Truth: Knowledge as a Power Game. London: Anthem Press.

Goldman, Alvin. 2001. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63: 85-110.

Lynch, Michael. 2016. The Internet of Us: Knowing More and Understanding Less in The Age of Big Data. New York: Liveright Publishing Corporation.

Nichols, Tom. 2017. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters. New York: Oxford University Press.

Quast, Christian. 2018. “Towards A Balanced Account of Expertise.” Social Epistemology 32(6): 397-419.

Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. “The Spread of True and False News Online.” Science 359(6380): 1146-1151.

[1] These virtues are part of what I have elsewhere called novice-oriented abilities (see Croce 2019, 13).

[2] For the sake of completeness, it should also be noted that other intellectual virtues may be required for one to possess primary competence in a domain, especially in those fields in which competence involves some propositional knowledge and understanding. In particular, I have in mind virtues such as thoroughness, intellectual perseverance, creativity, open-mindedness, intellectual curiosity, and autonomy (see Croce 2019, 18).

[3] As Goldman points out, this is the role of academic certifications, professional accreditations, work experiences, and so on (2001, 97).

Author Information: Michel Croce, University of Edinburgh, michel.croce@ed.ac.uk.

Croce, Michel. “Objective Expertise and Functionalist Constraints.” Social Epistemology Review and Reply Collective 8, no. 5 (2019): 25-35.

The pdf of the article gives specific page references. This essay is published in two separate posts, the second of which is available at this link. Shortlink: https://wp.me/p1Bfg0-496

Image by Bill Kerr via Flickr / Creative Commons

 

Christian Quast has recently embarked on the project of systematizing the debate about the notion of expertise, an extremely fascinating and important issue addressed by scholars of many disciplines yet still in need of an interdisciplinary take. He sheds light on a number of relevant features of this notion and defends what he calls a “balanced” account of expertise, namely one that defines this concept in light of an expert’s dispositions, manifestations of their dispositions, and social role or function.

In doing so, Quast argues against three versions of reductionism about expertise: ReductionismF, which reduces expertise to the function an expert fulfills in a community; ReductionismM, which confuses expertise with the manifestation of an expert’s competence; and ReductionismD, in which expertise boils down to possessing suitable dispositions in a specific domain—that is, practical abilities or epistemic properties such as knowledge, true beliefs, or understanding.

As an attempt at bringing together interdisciplinary discussions of a specific topic, Quast’s project is ambitious and provides a genuine contribution to the ongoing discussions around the topic of expertise in philosophy, psychology, and the social sciences. Inevitably, Quast’s rich analysis and original proposal raise a number of worries that deserve to be further inspected.

In this critical reply, I offer some considerations that put pressure on Quast’s balanced account and hopefully help anyone interested in this debate take a step forward toward explaining what it takes for one to be an expert. The reply is structured as follows. First, I argue that his allegedly balanced view is liable to a potentially compromising tension between its function component and the ingredients of objective expertise (§1).

Then, I show that Quast’s threefold characterization of an objective expert is too strong, as it imposes conditions that several individuals whom we would consider experts are unable to fulfill (§2). Finally, I provide reasons for endorsing an objective account of expertise in light of some specific features of our society, and I show how this account can give due consideration to the different services experts ordinarily perform (§3).

Against a Balanced Account of Expertise

The first consideration I want to offer in response to Quast is that, to put it simply, he cannot have his cake and eat it too. Quast devotes a good amount of his paper to convincing us that the aforementioned reductionist accounts of expertise are flawed and that a more plausible story of what it takes for one to be an expert has to rely upon “an entangled interrelationship” between an expert’s dispositions and the contextual service function they perform in a community (2018, 412). In this section, I aim to show that such an entangled relationship of dispositions and functions on his balanced approach is largely problematic.

Let us recall Quast’s comprehensive definition of an expert, which is offered right at the end of his article:

(ExpertF-C-M) Someone e is an objective expert in contrast to some client c within a certain domain d only if e is undefeatedly disposed to fulfill a particular service function in d for c adequately at the moment of assessment (412).

At first glance, Quast’s move is attractive. After all, we usually think of experts as subjects who are more competent than most people in a domain,[1] but, at the same time, we grant one the status of an expert (i) based on their social role and (ii) against a relevant contrast class of individuals who are unable to provide a similar service. Meanwhile, both ReductionismF and ReductionismD are liable to counterexamples.

The former is wrongly committed to granting the status of an expert translator to a subject who manages a translation-services company by delegating every job to unknown freelancers and who lacks any translating skills (402). The latter is wrongly committed to granting the status of a wine expert to an individual who can correctly estimate the value of a wine cellar without having the ability or the willingness to provide an explanation of their evaluation (407).

In contrast, neither the manager nor the wine consultant satisfies the requirements of expertise on the balanced account. The former is not an expert, because he lacks the dispositions required to provide translating services—that is, knowledge of at least two languages, translating skills, and the like. The latter is not an expert, because her competence to assess the value of wine cellars gets defeated by her inability or unwillingness to give an account of her services at the moment of assessment (407).[2]

Dispositions and Functions in Tension

However, a closer inspection of Quast’s proposed view of expertise reveals a tension between the disposition component and the function component. Consider the disposition component first and, in particular, his analysis of objective expertise.

He conceives of objective expertise as encompassing the following three elements: (i) primary competence, which relates to an expert’s reliability in delivering the services they are supposed to provide; (ii) secondary competence, which relates to an expert’s ability to explain their services to a client, thereby establishing and fostering mutual trust; and (iii) intellectually virtuous character, which ensures that an expert is willing to manifest both the above competences when appropriate.

For the time being, let’s set aside a reasonable concern one might have about Quast’s unduly narrow characterization of the role intellectual-character virtues play in his account of objective expertise.[3]

The balanced account is quite demanding, as according to it someone is an objective expert insofar as they are competent in a given domain, able to provide their clients with tailored explanations of their services, and willing to do so in the appropriate circumstances. Going back to the wine-consultant case, it should be evident that the reason why the consultant might fail to be an expert is that she lacks secondary competence, intellectual virtues, or both, as her inability or unwillingness to share any considerations about her estimate of the wine cellar with the client demonstrates.

As anticipated, on the balanced account these considerations about objective expertise need to be balanced, or supplemented, by further remarks on the service function of experts. Here Quast takes quite a concessive route and offers the case of a “private expert”: in the example, Christian Quast’s wife asks him to find someone who can fix or replace a leaky drain pipe; he approaches the issue by relying on his father-in-law, whose craft hobby enables him to solve the problem (410).

Quast is ready to admit that his father-in-law is more of an expert than he and his wife are, and he even goes so far as to concede that the man satisfies the requirements of a function-based account of expertise.

The function component plays a key role in this account, in that the service his father-in-law fulfills determines

(i) a relevant contrast class of individuals who lack the disposition to perform a specific function—that is, the class composed of Christian and his wife;

(ii) a proper characterization of the domain of expertise, namely that of replacing leaky drain pipes;

(iii) the degree of reliability required for Christian’s father-in-law to fulfill the function—that is, Christian’s own standards for replacement of leaky drain pipes;

(iv) a range of similar situations in which the man is supposed to be able to deliver his services; and

(v) minimum conditions for him to fulfill the individual requirements of objective expertise, which in this case require relative competence to repair the leaky drain pipe at the Quasts’ place.

Thus, on Quast’s balanced account, possession of expertise depends on contextual factors, such as the specifics of the contrast class of laypeople and the situation in which expertise is ascribed, as well as on practical factors, such as the needs of the relevant clients and the urgency of the required service. These elements determine whether a hobbyist-craftsperson is an expert in repairing leaky drain pipes or a wine consultant is an expert in value assessment of wine cellars.

Problems of Balance in Expertise

Unfortunately, the “balanced” account emerging from these components is less tenable than one might have initially thought. The first problem is that it is hard to make sense of the notion of objective expertise on such a functionalist account. For possession of objective expertise in a domain becomes hostage to two inherently relative elements, namely (i) the service someone is disposed and willing to fulfill for (ii) a community—or contrast class, to stick with Quast’s vocabulary.

On standard comparative accounts of expertise, (ii) obviously plays a major role, as possession of expertise merely amounts to being more of an expert in a (broader or narrower) domain than some group of people and therefore expertise reduces to an entirely comparative notion.

On such a perspective, both Christian’s father-in-law and a plumbing engineer are experts in repairing leaky drain pipes, although the latter’s competence is much broader than the former’s. For each of them is more of an expert than the respective contrast class, which includes Christian and his wife in the former case and, say, most people in the engineer’s town, district, or state in the latter. Clearly, though, this diagnosis comes at the cost of giving up on the inquiry into the objective requirements of expertise.

Despite including (ii) in his account of expertise, Quast purports to endorse a view that makes room for objective expertise. Thus, he has to prevent this relative condition from delivering the standard comparative diagnosis in situations such as the leaky-drain-pipes one.

He does so through the service-function element—that is, (i)—by arguing that one is an objective expert insofar as they are undefeatedly disposed to serve a relevant need of the respective community or contrast class. Thus, on the balanced account we can still attribute objective expertise to both Christian’s father-in-law and a plumbing engineer as long as they can fix leaky drain pipes in the respective community or contrast class.

I am unpersuaded by this move for two reasons. The first is that introducing a relative element such as (i) does not neutralize the anti-objective effect of (ii); rather, it is likely to intensify such an effect by adding a further relative variable to the account. The second is that the only way for Quast to grant expertise to his father-in-law and a plumbing engineer is to impose odd restrictions on domains of expertise.

Specifically, he has to concede that his father-in-law is an expert because he serves the community composed of Christian and his wife by doing something like “repairing leaky drain pipes at the Quasts’ place” or “repairing leaky drain pipes of some kind.” In contrast, the plumbing engineer is an expert because he serves a wider community by, say, “repairing leaky drain pipes of any kind.”

This move would thus generate an unnecessary proliferation of domains of expertise depending on the specific needs of any relevant contrast class. For example, my auntie Renata, who helps most inhabitants of a rural village in Liguria react to (i.e., “like”) and comment on the content appearing in their Facebook news feed, would possess objective expertise in something like “adding likes and comments on posts on Facebook” relative to the contrast class composed of the citizens of Bevena, although her competence regarding social networks ends pretty much there.

These considerations show that the balanced account narrows the notion of expertise to the point that we lose our grip on what is objective about an expert’s competence. To avoid this result and save both the functionalist spirit of his view and its context sensitivity, Quast should abandon the idea of making room for objective expertise and endorse an entirely comparative account. This is why, in a word, Quast cannot have his cake and eat it too.

Contact details: michel.croce@ed.ac.uk

References

Coady, David. 2012. What to Believe Now: Applying Epistemology to Contemporary Issues. Malden (MA): Wiley-Blackwell.

Croce, Michel. 2019. “On What It Takes to Be An Expert.” The Philosophical Quarterly 69(274): 1-21.

Fuller, Steve. 2018. Post-Truth: Knowledge as a Power Game. London: Anthem Press.

Goldman, Alvin. 2001. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63: 85-110.

Lynch, Michael. 2016. The Internet of Us: Knowing More and Understanding Less in The Age of Big Data. New York: Liveright Publishing Corporation.

Nichols, Tom. 2017. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters. New York: Oxford University Press.

Quast, Christian. 2018. “Towards A Balanced Account of Expertise.” Social Epistemology 32(6): 397-419.

Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. “The Spread of True and False News Online.” Science 359(6380): 1146-1151.

[1] It may be helpful to note that this competence may boil down to different properties and dispositions depending on the specifics of the domain under consideration. For instance, the competence of an expert carpenter might involve a good deal of experience, practical skills, and know-how, whereas the competence of an expert in contemporary history might be mostly based on great instruction, analytical skills, and theoretical understanding of the extant literature and recent historical events.

[2] In the analysis of his wine-expert case, Quast points out that we might ascribe a default expertise to the wine consultant yet withdraw our attribution of expertise if she refuses to provide suitable explanations of her evaluation (407–8).

[3] As I have argued elsewhere (see Croce 2019, §§4–5), we have reasons to think the character virtues of an expert make them not only willing but also able to fulfill their service function within a community.

Author Information: Stephen Turner, University of South Florida, turner@usf.edu.

Turner, Stephen. “Circles or Regresses? The Problem of Genuine Expertise.” Social Epistemology Review and Reply Collective 8, no. 4 (2019): 24-27.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-48a

Image by Revise_D via Flickr / Creative Commons

 

This article responds to Jamie Carlin Watson (2019) “What Experts Could Not Be.” Social Epistemology 33(1): 74-87. DOI: 10.1080/02691728.2018.1551437

Jamie Carlin Watson’s article raises some crucial questions about expertise, and about its relation to truth and competence, questions on which discussions of expertise have usually foundered, or which they have at least run up against and tried to avoid. One can summarize the problem as the question of whether expertise, or a given claim to expertise, is genuine or valid.

The problem, as Watson shows, is tougher than it appears. The easiest way out is to epistemologize it, by linking expertise to true beliefs. This off-loads the problem of expertise into a problem of truth, which presumably is easier to resolve. The problem with this approach is that expertise does not in fact, and cannot in principle, work in this way. When we rely on experts, it is because we don’t know for ourselves what is true. Nor can we impose tests of reliability on them, at least not easily or directly.

Determining whether they possess a set of true (or at least credible) beliefs would require us to possess the relevant true beliefs ourselves. It would require also meta-knowledge about the content of their beliefs—not merely sharing them, but having knowledge of their truth. Judging something to be true, in expertise contexts, is a matter requiring expertise.

Indeed, this is almost the definition of expertise: we can “understand” what the expert is telling us, but what makes for genuine expertise is the ability to make epistemic judgments about the truth of what the expert says, without relying on their status—their reputation—as experts. The model of testimony doesn’t help here. Assessing experts’ reliability as testifiers would require even more knowledge: knowledge of their past testimony, knowledge of what standard of reliability to apply, and, on the analogy of eye-witnessing, knowledge of how good eye-witnessing in general is.

Can a History of Performance Justify Expertise?

On the surface, it looks like it would be simpler to just assess expert performances. Did the surgeon’s patients live or die? Did the football coach win or lose? But this runs into the same regress problems. Who is able to judge such things? Did the surgeon take on difficult cases, and so have a lower success rate than the surgeon who took only easy cases? This is a real-world issue that figures in actual health regulation discussions, not merely an academic hypothetical.

And the same goes for coaches. Did they exceed expectations or fall below them, given the team they were coaching and its talent? This kind of judgment seems to require a great deal of meta-expertise. And one can ask where the expectations came from. So this expertise is subject to the same sorts of regress problems.

And there is yet another problem with these judgments—circularity or uninformativeness. I can illustrate this by a response my own mother—a physician in a surgical specialty—once gave me to my question “how can I tell if a surgeon is any good?” Her answer—“you need to look at their technique.”

Of course, the prospective patient never has an opportunity to do this, but in any case would have no idea what a good surgical technique looked like, even if they could look. So this is completely uninformative. But it is also circular. One never gets out of the circle of expertise in this case, and this is characteristic: evaluation of expert judgment, even if it is formalized peer judgment, is more expert judgment.

No Reputation Need Be Genuine

The reputational theory of expertise, if we can call it that, does not rely on truth, at least the truth of the expert’s beliefs. It says instead that to be an expert is to be reputed to be an expert. Expert authority is analogous to political legitimacy in the sociological rather than normative sense; this kind of legitimacy, if it produces obedience, is “real.”

The analogous view of expertise, similarly, ignores the normative question of whether expertise is real in the sense of being valid. This kind of assessment does not rely on expert judgment. It needs only the ordinary judgment of people who need only to have in their possession ordinary facts about reputation.

This seems pretty empty. Can’t people have fake reputations, based on erroneous beliefs about their competence or honesty? But there is more to it. The paper explicitly says it is avoiding a discussion of reputational views of expertise, and rejects them, but it seems to me that this rejection is subject to the same kind of argument the paper makes with respect to performance: reputation, too, is caused.

One might ask what causes reputation—it is not something separate from either performance or credible beliefs. Indeed, how do you get reputation without performance, in some sense? What is the reputation for? How does one get it? One might say that the “reputational theory” is neutral between means of acquiring a reputation—it could be performance, recognition of the possession of true beliefs, or both, with the caveat that “true” is audience relative. And this seems to mean that reputation doesn’t answer the question of genuineness. But to get a reputation you need to do something real, and that also seems to be the point of the argument against the separation of belief and performance.

This does help. One need not be an expert to raise and judge the answers to ordinary questions about how someone got their reputation. One can be wrong, of course. But there is a plethora of ordinary fact available to the person who wants to know, for example, how a surgeon got their reputation or came to be accredited with their expertise.

Relying on this kind of fact, even if it is fallible, avoids the problem of the circularity of basing assessments of expertise on other assessments of expertise. It can include such assessments, for example, evidence of peer judgment by other experts. But it looks on this kind of evidence not as an expert but as a consumer of the processes that generate the judgment, and asks whether they are fair, or produce good results for other consumers.

From this point of view, expertise is an agency problem—a problem of asymmetric information (though the term “information” makes it seem as though information for the expert is the same thing as information for the non-expert, which misses the point of expertise)—which the producer of expertise has a large role in resolving.

It can’t be resolved directly, by the reiteration of expert claims. Their truth is the issue, and the point is that the consumer as non-expert can’t assess them. This is characteristic of a large class of relationships, where the issue is resolved in different ways (cf. Turner 1990). So the expert needs to establish credibility indirectly, through such things as processes of certification, which do not require expertise to assess, at least well enough to get a sense of their value.

I’ve argued elsewhere that these processes are central to science as a whole (Turner 2002). But I also think that they are the only real answer to the question of validity from an external point of view. Direct judgments of truth are the business of the expert. But this should not distract us from the fact that expertise is a relation between experts and consumers of expertise. Experts are not just knowers. They are people making claims within a social relationship.

The Deeper Problems of Expertise

This key feature of expertise points to a deep problem, which on examination is perhaps not so deep, and primarily a semantic one. There is an overwhelming sense that an expert is someone who possesses something, and that this possession is what marks genuine expertise out from fake expertise, such as merely reputed expertise.

A reputation is a possession, just a possession of the wrong kind, because it fails to guarantee genuineness. And this is what motivates the argument that the existence of expertise does not depend on the existence of non-experts. But there is a difference between having an ability—say that of a four-octave coloratura soprano—and justifiable credibility about what the possessor of this ability might say about it. Whether it is actualized or not, expertise is a social relation. The strength of the testimony view of expertise was that it recognized this implicitly.

But “reliability,” the concept it is associated with, doesn’t work because it implies a record of acts or pronouncements on which users rely. So perhaps we need a better word: trustability, or if we loathe linguistic inventions, trustworthiness with respect to epistemic pronouncements. This keeps the idea of possession, and the recognition that it pertains to a social relation, and allows for multiple grounds for trust, and most importantly, grounds that do not depend, circularly, on the relevant expertise.

Contact details: turner@usf.edu

References

Turner, Stephen. 1990. “Forms of Patronage.” Pp. 185-211 in Theories of Science in Society, edited by Susan Cozzens and Thomas F. Gieryn. Bloomington: Indiana University Press.

Turner, Stephen. 2002. “Scientists as Agents.” Pp. 362-384 in Science Bought and Sold, edited by Philip Mirowski and Esther-Mirjam Sent. Chicago: University of Chicago Press.

Watson, Jamie Carlin. 2019. “What Experts Could Not Be.” Social Epistemology 33(1): 74-87. DOI: 10.1080/02691728.2018.1551437.

Author Information: Stephen John, Cambridge University, sdj22@cam.ac.uk

John, Stephen. “Transparency, Well-Ordered Science, and Paternalism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 30-33.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Zf


Image by Sergio Santos and http://nursingschoolsnearme.com, via Flickr / Creative Commons

 

Should a physician tell you that you have cancer, even if she thinks this would cause you needless distress? Of course she should! How, though, should she convey that news? Imagine three, stylised options. Dr Knowsbest is certain you should have your cancer operated on, so tells you the news in a way which vividly highlights the horrors of cancer, but downplays the risk of an operation.

Dr Neutral, by contrast, simply lists all of the facts about your cancer, your prognosis, your possible treatment options, their likely benefits and risks and so on. Finally, Dr Sensitive reports only those aspects of your condition and those risks of surgery which she judges that you, given your values and interests, would want to know about.

Many Methods to Reveal

We can, I hope, all agree that Dr Knowsbest’s communicative strategies and choices are ethically problematic, because she acts in a paternalistic manner. By contrast, Dr Neutral does not act paternalistically. In this regard, at least, Dr Neutral’s strategies are ethically preferable to Dr Knowsbest’s strategies. What about the choice between Knowsbest and Sensitive? In one sense, Dr Sensitive acts paternalistically, because she controls and structures the flow of information with the aim of improving your well-being.

However, there is an important difference between Dr Sensitive and Dr Knowsbest; the former aims solely to improve your epistemic well-being, such that you can better make a choice which aligns with your own values, whereas the latter aims to influence or override your judgment. Knowsbest’s “moral paternalism” is wrong for reasons which are absent in the case of Sensitive’s “epistemic paternalism” (Ahlstrom-Vij, 2013).

Therefore, plausibly, both the Neutral and Sensitive strategies are ethically preferable to Knowsbest’s. What, though, of the choice between these two communicative strategies? First, I am not certain that it is even possible to report all the facts in a neutral way (for more, see below). Second, even if it is possible, Dr Sensitive’s strategy seems preferable; her strategy, if successful, positively promotes – as opposed to merely failing to interfere with – your ability to make autonomous choices.

At least at an abstract, ideal level, then, we have good reason to want informants who do more than merely list facts, but who are sensitive to their audiences’ epistemic situation and abilities and their evaluative commitments; we want experts who “well-lead” us. In my recent paper in Social Epistemology, I argued that certain widely-endorsed norms for science communication are, at best, irrelevant, and, at worst, dangerous (John 2018). We should be against transparency, openness, sincerity and honesty.

It’s a Bit Provocative

One way of understanding that paper is as following from the abstract ideal of sensitive communication, combined with various broadly sociological facts (for example, about how audiences identify experts). I understand why my article put Moore in mind of a paradigm case of paternalism. However, reflection on the hypothetical example suggests we should also be against “anti-paternalism” as a norm for science communication; not because Knowsbest’s strategy is fine, but, rather, because the term “paternalism” tends to bundle together a wide range of practices, not all of which are ethically problematic, and some of which promote – rather than hinder – audiences’ autonomy.

Beyond the accusation of paternalism, Moore’s rich and provocative response focuses on my scepticism about transparency. While I argued that a “folk philosophy of science” can lead audiences to distrust experts who are, in fact, trustworthy, he uses the example of HIV-AIDS activism to point to the epistemic benefits of holding scientists to account, suggesting that “it is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science”. I agree entirely that such a dynamic is possible; indeed, his example shows it does happen!

However, conceding this possibility does not show that we must endorse a norm of transparency, because, ultimately, the costs may still be greater than the benefits. Much here depends on the mechanisms by which transparency and engagement are enacted. Moore suggests one model for such engagement, via the work of “trust proxies”, such as ACT-UP. As he acknowledges, however, although proxies may be better-placed than lay-people to identify when science is flawed, we now create a new problem for the non-expert: to adapt a distinction from Goldman’s work, we must decide which “putative proxies” are “true proxies” (Goldman, 2001).

Plausibly, this problem is even harder than Goldman’s problem of distinguishing the “true experts” among the “putative experts”; because in the latter case, we have some sense of the credentials and so on which signal experthood. Again, I am tempted to say, then, that it is unclear that transparency, openness or engagement will necessarily lead to better, rather than worse, socio-epistemic outcomes.

Knowledge From Observation and Practice

Does that mean my arguments against transparency are in the clear? No. First, many of the issues here turn on the empirical details; maybe careful institutional design can allow us to identify trustworthy trust-proxies, whose work promotes good science. Second, and more importantly, the abstract model of sensitive communication is an ideal. In practice, it is easy to fail to meet this ideal, in ways which undermine, rather than respect or promote, hearers’ autonomy.

For example, rather than tailor her communication to what her audiences do care about, Dr Sensitive might tailor what she says to what she thinks they ought to care about; as a result, she might leave out information which is relevant to their choices given their values, while including information which is irrelevant. An influential strain in recent philosophy of science suggests that non-epistemic value judgments do and must run deep in practices of justification; as such, even a bald report of what a study showed may, implicitly, encode or endorse value judgments which are not shared by the audience (Douglas, 2000).

Reporting claims when, and only when, they meet a certain confidence level may, for example, implicitly rely on assumptions about the relative disvalue of false positives and false negatives; in turn, it may be difficult to justify such assumptions without appeal to non-epistemic values (John, 2015). As such, even Dr Neutral may be unable to avoid communicating in ways which are truly sensitive to her audience’s values. In short, it may be hard to handover our epistemic autonomy to experts without also handing over our moral autonomy.

This problem means that trustworthy research requires more than that the researchers’ claims are true: the claims must also be, at least, neutral with respect to, and, at best, aligned with, audiences’ values. Plausibly, greater engagement and transparency may help ensure such value alignment. One might understand the example of ACT-UP along these lines: activist engagement ensured that scientists did “good science” not only in a narrow, epistemic sense of “good” – more, or more accurate, data and hypotheses were generated – but in a broader sense of being “well-ordered”, producing knowledge that better reflected the concerns and interests of the broader community (Kitcher, 2003).

Whether engagement improves epistemic outcomes narrowly construed is a contingent matter, heavily dependent on the details of the case. By contrast, engagement may be necessary for science to be “well-ordered”. In turn, transparency may be necessary for such engagement. At least, that is the possibility I would push were I to criticise my own conclusions in line with Moore’s concerns.

A Final Sting

Unfortunately, there is a sting in the tail. Developing effective frameworks for engagement and contestation may require us to accept that scientific research is not, and cannot be, fully “value free”. To the extent that such an assumption is a commitment of our “folk philosophy of science”, then developing the kind of rigorous engagement which Moore wants may do as much to undermine, as promote, our trust in true experts. Moore is surely right that the dynamics of trust and distrust are even more complex than my paper suggested; unfortunately, they might be even more complex again than he suggests.

Contact details: sdj22@cam.ac.uk

References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. Springer.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of science, 67(4), 559-579.

Goldman, A. (2001). “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63(1), 85–110.

John, S. (2015). Inductive risk and the contexts of communication. Synthese, 192(1), 79-96.

John, S. (2018). Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75-87.

Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

Please refer to:

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

 

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question; it is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications and, worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005: 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds), involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than those monitored only by their results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather “denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency” (Epstein 1996: 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons


We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT UP functioned for many as a trust proxy of this sort: it had the skills and resources to do this sort of monitoring, developing competence while retaining interests closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to more deeply engage with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review 111, no. 2 (2017): 295-307.

Epstein, Steven. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, John, Robert C. Richards, and Katherine R. Knobloch. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication 8 (2014): 62-89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2017): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014): 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms.” In Jon Elster (ed.), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren. “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.), Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961): 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author information: Kjartan Koch Mikalsen, Norwegian University of Science and Technology, kjartan.mikalsen@ntnu.no.

Mikalsen, Kjartan Koch. “An Ideal Case for Accountability Mechanisms, the Unity of Epistemic and Democratic Concerns, and Skepticism About Moral Expertise.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 1-5.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3S2


Image from Birdman Photos, via Flickr / Creative Commons


How do we square democracy with pervasive dependency on experts and expert arrangements? This is the basic question of Cathrine Holst and Anders Molander’s article “Public deliberation and the fact of expertise: making experts accountable.” Holst and Molander approach the question as a challenge internal to a democratic political order. Their concern is not whether expert rule might be an alternative to democratic government.

Rather than ask if the existence of expertise raises an “epistocratic challenge” to democracy, they “ask how science could be integrated into politics in a way that is consistent with democratic requirements as well as epistemic standards” (236).[1] Given commitment to a normative conception of deliberative democracy, what qualifies as a legitimate expert arrangement?

Against the backdrop of epistemic asymmetry between experts and laypersons, Holst and Molander present this question as a problem of accountability. When experts play a political role, we need to ensure that they really are experts and that they practice their expert role properly. I believe this is a compelling challenge, not least in view of expert disagreement and contestation. In a context where we lack sufficient knowledge and training to assess directly the reasoning behind contested advice, we face a non-trivial problem of deciding which expert to trust. I also agree that the problem calls for institutional measures.

However, I do not think such measures simply answer to a non-ideal problem related to untrustworthy experts. The need for institutionalized accountability mechanisms runs deeper. Nor am I convinced by the idea that introducing such measures involves balancing “the potential rewards from expertise against potential deliberative costs” (236). Finally, I find it problematic to place moral expertise side-by-side with scientific expertise in the way Holst and Molander do.

Accountability Mechanisms: More than Non-ideal Remedies

To meet the challenge of epistemic asymmetry combined with expert disagreement, Holst and Molander propose three sets of institutional mechanisms for scrutinizing the work of expert bodies (242-43). First, in order to secure compliance with basic epistemic norms, they propose laws and guidelines that specify investigation procedures in some detail, procedures for reviewing expert performance and for excluding experts with a poor track record, as well as sanctions against sloppy work.

Second, in order to review expert judgements, they propose checks in the form of fora comprising peers, experts in other fields, bureaucrats and stakeholders, legislators, or the public sphere. Third, in order to assure that expert groups work under good conditions for inquiry and judgment, they propose organizing the work of such groups in a way that fosters cognitive diversity.

According to Holst and Molander, these measures have a remedial function. Their purpose is to counter the misbehavior of non-ideal experts, that is, experts whose behavior and judgements are biased or influenced by private interests. The measures concern unreasonable disagreement rooted in experts’ over-confidence or partiality, as opposed to reasonable disagreement rooted in “burdens of judgement” (Rawls 1993, 54). By targeting objectionable conduct and reasoning, they reduce the risk of fallacies and the “intrusion of non-epistemic interests and preferences” (242). In this way, they increase the trustworthiness of experts.

As I see it, this is to attribute too limited a role to the proposed accountability mechanisms. While they might certainly work in the way Holst and Molander suggest, it is doubtful whether they would be superfluous if all experts were ideal experts without biases or conflicting interests.

Even ideal experts are fallible and have partial perspectives on reality. The ideal expert is not omniscient, but a finite being who perceives the world from a certain perspective, depending on a range of contingent factors, such as training in a particular scientific field, basic theoretical assumptions, methodological ideals, subjective expectations, and so on. The ideal expert is aware that she is fallible and that her own point of view is just one among many others. We might therefore expect that she does not easily become a victim of overconfidence or confirmation bias. Yet, given the unavoidable limits of an individual’s knowledge and intellectual capacity, no expert can know what the world looks like from all other perspectives and no expert can be safe from misjudgments.

Accordingly, subjecting expert judgements to review and organizing diverse expert groups is important no matter how ideal the expert. There seems to be no other way to test the soundness of expert opinions than to check them against the judgements of other experts, other forms of expertise, or the public at large. Similarly, organizing diverse expert groups seems like a sensible way of bringing out all relevant facts about an issue even in the case of ideal experts. We do not have to suspect anyone of bias or pursuance of self-serving interests in order to justify these kinds of institutional measures.

Image by Birdman Photos via Flickr / Creative Commons


No Trade-off Between Democratic and Epistemic Concerns

An important aspect of Holst and Molander’s discussion of how to make experts accountable is the idea that we need to balance the epistemic value of expert arrangements against democratic concerns about inclusive deliberation. While they point out that the mechanisms for holding experts to account can democratize expertise in ways that lead to epistemic enrichment, they also warn that inclusion of lay testimony or knowledge “can result in undue and disproportional consideration of arguments that are irrelevant, obviously invalid or fleshed out more precisely in expert contributions” (244).

There is of course always the danger that things go wrong, and that the wrong voices win through. Yet, the question is whether this risk forces us to make trade-offs between epistemic soundness and democratic participation. Holst and Molander quote Stephen Turner (2003, 5) on the supposed dilemma that “something has to give: either the idea of government by generally intelligible discussion, or the idea that there is genuine knowledge that is known to few, but not generally intelligible” (236). To my mind, this formulation rests on an ideal picture of public deliberation that is not only excessively demanding, but also normatively problematic.

It is a mistake to assume that political deliberation cannot include “esoteric” expert knowledge if it is to be inclusive and open to everyone. If democracy is rule by public discussion, then every citizen should have an equal chance to contribute to political deliberation and will-formation, but this is not to say that all aspects of every contribution should be comprehensible to everyone. Integration of expert opinions based on knowledge fully accessible only to a few does not clash with democratic ideals of equal respect and inclusion of all voices.

Because of specialization and differentiation, all experts are laypersons with respect to many areas where others are experts. Disregarding individual variation of minor importance, we are all equals in ignorance, lacking sufficient knowledge and training to assess the relevant evidence in most fields.[2] Besides, and more fundamentally, deferring to expert advice in a political context does not imply some form of political status hierarchy between persons.

To acknowledge expert judgments as authoritative in an epistemic sense is simply to acknowledge that there is evidence supporting certain views, and that this evidence is accessible to everyone who has time and skill to investigate the matter. For this reason, it is unclear how the observation that political expert arrangements do not always harmonize with democratic ideals warrants talk of a need for trade-offs or a balancing of diverging concerns. In principle, there seems to be no reason why there has to be divergence between epistemic and democratic concerns.

To put the point even sharper, I would like to suggest that allowing alleged democratic concerns to trump sound expert advice is democratic in name only. With Jacob Weinrib (2016, 57-65), I consider democratic law making as essential to a just legal system because all non-democratic forms of legislation are defective arrangements that arbitrarily exclude someone from contributing to the enactment of the laws that regulate their interaction with others. Yet, an inclusive legislative procedure that disregards the best available reasons is hardly a case of democratic self-legislation.

It is more like raving blind drunk. Legislators that ignore state-of-the-art knowledge are not only deeply irrational, but also disrespectful of those bound by the laws that they enact. Need I mention the climate crisis? If we understand democracy as a process of discursive rationalization (Habermas 1996), the question is not what trade-offs we have to make, but how inclusive legislative procedures can be made sufficiently truth-sensitive (Christiano 2012). We can only approximate a defensible democratic order by making democratic and epistemic concerns pull in the same direction.

Moral vs Scientific and Technical Expertise

Before introducing the accountability problem, Holst and Molander consider two ideal objections against giving experts an important political role: ‘(1) that one cannot know decisively who the knowers or experts are’ and ‘(2) that all political decisions have moral dimensions and that there is no moral expertise’ (237). They reject both objections. With respect to (1), they convincingly argue that there are indirect ways of identifying experts without oneself being an expert. With respect to (2), they pursue two strategies.

First, they argue that even if facts and values are intertwined in policy-making, descriptive and normative aspects of an issue are still distinguishable. Second, they argue that unless strong moral non-cognitivism is correct, it is possible to speak of moral expertise in the form of ‘competence to state and clarify moral questions and to provide justified answers’ (241). To my mind, the first of these two strategies is promising, whereas the second seems to play down important differences between distinct forms of expertise.

There are of course various types of democratic expert arrangements. Sometimes experts are embedded in public bodies making collectively binding decisions. On other occasions, experts serve an advisory function. Holst and Molander tend to use “expertise” and “expert” as unspecified, generic terms, and they refer to both categories side-by-side (235, 237). However, by framing their argument as an argument concerning epistemic asymmetry and the novice/expert-problem, they indicate that they have in mind moral experts in advisory capacities, in possession of insights known to a few yet of importance for political decision-making.

I agree that some people are better informed about moral theory and more skilled in moral argumentation than others are, but such expertise still seems different in kind from technical expertise or expertise within empirical sciences. Although moral experts, like other experts, provide action-guiding advice, their public role is not analogous to the public role of technical or scientific experts.

For the public, the value of scientific and technical expertise lies in information about empirical restraints and the (lack of) effectiveness of alternative solutions to problems. If someone is an expert in good standing within a certain field, then it is reasonable to regard her claims related to this field as authoritative, and to consider them when making political decisions. As argued in the previous section, it would be disrespectful and contrary to basic democratic norms to ignore or bracket such claims, even if one does not fully grasp the evidence and reasoning supporting them.

Things look quite different when it comes to moral expertise. While there can be good reasons for paying attention to what specialists in moral theory and practical reasoning have to say, we rarely, if ever, accept their claims about justified norms, values and ends as authoritative or valid without considering the reasoning supporting the claims, and rightly so. Unlike Holst and Molander, I do not think we should accept the arguments of moral experts as defined here simply based on indirect evidence that they are trustworthy (cf. 241).

For one thing, the value of moral expertise seems to lie in the practical reasoning itself just as much as in the moral ideals underpinned by reasons. An important part of what the moral expert has to offer is thoroughly worked out arguments worth considering before making a decision on an issue. However, an argument is not something we can take at face value, because an argument is of value to us only insofar as we think it through ourselves. Moreover, the appeal to moral cognitivism is of limited value for elevating someone to the status of moral expert. Even if we might reach agreement on basic principles to govern society, there will still be reasonable disagreement as to how we should translate the principles into general rules and how we should apply the rules to particular cases.

Accordingly, we should not expect acceptance of the conclusions of moral experts in the same way we should expect acceptance of the conclusions of scientific and technical expertise. To the contrary, we should scrutinize such conclusions critically and try to make up our own mind. This is, after all, more in line with the enlightenment motto at the core of modern democracy, understood as government by discussion: “Have courage to make use of your own understanding!” (Kant 1996 [1784], 17).

Contact details: kjartan.mikalsen@ntnu.no

References

Christiano, Thomas. “Rational Deliberation among Experts and Citizens.” In Deliberative Systems: Deliberative Democracy at the Large Scale, ed. John Parkinson and Jane Mansbridge. Cambridge: Cambridge University Press, 2012.

Habermas, Jürgen. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, trans. William Rehg. Cambridge, MA: MIT Press, 1996.

Holst, Cathrine, and Anders Molander. “Public deliberation and the fact of expertise: making experts accountable.” Social Epistemology 31, no. 3 (2017): 235-250.

Kant, Immanuel. Practical Philosophy, ed. Mary Gregor. Cambridge: Cambridge University Press, 1996.

Kant, Immanuel. Anthropology, History, and Education, ed. Günther Zöller and Robert B. Louden. Cambridge: Cambridge University Press, 2007.

Rawls, John. Political Liberalism. New York: Columbia University Press, 1993.

Turner, Stephen. Liberal Democracy 3.0: Civil Society in an Age of Experts. London: Sage Publications Ltd, 2003.

Weinrib, Jacob. Dimensions of Dignity. Cambridge: Cambridge University Press, 2016.

[1] All bracketed numbers without reference to author in the main text refer to Holst and Molander (2017).

[2] This also seems to be Kant’s point when he writes that human predispositions for the use of reason “develop completely only in the species, but not in the individual” (2007 [1784], 109).

Author Information: Jennifer Jill Fellows, University of British Columbia, jill.fellows@ubc.ca

Fellows, Jennifer Jill. 2013. “Eddies and Currents: A Reply to Sassower.” Social Epistemology Review and Reply Collective 2 (11): 29-37.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-146


I am grateful for the critical review of my article “Downstream of the Experts: Trust-building and the Case of MPA’s” recently written by Raphael Sassower. His detailed review, as well as the invitation by Social Epistemology to reply to the review, has afforded me the opportunity to carefully reexamine and reiterate some points in my own work that will, I hope, clarify the overall intentions of my argument, and the places where Sassower and I disagree.

Sassower and I agree that the challenges facing communication between scientific and lay communities are real, serious and messy, to say the least. And Sassower seems to support my call to amend Grasswick’s argument on the importance of knowledge-sharing in order to stress the need for this knowledge-sharing to be reciprocal. However, Sassower raises seven observations with regard to my arguments. Some of these observations are just that, observations. Some take the form of questions or suggestions. And some are critical of claims made in my paper.

At the heart of many of Sassower’s observations is a call for more homogenized democratic communities, and more transparency in access to data. Sassower seems to suggest that communities are (or should be) homogenized. That is, he argues that everyone in a community has an equal ability to become knowledgeable about the facts on their own. Everyone, then, begins from the same standpoint. He further suggests that, once we recognize communities as homogenized, we no longer need to engage in investigations of trust. In effect, he seems to claim that, once everyone has access to the same data, everyone can draw their own conclusions, and no one need trust the expertise of others. I, by contrast, argue that communities are not, at present, homogenized. I argue that power imbalances do exist and that trust cannot be removed from the discussion.