Can Novices be Taught to Choose Trustworthy Experts? Optimism for Reasoning—A Reply to Johnny Brennan, Martin Hinton


Article Citation:

Hinton, Martin. 2020. “Can Novices be Taught to Choose Trustworthy Experts? Optimism for Reasoning—A Reply to Johnny Brennan.” Social Epistemology Review and Reply Collective 9 (4): 65-71.

The PDF of the article gives specific page numbers.

In his article “Can Novices Trust Themselves to Choose Trustworthy Experts? Reasons for (Reserved) Optimism” (2020), Johnny Brennan does two things.[1] He illustrates the problem of the identification of experts, which has caused a great deal of head-scratching for scholars over many years (see Goodwin 2011, section 2), and he suggests a possible way forward to at least improve our ability to choose whom to believe on topics where we cannot be expected to reach independent decisions ourselves.

In the first of these tasks, I find myself in complete agreement with his analysis. His criticism of Elizabeth Anderson’s (2011) criteria and how they are supposed to be operationalized by novices, which provides the bridge to the main point of the paper, is justified and, if anything, a little too restrained. Ultimately, however, the fulfilment of the second task is less satisfying—but then it is an attempt to untie a knot which has resisted disentanglement for centuries—partly because it relies on the somewhat unlikely premise that people are inclined to be self-critical and prepared to shed certain core beliefs upon which their identities are based in the search for truth, with only the promise of being considered a good truth-seeker as their reward; and partly because it is put forward as an extension of Anderson’s criteria, in spite of the fact that those criteria have been fairly soundly dismissed in the earlier sections.

In spite of those reservations, however, the central notion, that we can be taught how to be better at making unbiased judgements, that we can learn to question our own decisions over whom we trust and why, and that, by extension, the errors that we make in following the wrong lead are errors born of our internal nature and, thus, are within our power to correct, is one to which I intend to give encouragement and support. It is, after all, a central tenet of the Informal Logic movement, within which I work; and I shall draw parallels between the work of informal logicians and social epistemologists looking at how decisions are made on the basis of expert testimony and how they can be improved. Jean Goodwin has suggested that there has been a ‘convergence between scholarship in Argumentation Studies (AS) and in Studies in Expertise and Experience (SEE)’ (2011, 285) and I intend to elaborate a little upon that.

The Paradox of the Expert

The paradox of experts runs thus: in order to judge if an expert opinion is good, one must first become an expert oneself, at which point, one no longer needs the opinion of the expert which one was to judge. Brennan suggests that ‘there may be a way to reliably assess experts nonetheless: by making second-order judgments about the experts themselves—that is, their relative competence and their integrity’ (228). This is the approach which has appealed to many scholars across disciplines, and these two aspects, competence and integrity, are clearly at the heart of the matter. The question remains, however, whether the making of such judgements is really a possibility for the true layman. Is it not the case that in order to judge an expert, just as surely as to judge an expert opinion, one also needs to be an expert?

As Brennan points out, Elizabeth Anderson is quite optimistic about our ability to do this, provided, of course, that we want to. Anderson’s criteria of credentials, honesty, epistemic responsibility, and expert consensus are similar to the aspects brought out in the critical questions (CQs) affixed to Douglas Walton’s (1997) argument scheme for appeals to expert opinion.[2],[3] Walton does not get much involved in the practicalities of how the questions might be answered. His concerns are theoretical, and the main role of critical questions is rather to expose the points of vulnerability in any particular form of reasoning than to provide a guide to evaluating arguments. Anderson’s suggestion that the relevant information can be found quite easily on the internet is difficult to take seriously.

Although Brennan does a good job of exposing the weaknesses in her position, it is worth reiterating what they are and along the way drawing comparisons between her work and Walton’s. The first criterion is the easiest to check. By credentials she is referring to degrees and positions, what Walton covers in his first two CQs: ‘Is E [the expert] credible?’ and ‘Is E an expert in the field that A [the proposition] is in?’

Often, it will be possible to find such information, although it is only relevant to what might be termed ‘academic’ experts,[4] and anyone working in academia will be well aware that the holding of a degree, undergraduate or graduate, is no guarantee of expertise. It is then suggested that we can search the internet for evidence of dishonesty by our expert. This seems entirely implausible—in the vast majority of cases, no such evidence will exist—only a handful of known fraudsters could be exposed in this way. Where no evidence exists, we are asked to perform an argument from ignorance and conclude that the expert is probably honest on the basis of no evidence to the contrary. As I describe in Hinton (2018a), inferences from ignorance are only acceptable where the level of expectation of evidence is high: in this case, the chances of there being evidence of dishonesty relating to a dishonest expert are, I should imagine, vanishingly low, so no such inference is justified. Walton’s equivalent CQ is known as the trustworthiness question: ‘Is E personally reliable as a source?’

The issue of epistemic responsibility is, along with honesty, not an easy one for the layman to get to grips with. Of the possibilities for irresponsible behaviour which Brennan lists, for example, it is hard to see how a novice can learn that an academic has tried to avoid peer review, and it is difficult to know whether his theories are ‘crack pot’. The former could be true when the expert has written for a national newspaper in order to engage with the public, thus avoiding the peer review process; and in the latter case, if his theories are assessed as ‘crack pot’, why are we even considering them? In this instance, Walton’s critical question seems more reasonable: ‘Is E’s assertion based on evidence?’ Here we assume that responsible experts give evidence-based testimony and also that they endeavour to make that clear in their communications, since it is what differentiates them from armchair theorists and saloon bar know-alls.

The final criterion is expert consensus. Anderson’s statement that ‘Once a consensus of trustworthy experts is consolidated, laypersons are well advised to accept the consensus even in the face of a handful of dissenting scientists’ (Anderson 2011, 149) is hard to disagree with. The argument that mavericks are sometimes right cuts no ice here: it may be true that an outlier will turn out to have the best theory, but the vast majority of them will not, and the novice has no business betting on unlikely outsiders when the field contains a clear favourite. The consensus may sometimes be wrong; but the novice can have no idea in what way it might be wrong. Again, Walton has a corresponding CQ: ‘Is A consistent with what other experts assert?’

Walton has one more critical question: ‘What did E assert that implies A?’ This illustrates the different focus of his concern: although some questions do refer to the status of the source as an expert, the overall goal is to assess the argument. At some point there has been a move from the actual words used by the source to the proposition which the champion of the source claims to have been asserted. This question tests both the integrity of that champion—have the expert’s words been twisted or taken out of context?—and their understanding of the topic and the language used in the original statement. It acknowledges not only that expert testimony may often be employed mischievously, but also that mistakes and misunderstandings when working out the implications of testimony delivered on subjects we are not masters of, in a technical language we are not familiar with, are a constant danger.

The difference between the CQs and a set of practical criteria should be further elaborated upon. While the CQs can be asked in reality, they do not have to be, and the burden of proof works a little differently. For instance, when I accept your argument from authority, I am accepting it presumptively on the understanding that your expert is being honest. I reserve the right to withdraw my acceptance the moment I discover a reason to believe that the opinion was delivered insincerely or fraudulently. I may pursue the points in the CQs, but I may also treat them as reservations which I retain when I give my assent and so accept the conclusion of the argument more conditionally than if there were solid evidence of the source’s good faith. When honesty is used as a criterion of assessment, however, a lack of evidence means we cannot make a judgement; we know that our expert should be honest, but we simply don’t know if he actually is.

The essential difference is that the concept of the critical question is born of the tradition of dialectic: a situation is imagined in which one person is putting forward an argument and another is either accepting or rejecting it. This is rather different from the position of one individual trying to decide whether or not a particular expert is worthy of belief.

What Chance for a Meta-Stance?

The way forward which Brennan suggests is summed up thus: ‘By drawing on the literature of metacognition, we can add another criterion aimed at helping novices to assess experts: a criterion aimed at pushing novices to make second-order assessments of their own capacities to judge expertise’ (234). To put it another way, we can avoid biases in our judgements by making judgements about those judgements. This requires a great deal of intellectual honesty.

Brennan goes on to say: ‘metacognitive strategies are important because they are explicitly self-reflective. They force us to look at ourselves as inquirers. They force us to assess ourselves’ (234). This is undoubtedly a good thing, and will certainly lead to better reasoning, about experts and everything else. However, the problem with this as an answer to the difficulty in choosing experts is also expressed in the following line: ‘There may be some room for disingenuous rationalization’ (234-35)—I suspect there will be rather a lot. The solution to this is, interestingly, dialectical. Those who refuse to examine themselves properly, to look at their judgements honestly ‘can no longer claim innocence. They will be guilty of a form of dialogic irrationality’ (235). There are two things to say about this move.

Firstly, it clearly narrows the distance between the approaches of assessing expert opinions and assessing arguments which appeal to them. Secondly, in order to have a practical effect, it relies on a desire to be rational. Not being honest with ourselves is to: ‘give up on any semblance of epistemic autonomy. It is to give up further social goods (such as the esteem of others) associated with being a good inquirer. Insofar as we have internal reasons to be good inquirers, we have an internal reason to look for bias in ourselves, because bias threatens to undermine good inquiry’ (235). This is certainly true, but it’s questionable how much anyone cares, and it’s certainly questionable what social goods come with being a good inquirer—one might well argue that honest inquiry and epistemic autonomy are likely to have a high social price, especially if they lead us to unpopular or unconventional conclusions.

There is some cause for hope, however. Brennan notes that ‘Metacognition is a skill, one that must be learned and progressively habituated. It is its own form of expertise’ (236). The hope, then, lies in education. This is a theme I return to in the concluding section.

Before that, though, I take issue with Brennan’s insistence that his criteria should be seen as an extension of Anderson’s. His own discussion shows quite clearly that they are not realistic due to the lack of available evidence, and his move towards a concept of dialectic rationality suggests that he is extending Walton’s approach instead. The meta-stance applies to the way in which we address the critical questions of the argument scheme and any discussion of those questions with a disputant is likely to show up our biases in a way that a simple search for information will not. In the section below, I discuss another dialogically based take on dealing with experts, especially when they are in conflict, which, if conducted on one’s own, would resemble closely the sort of introspection Brennan is advocating.

Building an Epistemic Hierarchy

My own system for the resolution of disagreements which feature statements from expert sources has something in common with Anderson’s in that it relies on the construction of an epistemological hierarchy (Hinton 2018b). However, this procedure can be applied to any type of ‘expert’, indeed, anyone who can be expected to have specialist or privileged knowledge on the topic in question. All aspects of the source are open to consideration, not only established social proxies for expertise such as degrees and titles. The background to this is, however, essentially dialectical; it assumes that two disputants are influenced by differing expert opinions and set out together to test which is to be believed. There is no reason, however, why one party cannot play both roles, since the process is a cooperative and collaborative one. The idea is not to decide whether a particular expert is trustworthy or not, merely to decide which of the available experts sits higher in the hierarchy upon which we all agree. If there were to exist any evidence of a record of fraud or deceit on the part of one candidate, then that source would fall in the hierarchy, but there is no requirement on either party to produce any evidence about the personal rectitude of the experts they assess. What matters is that they reach what is known in the pragma-dialectical approach to argument as ‘intersubjective validity’ (van Eemeren and Grootendorst 2004), that is, that all parties to the discussion are happy with the evidence used and the argument forms in which it is applied.

This process leads to an argument which can be schematized thus:

1. Source A attests that proposition X is true in field F.

2. Source A is higher in the epistemological hierarchy for field F than any other available source.

3. The testimony of the highest available source in the epistemological hierarchy should be accepted.

Therefore, I should accept that X is true.

The difference when the evaluation of expert sources is seen from the dialogical perspective is brought out in the paper where this procedure was first aired:

‘disputants cannot be expected to form a perfect epistemic hierarchy and thus solve all their problems. The purpose of the ordering process is to give their debate a direction in which to go, […] they set out to position sources relative to one another in order to guide them to the view which they should accept’ (Hinton 2018b, 89)

Rather than cheerleading for one expert or another, disputants, or bewildered individuals, are encouraged to work on building a hierarchy. This will involve putting forward arguments as to why one source deserves to rank higher than another, with the emphasis on convincing others and reaching an agreement acceptable to all. Of course, the types of arguments employed and the types of evidence offered will revolve around similar criteria to those Anderson puts forward, and, in that, such work is useful, but by making the process itself essentially argumentative we move away from the idea that the answers to questions of integrity, or responsibility, and so on, are out there, waiting to be discovered with the help of Google, and towards a situation where those with faith in differing sources are pushed to justify their favourite, not in terms of what degrees she has, but in terms of how she compares to another source who perhaps has a different opinion.


Informal logic is an unusually public-spirited branch of philosophy. It was born out of the dissatisfaction of teachers of logic, rather than theoreticians—though they are often the same people—a dissatisfaction stemming from the inability of formal logic to engage with and explain the realities of arguing and reasoning in our lives (Johnson and Blair 2000). Informal logic, then, is, at least in its beginnings and still in part, a kind of social, public-facing philosophy, which aims to influence the reasoning of individuals and societies for the better. One way of doing this is through the teaching of critical thinking; another is via projects such as APPLY, the European Network for Argumentation and Public Policy Analysis, which aims to help ‘citizens understand, evaluate and contribute to public decision-making’ (APPLY 2020). The premise of such activity is that people need to be taught to reason better, but also that it is possible to teach them. This is important work: as Johnson and Blair famously put it, ‘People who decide for themselves what to think are regarded as more fully realized human beings than are people who accept unquestioningly what others say’ (1994, 167).

Undoubtedly, one of the skills we all need is the ability to deal with what purports to be expert testimony, and in particular, with conflicting testimonies. Since it is so very difficult to judge both the expertise and the sincerity of experts, it may be more profitable to teach citizens to judge the arguments which experts make on the basis of their expertise and those which others make by co-opting such statements in support of one or another standpoint.

Along with learning about argument forms, about evidence and inference strength, and about common errors in argumentation—the fallacies—it seems advisable to teach them to take a meta-stance towards their own decision making. In this way, elements from argumentation theory, social epistemology, and, indeed, cognitive science can be brought together to help. The question is how this kind of teaching can be made more available to the public. It is not enough to introduce such concepts only at university level, that is too little and too late. As Brennan states: ‘Greater emphasis ought to be paid to the benefits of developing metacognition. The earlier and more thoroughly we habituate our children in these practices, the better inquirers they are likely to be’ (236), a sentiment with which I whole-heartedly concur.

Contact details: Martin Hinton, University of Łódź,

References
Anderson, Elizabeth. 2011. “Democracy, Public Policy, and Lay Assessments of Scientific Testimony.” Episteme 8 (2): 144-164.

APPLY. 2020. European Network for Argumentation and Public Policy Analysis.

Brennan, Johnny. 2020. “Can Novices Trust Themselves to Choose Trustworthy Experts? Reasons for (Reserved) Optimism.” Social Epistemology 34 (3): 227-240.

van Eemeren, Frans and Rob Grootendorst. 2004. A Systematic Theory of Argumentation. Cambridge: Cambridge University Press.

Goodwin, Jean. 2011. “Accounting for the Appeal to the Authority of Experts.” Argumentation 25 (3): 285-296.

Hinton, Martin. 2018a. “On Arguments from Ignorance.” Informal Logic 38 (2): 184-212.

Hinton, Martin. 2018b. “Overcoming Disagreement Through Ordering: Building an Epistemic Hierarchy.” Studies in Logic, Grammar and Rhetoric 55 (1): 77-91.

Johnson, Ralph H. and J. Anthony Blair. 1994. Logical Self-Defense. New York: McGraw-Hill.

Johnson, Ralph H. and J. Anthony Blair. 2000. “Informal Logic: An Overview.” Informal Logic 20 (2): 93-107.

Shanteau, James, David Weiss, Rickey Thomas, and Julia Pounds. 2002. “Performance-Based Assessment of Expertise: How to Decide If Someone Is an Expert or Not.” European Journal of Operational Research 136 (2): 253-263.

Walton, Douglas. 1997. Appeal to Expert Opinion. University Park, Pennsylvania: Penn State Press.

Walton, Douglas, Chris Reed, and Fabrizio Macagno. 2008. Argumentation Schemes. New York: Cambridge University Press.

[1] All citations of Brennan refer to this work. Page numbers are given in brackets.

[2] There are many other approaches to this, of course, but Brennan’s work focusses on Anderson.

[3] These CQs featured in a number of works before being greatly expanded upon in Walton et al. 2008. The expansion, however, consists of sub-questions to those given here.

[4] See Shanteau et al. 2002 for an interesting take on judgements of practical expertise.
