Hunting the Expert: The Precarious Epistemic Position of a Novice, Jamie Carlin Watson



Article Citation:

Watson, Jamie Carlin. 2020. “Hunting the Expert: The Precarious Epistemic Position of a Novice.” Social Epistemology Review and Reply Collective 9 (4): 51-58.

The PDF of the article gives specific page numbers.

This article replies to:

Brennan, Johnny. 2020. “Can Novices Trust Themselves to Choose Trustworthy Experts? Reasons for Reserved Optimism.” Social Epistemology 34 (3): 227-240.
Johnny Brennan (2020) makes a compelling case for supplementing Elizabeth Anderson’s (2011) criteria for how novices can identify experts with a “meta-cognitive” approach. I review Brennan’s argument for thinking there is a lacuna in Anderson’s criteria and his supplemental strategy, and suggest that Brennan has highlighted an important distinction between external and internal obstacles to identifying experts. While Anderson’s criteria aim to mitigate external obstacles to accurately identifying experts, Brennan’s meta-cognitive approach goes further, aiming to mitigate the internal obstacles. In response, I argue that Brennan’s approach faces its own limitations. First, I argue that novices who would use Brennan’s strategy must be disposed to use their epistemic energies differently than they currently do. And second, I argue that there is a third set of obstacles, which I call “ecological,” that neither Anderson’s nor Brennan’s strategies mitigate. Ecological obstacles are those that stem from relative differences in competence among novices and experts. Different novices will have different evidence at their disposal to assess an expert’s trustworthiness, and different types of expertise require different types of evidence depending on the novice assessing them.

In Lewis Carroll’s poem, “The Hunting of the Snark,” ten adventurers set out to find an elusive, likely dangerous, and possibly mythical, creature called a “Snark.” They plot their course with a map that shows only ocean—no land—and their captain enumerates five criteria that have been passed down to him by which to identify the Snark. And yet, they discover that anyone who actually finds a Snark becomes just as elusory as the Snark itself.

Few bits of literature capture so succinctly the current scholarship on identifying experts. We start with a set of ideals that we think real experts should exemplify: a strong fund of true beliefs in their domain, superb integrity, and immunity to the vicissitudes and limitations of human psychology (see Goldman 2001; Fricker 2006; Coady 2012). In reality, such a map is functionally blank: few experts achieve all of these ideals to any high degree; everyone (not just novices) finds it hard to judge character and psychological habits; and, even if we were fairly adequate at doing so, the would-be experts who fall short of one or more ideals can easily cloak themselves in habiliments indistinguishable from real expertise.

So we turn our attention to land that is not on our map, that is, something in the vicinity of expertise, indirect criteria that might point us in the right direction—credentials, track records, confidence, consensus statements, plain, accessible language, admitting uncertainty and failures, and so on (see Goldman 2001 and Scholz 2009). These are often useful for finding some kinds of experts, such as accountants, lawyers, carpenters, family care doctors, and so on. But in those cases, the stakes are usually low. And we don’t always get it right. We sometimes choose frauds or people who find their way into an expert-level job despite their incompetence. When that happens, we pay the fine, live with a shoddy carpentry job, get a second opinion, and try harder next time. But as the issues for which we need expert ability or advice become more uncertain, more controversial, more political—as in the case of new, rare, or widespread diseases—indirect criteria become increasingly unreliable.

There is one fairly sure way to figure out who is an expert in a domain: Become an expert in that domain yourself. Of course, even if you had the time, interest, and resources, this would only help you in that one domain. Presumably your need for experts spans many domains. Further, becoming an expert in that domain would not help the novices you leave behind. They could no more tell whether you were a real expert, no matter how much you might plead with them, than they could identify other experts before you joined their austere ranks. You will have disappeared with the Snark.

Positioning the Novice

Against this dismal background, Johnny Brennan (2020) further elucidates how complicated the novice’s epistemic position is. He opens by explaining Elizabeth Anderson’s (2011) explanation of how novices can identify experts. Anderson argues that any adult with an ordinary education—a high school diploma and basic skill in navigating the internet—can assess enough information about a putative expert to make a judgment about whether they are worth trusting as an expert. This information, she contends, is the putative expert’s credentials, honesty, and epistemic responsibility, and whether the expert’s claims are consistent with the consensus among experts in that domain. Though this is a bit oversimplified, if a putative expert has high credentials in their domain (a terminal degree or relevant certification), if there is no evidence that they are guilty of intellectual dishonesty (in the form of plagiarism, fraud, conflicts of interest, etc.), if they don’t engage in dialogic irrationality (that is, they don’t perpetuate widely discredited beliefs), and if their views roughly align with the consensus of other experts in their domain, the novice has good grounds to trust that person as an expert.

Despite lauding Anderson’s strategy as “the best way forward,” Brennan notes some significant limitations. He argues that Anderson’s criteria are best suited for novices who already have the wrong opinion on a topic. If they have the right opinion, then, presumably, they are already trusting people who are trustworthy. But if they hold the wrong opinion, they are not likely to be able to use her criteria effectively. This is partly because domains of expertise are much more complicated than Anderson’s strategy makes them seem and partly because novices who hold the wrong views face significant cognitive obstacles to changing their minds.

Domains of expertise are more complicated, Brennan explains, for at least three reasons.

First, expert testimony takes place against a rich background of information, assumptions, and methods of inquiry. Whether a claim makes sense from an expert’s perspective depends on their understanding of this background, which is the “kind that one only gets by being immersed in a culture” (Brennan 2020, 230; see also Collins and Evans 2007 on “linguistic immersion”). Since novices are not privy to this insider language and understanding, it is not clear how an internet search could help them.

Second, whether a putative expert is “on the outs” with others in her domain would not necessarily be obvious to a novice. Since the language of disagreement among specialists is largely technical, a novice couldn’t tell whether any disagreements were minor or major (and what seems like a major disagreement to scholars might matter very little in the broad scheme of a domain). The novice might wrongly think a putative expert is not an expert when, in reality, other experts are taking their arguments seriously as those of a fellow scholar. Further, if the putative expert really has disgraced themselves, for example, because of some intellectual dishonesty or an idiosyncratic view of how to interpret a body of evidence, this is not necessarily something that would be presented on the internet as such. Real experts might simply shut them out quietly, by not publishing their work in respected journals and not giving them speaking time at conferences.

Third, novices would not necessarily know whether an expert is speaking outside their domain of expertise. For example, someone can be a scientist in one area but spend a good deal of time talking about a different area of science, or, as in the case of biologist Richard Dawkins, speak outside of science altogether to comment on religion and politics. The air of authority exuded by some experts allows them to “look competent and sincere” (Brennan 2020, 231, emphasis his) irrespective of whether they are.

In addition to the difficulties in simply understanding how well respected a putative expert is in a domain, novices who hold the wrong views face obstacles in their own belief-forming processes that militate against getting evidence that would dispel false beliefs. First, if they make an effort to really think about what an expert is and whether they are believing the right people (as opposed to just continuing to trust the people they’ve always trusted), they are beset by “cultural cognition,” such that they are likely to choose putative experts who share their wrong beliefs or avoid putative experts who would challenge their wrong beliefs. And second, even if they overcome cultural cognition enough to pay attention to dissenting voices, cognitive biases like the Dunning-Kruger effect and the bias blind spot operate to undermine their ability to accurately assess the claims of putative experts who disagree with them.

Ultimately, Brennan concludes that a novice who holds wrong beliefs can use all of Anderson’s proposed strategies for identifying experts “without employing tactics that would increase the chances of their success” (Brennan 2020, 233). What, then, is a Snark hunter with a bad map and an insufficient set of criteria to do?

Strategies for the Novice

Brennan draws on Karen Jones’s (2002) work to offer a two-pronged meta-strategy.

First, novices should keep a track record of when they trust some putative experts and distrust others. Then, when a putative expert they have trusted makes an astonishing claim, they can look back on their pattern of trust and distrust. If they recognize a reason to think their trust was based on an irrelevant factor, that should cause them to be suspicious of trusting this particular astonishing claim.

Second, novices can set themselves tasks before trusting a putative expert on a topic. For instance, they could make themselves give explicit reasons for choosing one expert over another. In simply laying out their reasons, novices expose them to possible criticism. Even if their reasoning is tainted by bias, simply expressing their reasons makes them available for later scrutiny or introduces them into discussions with people who may be able to dispel those reasons to the novice’s satisfaction. They could also flag feelings of overconfidence or certainty, recognizing that these feelings can be the result of bias. By taking special note of beliefs that cause such emotional reactions, novices can consciously reduce the degree to which they hold those beliefs. Recognizing that a belief may be the result of bias again opens the door to later criticism.

Ideally, at some point, this meta-cognitive strategy will produce enough counterevidence to disabuse novices of their wrong beliefs. Of course, Brennan notes that this is largely aspirational. This is why he is cautiously optimistic about novices’ chances of identifying real experts. It requires a great deal of intellectual humility and other epistemic agents who are not afraid to disagree and push back on potentially false beliefs.

Mapping the Journey

Essentially, Brennan would like to teach Snark hunters how to draw a more accurate map while they’re on their journey. The meta-task of keeping a track record of one’s beliefs and reasons serves as a baseline, landmarks that can be referred back to when more information is gathered. The more landmarks one has, the easier it is to triangulate among them and, with adjustments here and there, ultimately get the proportions right—in this case, the right degree of belief in the right people.

To sum up, Elizabeth Anderson rightly notes that novices face external obstacles to assessing whether someone is an expert—identifying whether someone is likely to have a trustworthy view on an issue in their domain. And, as Brennan rightly explains, simply mitigating those obstacles is not enough. Novices also face internal obstacles in the form of gaps in necessary knowledge, cultural cognition, and cognitive biases. Therefore, in addition to overcoming well-known external obstacles, novices need to overcome these internal obstacles by drawing a better map on the fly.

Where does this leave our Snark hunters? Unfortunately, I think two important problems have yet to be addressed.

First, I think both Anderson and Brennan are overly optimistic about novices’ motivation to do their due diligence in finding the right experts. Brennan’s strategies for drawing a better map are, to my mind, certainly on the right track. I think back to when I was young, with a six-day creationist/anti-evolution worldview. I overcame those beliefs through a (much too) long process of piecemeal triangulating against new information, new arguments, and new people I came to trust for independent reasons. But crucial to that process was that (a) I really wanted to know whether I was right about such things, so I was willing to test my beliefs, and (b) I was convinced that paying careful attention to people who disagreed with me was the way to find out. Brennan acknowledges the intellectual humility needed for his meta-cognitive strategy, but I think his cautious optimism is warranted only if novices with the wrong beliefs exhibit both (a) and (b). It is worth pointing out that the Snark hunters in Carroll’s poem looked at the blank map and were “much pleased when they found it to be a map they could all understand.” If you think your current map is accurate, it is unlikely that you’ll spend time evaluating others. To be sure, like the traits required to engage in Brennan’s meta-cognitive approach, these traits can be cultivated. But I am not sure what strategy—apart from being raised with a certain set of intellectual values—might be prescribed to help novices cultivate them.

Second, in addition to internal and external obstacles, I think there is a third set of obstacles created by differences in types of expert domains and varying degrees of expert authority. I will call these obstacles “ecological” because they refer to the various conditions under which experts live and operate. Ecological factors affect whether a novice is justified in regarding an expert as trustworthy, even if they are a real expert. Consider just two distinctions (among several) that can influence a novice’s justification for trusting an expert.

Justifying Trust

The first distinction is between low performance ceiling domains and high performance ceiling domains (identified by psychologists Camerer and Johnson 1991). Expert domains differ in the degree to which expertise can be attained. Consider the expertise required to be a carpenter. One invests time into learning the basic concepts, materials, and skills. This may take years, but once one has acquired them, there is little else to develop. There is little variation in how well expert carpenters pour a foundation or set rafters. In other words, there is an upper limit to carpentry expertise; it has a relatively low performance ceiling. Playing the violin, on the other hand, is a high performance ceiling domain. The time and effort needed to become an expert is extensive, and there is no upper limit to how good one can be. There are always new techniques to perfect, new music to excel at or write.

If novices do not recognize the differences in performance ceilings, they are likely to view all expert authority alike—either substantial or insignificant. Consider the research that suggests that licensed psychologists have no better results from counseling than minimally trained counselors. Based on this evidence, one might assume that psychologists are not experts—since “non-experts” can do just as well as they can. However, a different conclusion may be that “counseling expertise” has a low performance ceiling. While it is a skill that requires training, it may not require Ph.D.-level training. Further, this does not mean that psychologists do not have expertise in aspects of their domain aside from counseling, such as human development, psychological testing, non-counseling therapies, or psychological research. It would be important for novices to know whether they could trust, for example, a clinical social worker with counseling training as much as a psychologist with a Ph.D. But they would need to know more about that domain of expertise to be able to decide.

A second distinction that can impact a novice’s justification for expert trustworthiness is between experts who operate close to a novice’s understanding of the world—what philosopher Thi Nguyen (2018) calls their “cognitive mainland”—and experts who operate on what Nguyen calls a “cognitive island.” A novice’s cognitive mainland is the competence and skill that novices have and that they use to make sense of the world around them. For example, most novices understand enough about what landscape designers do to assess their competence. They can usually find reviews of their work online. They can even go look at some of their work for themselves. Even if they don’t know much about horticulture, they know whether a yard looks nice. Landscaping expertise is close to most novices’ cognitive mainland.

But novices vary widely. A novice in one domain can be an expert in another. So, expertise close to one novice’s cognitive mainland can be farther away from another’s. What mortgage brokers do is not as close to most of us as what landscapers do; it is farther out to sea, as it were. First-time home buyers need a lot of time to learn the language associated with the mortgage industry and what it means for them. A real estate agent, on the other hand, will not find the mortgage broker’s expertise so far from their own. They already have most of the concepts and skills needed to assess a mortgage broker’s authority. The farther an expert domain is from a novice’s mainland, the more likely it sits on what Nguyen calls a “cognitive island,” isolated from resources that would let novices make sense of its practitioners’ abilities and authority. For most of us, the expertise of a particle physicist or astrobiologist is on a cognitive island.

If novices do not realize how far from their mainland an expert is, they may have difficulty assessing an expert’s trustworthiness. They may presume they can assess it when they lack the tools to do so. Or they may assume it is too far from their reach when they could use some simple strategies for mitigating the distance. One such strategy is to use other experts in closely related fields to help them make sense of it. If they choose experts close enough to their own mainland and find them trustworthy, those experts can then help them judge the trustworthiness of experts who are farther from the novice but closer to them. In the case of mortgages, for example, novices might have a friend who works in real estate, or someone in banking, help translate the relevant bits in a way that meets their needs. But, again, novices must have a sense of this sort of difference in epistemic geography to know whether they need meta-experts.

Other distinctions make a difference, too, though space is too limited to develop them. For example, there is the distinction between types of expertise cultivated in what psychologist Robin Hogarth (2010) calls kind learning environments and those cultivated in what he calls wicked learning environments, the distinction between expertise on topics that are controversial and expertise on those that are not (see Matheson 2015, 128-131), and the distinction between expert advice and expert testimony (see Wiland 2018).

Framing Epistemic Trust

The vexing thing about this discussion is that, despite the seemingly innumerable obstacles to accurately identifying and trusting experts, many people do it passably well. Experts from various domains come together to build airplanes and skyscrapers and medical technologies. Though an expert in avionics may be a novice in materials physics, and vice versa, they nevertheless coordinate their expertise to build a plane that works. And novices justifiably rely on all these technologies to meet their needs. The interesting question is how they do it in an epistemically responsible way. I think a key for expertise studies going forward will be framing epistemic trust in experts in non-traditional ways. Rather than treating all novices alike—as if they all knew nothing—or all experts alike—as if they were uniformly authoritative—a richer picture of the overlap in domains must be developed, along with how competence in those areas of overlap can be used to assess expertise far beyond simple indicators like education, credentials, and experience.

For the Snark’s a peculiar creature, that wo’n’t
Be caught in a commonplace way,
Do all that you know, and try all that you don’t;
Not a chance must be wasted to-day!

Contact details: Jamie Carlin Watson, University of Arkansas for Medical Sciences,


References

Anderson, Elizabeth. 2011. “Democracy, Public Policy, and Lay Assessments of Scientific Testimony.” Episteme 8 (2): 144-164.

Brennan, Johnny. 2020. “Can Novices Trust Themselves to Choose Trustworthy Experts? Reasons for Reserved Optimism.” Social Epistemology 34 (3): 227-240.

Camerer, Colin F. and Eric J. Johnson. 1991. “The Process-Performance Paradox in Expert Judgment: How Can Experts Know so Much and Predict so Badly?” In Toward a General Theory of Expertise edited by K. Anders Ericsson and Jacqui Smith, 195-217. Cambridge, UK: Cambridge University Press.

Coady, David. 2012. What to Believe Now: Applying Epistemology to Contemporary Issues. Malden, MA: Wiley-Blackwell.

Collins, Harry and Robert Evans. 2007. Rethinking Expertise. Chicago: University of Chicago Press.

Fricker, Elizabeth. 2006. “Testimony and Epistemic Autonomy.” In The Epistemology of Testimony edited by Jennifer Lackey and Ernest Sosa, 225-250. Oxford, UK: Oxford University Press.

Goldman, Alvin. 2001. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63 (1): 85-109.

Hogarth, Robin. 2010. “Intuition: A Challenge for Psychological Research on Decision Making.” Psychological Inquiry 21 (4): 338-353.

Jones, Karen. 2002. “The Politics of Credibility.” In A Mind of One’s Own: Feminist Essays on Reason and Objectivity, 2nd ed., edited by L. M. Antony and C. E. Witt, 154-176. Boulder, CO: Westview Press.

Matheson, Jonathan. 2015. The Epistemic Significance of Disagreement. New York: Palgrave Macmillan.

Nguyen, C. Thi. 2018. “Cognitive Islands and Runaway Echo Chambers: Problems for Epistemic Dependence on Experts.” Synthese 1-19.

Scholz, Oliver. 2009. “Experts: What They Are and How We Recognize Them—A Discussion of Alvin Goldman’s Views.” In Reliable Knowledge and Social Epistemology: Essays on the Philosophy of Alvin Goldman and Replies by Goldman edited by Gerhard Schurz and Markus Werning, 187-208. Amsterdam: Grazer Philosophische Studien.

Wiland, Eric. 2018. “Moral Advice and Joint Agency.” In Oxford Studies in Normative Ethics, Vol. 8, edited by Mark C. Timmons, 102-123. Oxford, UK: Oxford University Press.
