Brennan, Johnny. 2020. “Finding the Snark Together: A Response to Watson and Hinton.” Social Epistemology Review and Reply Collective 9 (6): 54-59. https://wp.me/p1Bfg0-58a.
I would like to begin by thanking Jamie Watson (2020) and Martin Hinton (2020) for their charitable treatments of my paper (2020) and their illuminating replies. They are right to temper even further my already reserved optimism about novices’ capabilities to reasonably defer to experts. I must also confess: I have no major disagreement with their criticisms. Both authors assert that we have precious little reason to suppose that novices will value good inquiry enough to be motivated to do their due diligence. I think this is unfortunately right. They also both propose that any hope of cutting the Gordian Knot of assessing experts is to be found in emphasizing the ‘social’ in social epistemology. Again, I wholeheartedly agree. So, I would like to use this response as an opportunity to expand on an aspect of my paper that did not get enough attention. Although I could have made this clearer in my original paper, I think their suggestions fit nicely within the program I formulated there.
What’s Wrong with a Reliability Criterion?
Watson likens the expertise problem to Lewis Carroll’s poem “The Hunting of the Snark.” None of the hunters have seen a Snark, its distinguishing characteristics are ambiguous, and the map they use to find it isn’t very good—although they all think it’s perfectly clear. What’s more, the only crew member to find a Snark disappears. The poem makes for a stimulating allegory: to find an expert, novices must “disappear” from the land of the layman and become the very thing they are seeking.
Watson is right that, on my view, Snark hunters need to draw a more accurate map (2020, 54). Part of what is involved in drawing a more accurate map is doing what we can to ensure that our representations are cartographically accurate and not skewed by the filter through which we are looking (i.e., my metacognitive reliability criterion). Admittedly, this is awfully hard work. Although Watson does not say so explicitly, it requires that novices become a different kind of expert: not on the Snark, but on their own minds. Why think that novices will be motivated enough to do this hard graft?
Watson uses himself as an example. Despite being raised in a six-day creationist family, he was able to overturn those false beliefs. The process was long and arduous; the two things that carried him through were the high value he put on knowing the truth and his conviction that paying attention to those who disagreed with him was the right path to get there (2020, 55). These are admirable qualities, but not ones that are universal or even widespread among novices. In Watson’s words, “It is worth pointing out that the Snark hunters in Carroll’s poem looked at the blank map and were ‘much pleased when they found it to be a map they could all understand.’ If you think your current map is accurate, it is unlikely that you’ll spend time evaluating others” (2020, 55). It is deeply unsettling to think that we are wrong about the world (as Kant says, the dear self is everywhere we turn). So, we must deeply prize getting things right in order to overcome the—quite natural—avoidance of the disorientation that comes with the realization that we are wrong.
Hinton presses the same point. I conceded that my reliability condition leaves some room for disingenuous rationalization. However, the quantifier “some” indicates a relatively small amount of wiggle room. Hinton thinks the opportunity for self-deceptive inquiry is quite high: “it’s questionable how much anyone cares, and it’s certainly questionable what social goods come with being a good inquirer—one might well argue that honest inquiry and epistemic autonomy are likely to have a high social price, especially if they lead us to unpopular or unconventional conclusion” (2020, 68). As an aside, I think the social price would be lower if we collectively valued accuracy about the world more and stopped finding personal shame in being wrong. But this just underscores the broader point: we cannot succeed by changing beliefs; we have to change values. This is something we all agree on (including Anderson), and unfortunately changing values gets harder the older we get. Is this reason for optimism or pessimism? That is likely a matter of perspective.
In addition to the motivational problem, Watson adds another category of obstacles that novices face—what he calls “ecological” obstacles, referring to the varied conditions under which experts operate (2020, 55). Watson outlines two. First, different domains of expertise have different “ceilings,” some low and others high. Domains with low ceilings have a finite set of basic concepts and skills: in domains like landscaping or accounting, there is only so much you need to master. Domains with high ceilings, on the other hand, seemingly have no upper limit to the new skills to learn or the new directions in which to take them: in music or neuroscience, the possibilities for innovation are endless. Without grasping this distinction, novices run the risk of viewing all expertise as either substantial or insignificant (Watson 2020, 55). They are likely to make the mistake of seeing low-ceiling domains as not actually requiring expertise (since non-experts could perform just as well, or nearly as well, as experts). Expanding on Watson’s analysis, novices might also mistake high-ceiling domains for low-ceiling domains—again putting experts and non-experts on the same level because they do not recognize the difficulty of the domain (the “my opinion is as good as your knowledge” problem).
This leads seamlessly into the second ecological obstacle novices confront: there are varying distances between novices’ and experts’ understandings of the world. Drawing on the work of C. Thi Nguyen (2018), Watson argues that each novice inhabits a cognitive “mainland,” the set of competencies and skills they possess. Experts who reside close to a novice’s cognitive mainland are more easily assessed. But experts who are far away from that mainland—who inhabit cognitive islands—are harder to assess because they are “isolated from resources that would let novices make sense of their abilities and authorities” (Watson 2020, 56). The broader point is that both novices and experts vary widely in their competencies, and this makes the problem of how novices can assess and reasonably defer to experts much thornier.
Finding the Snark Together
Snark hunters do need a more accurate map. My misstep was implying, through disproportionate attention, that drawing a more accurate map requires only a self-focused look at how the map is clouded by the lenses through which we look. It is going to require a communal, jigsaw-puzzle approach. I agree with Hinton that we should make the process of identifying trustworthy experts collaborative rather than competitive. The problem with constructing a hierarchy of expertise on one’s own is that the hierarchy gets tied up with one’s ego and leads to an attitude of “cheerleading” for one’s preferred expert:
By making the process itself essentially argumentative we move away from the idea that the answers to questions of integrity, or responsibility, and so on, are out there, waiting to be discovered with the help of Google, and towards a situation where those with faith in differing sources are pushed to justify their favourite, not in terms of what degrees she has, but in terms of how she compared to another source who perhaps has a different opinion (Hinton 2020, 69).
Instead, Hinton suggests that we ought to construct a hierarchy of expertise dialogically, in cooperative conversation with other inquirers. I also agree with Watson that we need a richer picture of how the diverse competencies of experts and novices overlap—it is not the case that all novices are equally ignorant, nor is it the case that all experts are equally authoritative (Watson 2020, 57).
I’d like to push back gently on Watson’s metaphor of the elusive Snark as expert. I think the expertise problem resembles less the hunting of the Snark and more the ancient Indian parable of the blind men and the elephant. As the story goes, six blind men approach an elephant in order to discover by touch what it must be like. One touches its side and exclaims that it is like a wall; another touches its tusk and exclaims it is like a spear; a third thinks it is like a snake after touching its trunk; and so on. According to one of the most famous retellings, by the poet John Godfrey Saxe (1873), the blind men began to argue after reporting what they felt:
And so these men of Indostan
Disputed loud and long,
Each in his own opinion
Exceeding stiff and strong.
Though each was partly in the right,
They all were in the wrong!
In some versions of the parable, the blind men even come to blows because each assumes the others are lying about what they felt.
I think this story is a good allegory for the expertise problem for two reasons. First, it highlights the need for more humility in our investigations. Rather than assuming that our experience of the world is correct and that experiences diverging from ours are wrong (a psychological phenomenon known as naïve realism), we should remind ourselves that our perspectives are likely incomplete. What’s more, increased intellectual humility on the part of the blind men would naturally lead to the next step of inquiry: combining their incomplete observations to zero in on a more accurate picture. Rather than the Snark standing as a metaphor for the expert—an entity that one seemingly cannot find without disappearing—an elephant is a better metaphor because it is too large to be observed directly by oneself but can, in theory, be grasped through a collective effort. As novices, we are blind when it comes to investigating the claims of experts. We operate at an investigatory handicap, and a large one at that.
This is the second reason why the parable is a good allegory for the expertise problem: the blindness of the men can be read as a metaphor for trust. Trust, in order to be trust, must be at least partially blind (Hardwig 1991, 693). The blind men must trust their own senses, yes, but they must also trust their compatriots’ senses if they hope to gain a better understanding of something whose whole they cannot grasp by themselves. Hardwig argues convincingly that all knowledge, even that of the expert, requires trust in others. The expert is an expert partly because he often takes the role of the layman within his own field (Hardwig 1985, 346). Not even experts have a complete understanding of every aspect of their experiments, and they must trust their collaborators to add their jigsaw pieces to the overall puzzle. Similarly, novices need to cooperate with and trust other novices to come to a patchwork understanding of who the experts are. If we accept that novices themselves have diverse competencies and inhabit varying cognitive mainlands, then this seems like an attainable goal.
When it comes to reasonably deferring to experts, the standard is not “assess the evidence for the proposition p.” The standard is rather “assess the evidence for the claim that the expert knows what she is talking about.” But even this is an elephant for any individual, metaphorically blind novice. It takes many novices, with varying levels of blindness, looking at various aspects of expertise to piece together the full concept of the elephant. If novices knit together their incomplete grasps of experts, their chances of assessing experts without having to become one themselves increase.
I don’t think the challenges presented by Watson and Hinton undermine my reserved optimism in metacognitive reliability conditions, if only for the following reason: metacognition can itself be a collaborative activity (Hogan 2001). One application of metacognition is the one I highlighted in my paper: a way of ferreting out the possibility that one’s own process of inquiry is not above board. Another application is more intersubjective: noticing how well others do, what their processes are, what their strengths and weaknesses are, and so on. Metacognition involves being aware not only of your own cognitive weaknesses, but also of how others’ strengths can compensate for them.
There is some empirical evidence to support the claim that engaging in collaborative metacognition can improve problem-solving skills, strengthen individual metacognitive skills, and build self-esteem (Goos, Galbraith, and Renshaw 2002; Molenaar, Sleegers, and van Boxtel 2014; Bernard and Bachu 2015; Yarrow and Topping 2001). Collaborative metacognition leverages the ecological differences between novices; to work at maximal efficacy, it needs members with different strengths and competencies, overlaying their cognitive mainlands to form a bridge to faraway islands (Yarrow and Topping 2001; Goos, Galbraith, and Renshaw 2002). This makes collaborative metacognition an essential component of building a successful elephant-discovery team. The ecological as well as the motivational obstacles facing novices can be overcome if they can build the right coalition.
Keeping Looming Pessimism at Bay
Watson’s and Hinton’s replies underscore the importance of having a diverse group of social epistemological partners. If novices only associate with those who hold the same incomplete perspectives, they won’t get any closer to reasonably deferring to experts. And yet, as I’m sure they’d agree, boosting novices’ chances of success by emphasizing the collaborative aspect of assessing expertise does not eradicate pessimism. Their proposed solutions run into the same problem as before: in order to work together to overcome ecological obstacles and construct non-competitive hierarchies of expertise, novices need to value the right things—collaboration, diverse perspectives, open-mindedness, and pursuing the truth. If novices don’t already value these things, the suggestions posed by me, Watson, and Hinton won’t get them to. Worse yet, in our current age of hyper-polarization, the value of diverse viewpoints is persistently undermined. This leads to the implication, which I think we see confirmed around us, that those who place themselves in highly homogeneous epistemic communities, where members are aligned in values and beliefs, have a harder time recognizing expertise that lies further out at sea.
Eradicating the possibility of pessimism is not a realistic goal. We should instead shift our focus away from trying to neutralize the pessimist and toward garnering enough support for optimism. Watson notes, in his own muted optimism, that “despite the seemingly innumerable obstacles to accurately identifying and trusting experts, many people do it passably well” (2020, 56). This echoes Martin Hollis’ sentiment that trust works in practice but not in theory (1998, 1). This is not to say that we should not worry about the problem of expertise; many of us defer to experts passably well, but many of us don’t, and that still calls for a solution. One heartening thought as to why many of us defer passably well is that trust by necessity involves many others. Trust occurs not in single strands but in whole webs (Baier 1996, 149), and perhaps the wider those webs extend and the more numerous their fibers, the less we have to wring our hands about getting the answer to the expertise problem exactly right.
Contact details: Johnny Brennan, Fordham University, email@example.com
Baier, Annette. 1996. “Trust and Its Vulnerabilities.” In Moral Prejudices: Essays on Ethics, edited by Annette Baier, 130–51. Cambridge, MA: Harvard University Press.
Bernard, Margaret, and Eshwar Bachu. 2015. “Enhancing the Metacognitive Skill of Novice Programmers Through Collaborative Learning.” In Metacognition: Fundamentals, Applications, and Trends, Vol. 76, edited by Alejandro Pena-Ayala, 277–98. Springer International Publishing.
Brennan, Johnny. 2020. “Can Novices Trust Themselves to Choose Trustworthy Experts? Reasons for (Reserved) Optimism.” Social Epistemology 34 (3): 227-240.
Goos, Merrilyn, Peter Galbraith, and Peter Renshaw. 2002. “Socially Mediated Metacognition: Creating Collaborative Zones of Proximal Development in Small Group Problem Solving.” Educational Studies in Mathematics 49: 193–223.
Hardwig, John. 1985. “Epistemic Dependence.” The Journal of Philosophy 82 (7): 335–49.
Hardwig, John. 1991. “The Role of Trust in Knowledge.” The Journal of Philosophy 88 (12): 693–708.
Hinton, Martin. 2020. “Can Novices Be Taught to Choose Trustworthy Experts? Optimism for Reasoning—A Reply to Johnny Brennan.” Social Epistemology Review and Reply Collective 9 (4): 65–71.
Hogan, Kathleen. 2001. “Collective Metacognition: The Interplay of Individual, Social, and Cultural Meanings in Small Groups’ Reflective Thinking.” In Advances in Psychology Research, Vol. 7, edited by F. Columbus, 199–239. Nova Science Publishers.
Hollis, Martin. 1998. Trust Within Reason. New York: Cambridge University Press.
Molenaar, Inge, Peter Sleegers, and Carla van Boxtel. 2014. “Metacognitive Scaffolding during Collaborative Learning: A Promising Combination.” Metacognition and Learning 9: 309–32.
Nguyen, C. Thi. 2018. “Cognitive Islands and Runaway Echo Chambers: Problems for Epistemic Dependence on Experts.” Synthese. https://doi.org/10.1007/s11229-018-1692-0.
Saxe, John Godfrey. 1873. “The Blind Men and the Elephant.” CommonLit. Accessed May 15, 2020. https://www.commonlit.org/texts/the-blind-men-and-the-elephant.
Watson, Jamie Carlin. 2020. “Hunting the Expert: The Precarious Epistemic Position of a Novice.” Social Epistemology Review and Reply Collective 9 (4): 51–58.
Yarrow, Fiona, and Keith J. Topping. 2001. “Collaborative Writing: The Effects of Metacognitive Prompting and Structured Peer Interaction.” British Journal of Educational Psychology 71 (2): 261–82.