Author Information: Mark Alfano, University of Oregon, email@example.com
Alfano, Mark. “Becoming Less Unreasonable: A Reply to Sherman.” Social Epistemology Review and Reply Collective 4, no. 7 (2015): 59-62.
Please refer to:
- Sherman, Benjamin R. “There’s No (Testimonial) Justice: Why Pursuit of a Virtue is Not the Solution to Epistemic Injustice.” Social Epistemology (2015): 1-22. doi:10.1080/02691728.2015.1031852.
“I’m the most reasonable, responsible person here in Washington.”
That’s what John Boehner, the Republican Speaker of the House of Representatives, said in an interview with ABC News on November 9th, 2012. Whether you agree with Boehner or not, you might worry about anyone who endorses such a claim about themselves. No one is perfect, after all, and it’s likely that thinking of yourself as reasonable and fair in your opinions makes it harder to recognize and correct your own mistakes. In “There’s No (Testimonial) Justice” (2015), Benjamin R. Sherman raises a related concern about the pursuit of epistemic justice.
Miranda Fricker coined the term “epistemic injustice” to name “a distinctively epistemic kind of injustice […] a wrong done to someone specifically in their capacity as a knower” (2007, 1). One such wrong is testimonial injustice, which occurs “when a prejudice causes a hearer to give a deflated level of credibility to a speaker’s word” (1). Fricker argues that, aside from the instrumental harm this can cause, by excluding people from “trustful conversation” (53) testimonial injustice harms them as givers of knowledge and hence as members of the epistemic community. Systematically distrusting someone is a way of casting them out of the group.
Naturally, many people would be dismayed to learn that they have participated in such exclusion. Fricker recommends the cultivation of corrective testimonial justice as an antidote. Someone embodies the ideal of naïve testimonial justice if they simply have no prejudices. Perhaps such innocents exist, but emerging evidence in developmental psychology suggests that pre-verbal infants become worse at distinguishing the faces and emotions of members of another race between 5 and 9 months of age (Vogel et al. 2012). If we end up with biases before we even learn to talk, the prospects for preserving naïve testimonial justice are bleak. Instead, one can follow Fricker’s advice to resolve to revise upward the credence one lends to members of stigmatized groups. As she puts it, “The guiding ideal is to neutralize any negative impact of prejudice in one’s credibility judgments by compensating upwards to reach the degree of credibility that would have been given were it not for the prejudice” (91-2).
This is where Sherman’s concern arises. While it would be good to achieve corrective testimonial justice, he argues that efforts in this direction are likely to fail or even backfire. The problem, in a nutshell, is that we’re all like John Boehner. We always think that our opinions are reasonable and that the credence we lend to others is appropriate. If you thought your credence in someone’s testimony was too low, you would already have revised it.
Of course, someone who explicitly endorses a biased view can revise that opinion, but many biases are implicit. Someone embodies a bias for Xs to the extent that she favors Xs in virtue of their being Xs; someone embodies a bias against Ys to the extent that she disfavors Ys in virtue of their being Ys. An explicit bias is one that the biased individual has some introspective awareness of, whereas an implicit bias is inaccessible to the biased individual’s consciousness. Someone who endorses the claim that women are less competent than men exhibits an explicit bias against women; someone who rejects this claim but nevertheless unknowingly associates competence more closely with men than with women embodies an implicit bias. The implicit/explicit distinction cuts across the distinction between biases for and against, and both sorts of bias come in degrees. For instance, someone could have a strong implicit bias in favor of members of one group despite a weak explicit bias against them, and someone could have a weak implicit bias in favor of members of one group while also harboring a strong explicit bias for them.
Someone who explicitly endorses anti-racist and feminist attitudes might nevertheless harbor implicit biases against racial minorities and women. By definition, such biases are not easily introspected. A sufficiently thoughtful and informed person might suspect themselves of having such biases, and there are now tests of implicit bias freely available online at www.projectimplicit.com and elsewhere. Sherman’s concern is that, even if someone adopted a policy aimed at reinflating the credence he assigned to members of stigmatized groups, he would not be in a position to recognize when to implement the policy or how much to adjust his credence. Indeed, the impressive literature on anchoring and adjustment (Epley and Gilovich 2006) suggests that under-correction is more or less guaranteed.
Within the virtue epistemology paradigm, it’s possible to respond to this problem by emphasizing epistemic humility rather than corrective testimonial justice. Sherman (10) is amenable to this suggestion, but I worry that epistemic humility may be even more difficult to cultivate than epistemic justice. Intentionally tracking how humble you are seems like a pretty bad way to become humble.
In the remainder of this post, I want to focus on two ways for virtue epistemologists to respond to Sherman’s paper: negative role models and going social (and distal). Sherman points out that a traditional tool of character development recommended by virtue theorists is the role model—someone the aspirant to virtue admires and tries to copy in thought, feeling, and action. Sherman argues persuasively that we are likely to choose role models who are epistemically unjust in exactly the same ways we are. Since I consider myself reasonable, I probably won’t consider someone very different from me a paragon of epistemic virtue. As Sherman (18) argues—and as Cassam (forthcoming) explores at greater length—it may be more effective to focus on avoiding vice than to try to cultivate virtue. But what good is a role model then? A negative role model for X is someone who is similar enough to X in important respects, whom X admires, but who also exemplifies vices to which X is vulnerable. X can empathize with his negative role model more easily than with a positive one, and X can use that empathic connection to better understand and flag moments when he’s susceptible to vice. One further benefit of using negative rather than positive role models is that there are so many more of them, and we tend to know them better than alleged positive role models. Asking yourself, “What would Jesus do?” may not be much help, but asking yourself, “What mistake would my dad make?” may be a useful corrective.
Negative role models, if they work, point to my other suggestion. In my own research (Alfano 2013, 2015a, 2015b, forthcoming a, forthcoming b), I argue that the individualism and independence presupposed by virtue theoretic approaches to ethics and epistemology are empirically untenable. Whether we like it or not, each of us relies on other people in hugely complex ways. Maybe what it means to be a virtuous person, given this, is to be appropriately disposed and suitably integrated into a material environment and a social milieu. If this suggestion is on the right track, then promoting epistemic justice could be achieved not only by changing the individual agent but also by ameliorating the material or social context.
Getting a friend to confront me when I might be acting in a biased way seems to help (Czopp et al. 2006), as does my confronting others when they seem to be biased (Rasinski et al. 2013). As McGeer (forthcoming) argues, being a member of the moral community seems to involve both being a valid target for such acts of holding responsible and being situated to hold others responsible oneself. Going to the trouble to confront bias—whether explicit or implicit—is a way of demonstrating one’s own commitment to norms of fairness, and therefore a way of putting oneself on the hook to be held responsible by others and oneself.
People may not be able to control their biases in the moment, both because they are hard to detect and because constantly exercising vigilance about one’s biases is cognitively exhausting. Another, more tractable, notion of control in the context of overcoming implicit bias is Clark’s (2007) notion of ecological control. Instead of changing myself (narrowly conceived), I can take control by selecting or designing my environment.
Research into the controllability of implicit biases is still at an early stage, but there are already some useful suggestions available. For instance, interventions that have shown some promise of mitigating implicit bias in a longitudinal study (Devine et al. 2012) include:
- stereotype replacement, a proximal, higher-order strategy of recognizing, labeling, and replacing an initially negative stereotype activation;
- counter-stereotypic imaging, a distal, higher-order strategy of dwelling on real or imaginary counter-stereotypic exemplars;
- individuation, a wide-ranging strategy that involves seeking and obtaining specific information about members of stereotyped groups, in order to recognize differences among them;
- perspective taking, another wide-ranging strategy that involves imagining oneself into the shoes of a member of a stereotyped group, thereby reducing psychological distance; and
- increasing opportunities for contact, a distal strategy that puts one in a position to individuate members of the stereotyped group and have positive interactions with them.
Alfano, Mark. Character as Moral Fiction. Cambridge, UK: Cambridge University Press, 2013.
Alfano, Mark. “Ramsifying Virtue Theory.” In Current Controversies in Virtue Theory, edited by Mark Alfano, 124-135. New York: Routledge, 2015a.
Alfano, Mark. “Friendship and the Structure of Trust.” In From Personality to Virtue: Essays on the Philosophy of Character, edited by Alberto Masala and Jonathan Webber. Oxford, UK: Oxford University Press, 2015b.
Alfano, Mark. Moral Psychology: An Introduction. Cambridge, UK: Polity, forthcoming a.
Alfano, Mark. “Epistemic Situationism: An Extended Prolepsis.” In Epistemic Situationism, edited by Abrol Fairweather and Mark Alfano. Oxford, UK: Oxford University Press, forthcoming b.
Cassam, Quassim. “Vice Epistemology.” The Monist, forthcoming.
Clark, Andy. “Soft Selves and Ecological Control.” In Distributed Cognition and the Will, edited by Don Ross, David Spurrett, Harold Kincaid and G. Lynn Stephens, 101–22. Cambridge, MA: MIT Press, 2007.
Czopp, Alexander M., Margo J. Monteith, and Aimee Y. Mark. “Standing Up for Change: Reducing Bias Through Interpersonal Confrontation.” Journal of Personality and Social Psychology 90, no. 5 (2006): 784–803.
Devine, Patricia G., Patrick S. Forscher, Anthony J. Austin, and William T. L. Cox. “Long-Term Reduction in Implicit Bias: A Prejudice Habit-Breaking Intervention.” Journal of Experimental Social Psychology 48, no. 6 (2012): 1267–78.
Epley, Nicholas and Thomas Gilovich. “The Anchoring-and-Adjustment Heuristic: Why the Adjustments Are Insufficient.” Psychological Science 17, no. 4 (2006): 311-18.
Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. Oxford, UK: Oxford University Press, 2007.
McGeer, Victoria. “Building a Better Theory of Responsibility.” Philosophical Studies, forthcoming.
Rasinski, Heather M., Andrew L. Geers, and Alexander M. Czopp. “‘I Guess What He Said Wasn’t That Bad’: Dissonance in Nonconfronting Targets of Prejudice.” Personality and Social Psychology Bulletin 39, no. 7 (2013): 856–69.
Sherman, Benjamin R. “There’s No (Testimonial) Justice: Why Pursuit of a Virtue is Not the Solution to Epistemic Injustice.” Social Epistemology (2015): 1-22. doi:10.1080/02691728.2015.1031852.
Vogel, Margaret, Alexandra Monesson, and Lisa S. Scott. “Building Biases in Infancy: The Influence of Race on Face and Voice Emotion Matching.” Developmental Science (2012): 1-14.