Author Information: Lacey J. Davidson, Purdue University, davidsl@purdue.edu; Daniel R. Kelly, Purdue University, drkelly@purdue.edu
Davidson, Lacey J. and Daniel R. Kelly. “Intuition, Judgment, and the Space Between: A Reply to Sherman.” Social Epistemology Review and Reply Collective 4, no. 11 (2015): 15-20.
The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2sf
Please refer to:
- Sherman, Benjamin R. “There’s No (Testimonial) Justice: Why Pursuit of a Virtue is Not the Solution to Epistemic Injustice.” Social Epistemology (2015): 1-22. doi:10.1080/02691728.2015.1031852.
- Alfano, Mark. “Becoming Less Unreasonable: A Reply to Sherman.” Social Epistemology Review and Reply Collective 4, no. 7 (2015): 59-62.
- Sherman, Ben. “(Less Un-) Attainable Virtues: A Response to Alfano.” Social Epistemology Review and Reply Collective 4, no. 10 (2015): 14-18.
“And now that you don’t have to be perfect, you can be good.” —John Steinbeck, East of Eden
Sherman (2015) agrees with Fricker (2007) that there is a problem, but disagrees about what to do about it. The problem is epistemic injustice: failing to take others as seriously as they deserve to be taken in matters epistemic, a form of miscalculation whose effects can not only stabilize and strengthen types of injustice that already exist, including inequality, prejudice, and discrimination, but can potentially produce new forms of injustice as well. Fricker sketches an account of testimonial justice, a virtue that, when properly nourished, she claims can serve as a corrective to epistemic injustice. Sherman is skeptical on many counts: that attempts to achieve testimonial justice would help (indeed, he worries that they would actually backfire and hinder the fight against epistemic injustice); that testimonial justice as depicted by Fricker is in fact a virtue; and that the virtue-theoretic framework via which Fricker expresses her notion of testimonial justice is generally viable anyway. Our reply to Sherman is broadly Frickerian in spirit, though the position we take departs from Fricker’s own.
Blind Spots
The core of Sherman’s concern has to do with the kind of blind spot a person is likely to have about her own cognitive shortcomings, and the lack of resources she will have to correct those shortcomings if they are discovered. Throughout the article, Sherman relies on a reasonable epistemic claim: in general, we think our beliefs are fair and accurate; otherwise, we would no longer hold them. Indeed, at one point he invokes Moore’s paradox, which seems to have the unsettling implication that some self-involving epistemic blind spots aren’t just likely but inevitable (on pain of irrationality, anyway); as he notes on page 12, it doesn’t make sense to claim, “I believe that p, but it is likely that p is false.” Sherman uses this point to mount a challenge to Fricker’s proposed remedy of testimonial justice: if we had the capacity to identify when we were making this mistake, we wouldn’t make the mistake in the first place. And if we didn’t make the mistake, we wouldn’t commit epistemic injustice, and so wouldn’t need testimonial justice to correct it.
We feel the force of the concern, but believe the ameliorative outlook is not as dire as Sherman’s criticism suggests. Let us start with a concrete example of our own, one that turns on the distinction between implicit and explicit biases (Sherman 16-17; see Banaji and Greenwald (2013) for a recent overview of the psychological research). Imagine that hearer A has no negative explicit biases against members of group Y and, therefore, no negative explicit biases against speaker B, who is a member of group Y, based on her membership in group Y. Hearer A does, however, have negative implicit biases against speaker B, due to A’s negative implicit biases against members of group Y. When hearer A receives B’s testimony, her implicit biases make it intuitively, pre-reflectively seem to her that B’s credibility is lower than it should be. This intuitive assessment causes A to judge that B’s testimony is less credible than it deserves to be.
Now, let us complicate this picture by imagining that A has learned about the nature of implicit biases, and now knows that it is likely that she holds an implicit bias against B. A knows that her implicit biases are likely to cause her to give an initial credibility assignment to B’s testimony that, despite the pre-reflective, intuitive plausibility with which the assessment initially presents itself, is lower than it deserves, lower than it would be in the absence of implicit biases. A is now in a different situation; her new self-knowledge appears to undercut her initial biased credibility assignment, meaning she must revisit that assessment. Imagine that A does so, and adjusts her final judgment based on characteristics of B such as her expertise and general reliability, along with other factors that A holds should affect the credibility of testimony.
Notice that on this story A’s pre-reflective seeming—her intuition—that B deserves a low credibility judgment persists; such is the way with implicit biases: they are recalcitrant. However, this need not continue to spoil A’s final judgment of B’s credibility. That is, A need not eradicate the implicit bias or the intuition that it produces from her mind in order to arrive at a better final assessment of B. In this case, rather, A mitigates the influence of that implicit bias by correcting what she understands to be its distorting effect on how credible B’s testimony initially seemed to her. Her distorted initial assessment and mitigating corrective together result in a more complicated final credibility judgment, and one that she can stand behind and avow.
Potential Responses
We have a couple of points to make about this case and how Sherman might respond to it. First, A’s final credibility judgment of B changes when she recognizes her mistake; or, more precisely, once she recognizes that in cases involving members of Y her initial assessment is likely to be influenced by factors she finds illegitimate, she recalibrates her judgment accordingly. As we noted above, however, the implicit bias and its illegitimate influence are not thereby eradicated, but persist. In other words, an important part of A’s initial evidence for her final judgment, namely her intuitive assessment that B does not seem to be very credible, remains even in the face of her recognition that it is the result of an illegitimate influence.
Second, this picture points to an important distinction between what we have been calling, variously, the intuition, pre-reflective seeming, or initial credibility assessment, on the one hand, and the final, avowed credibility judgment, on the other. To simplify the discussion, from here on out we’ll speak of the intuition, which, from a personal-level point of view, one is simply given, and the judgment, which is more akin to an event, or a cognitive action that one takes. In our case, since A is aware of implicit biases, she realizes that her intuition about B’s credibility is likely to have been distorted by the influence of her own implicit biases, and so does not take the intuition at face value; rather, her judgment reflects both the intuition and her corrective calibration. Note that this distinction gives us the resources to show why Sherman’s invocation of Moore’s paradox might be oversimplified and misleading. With our distinction in hand, we might say that A’s situation would not be adequately captured by her saying “I believe that B is not credible, but it is likely that B is credible.” The situation is better described as A having the intuition that B is not credible, or its seeming to her that B is not credible, while her judgment is that B is, in fact, credible. (Indeed, many philosophers have drawn distinctions between importantly different senses of “belief” and belief-like states that respect this kind of distinction, e.g. between mere belief and avowal, or credence and judgment; see Dennett 1979, Cohen 1992, McGeer and Pettit 2002, Ismael forthcoming.)
On Determining Accuracy and Bias
This leads to our next pair of points, about difficulties in these cases for determining accuracy and correcting for implicit biases. For instance, in some places Sherman expresses worries that reflective thinking about credibility assessment may fail us, indeed that reflective thinking may not be as reliable as our pre-reflective thinking (9). He points to important differences between credibility cases and otherwise similar ones, such as determining the comparative length of two lines despite the effects of the Müller-Lyer illusion. In our terms, the intuition is that the lines are of different lengths, but once a person knows about the illusion, its distorting effects can be taken into account, resulting in length judgments of greater accuracy. We agree with Sherman that cases like the Müller-Lyer illusion are far easier to diagnose and correct, in no small part because we can know the circumstances in which they are likely to arise, and if pressed we can check for accuracy simply by pulling out a ruler and measuring; there is an agreed-upon standard for measuring and quantifying length, and epistemically that standard is easy to access and apply to particular cases. Sherman rightly worries that credibility assessment has no analogous metric; there is no handy credibility ruler.
So in both cases there is a) a distorting intuition that persists, even in the face of explicit knowledge of its inaccuracy or illegitimate sources, and even if one exerts direct cognitive control in an attempt to make it go away; but in both cases there is also b) space for recalibration between the intuition and the judgment. Our first point here is that the kind of recalibration allowed by the space mentioned in b) can be achieved by a number of different means, and those attempting to address epistemic injustice should avail themselves of the full menu of options. In particular, the more expansive notion of ecological control promises to be especially useful, since it does not rely merely on the kind of in-the-moment “reflective thinking” about which Sherman expresses doubts (Clark 2007).
Ecological control is distributed, smeared out over time, and often incorporates elements and processes that remain pre-reflective, automatic, and unconscious during the production of action. Agents can exert ecological control over their own physical movements, cognitive tendencies, and evaluative inclinations by marshaling both internal resources (practicing counter-bias inferences until they are routinized and effective without reflective thinking) and external resources (surrounding oneself with people and information that are likely to mitigate the influence of implicit biases and other sources of epistemic injustice) in ways that avoid Sherman’s worries about reflective thinking. Which techniques are likely to be effective is a question psychologists continue to explore, but we hold that the notion of ecological control provides a fruitful and broad model for how to think about the different ways individuals can enter the space mentioned in b), taking control of and calibrating their judgments even in the face of persistent distorting intuitions. Indeed, the more inclusive notion of ecological control allows for a wide range of ways individuals can set things up and indirectly nudge themselves so that when they do have to act, their actions (including their judgments) are less demanding, easier to perform, and better, i.e. more likely to conform to their own pragmatic, epistemic, and moral ideals. (See Holroyd and Kelly (in press) for a discussion of ecological control with a specific application to implicit bias and character, and Brownstein and Saul (in press) for papers addressing many other issues arising at the intersection of implicit bias, epistemology, and moral theory.)
So we hold that credibility assessments are ecologically controllable, since judgments can be adjusted even if misleading intuitions cannot be fully eradicated (and even if Naïve Testimonial Justice is not an ideal that is achievable or constructive to pursue; see Lengbeyer 2004 and Kelly et al. 2010 for parallel discussions focused on racism and implicit bias rather than epistemic injustice and implicit bias). This leaves us with our second point, about what exactly counts as accuracy in cases of credibility assessment. It would be foolish to deny that this is a difficult issue, or that the epistemic waters are indeed muddy. Given our lack of anything like a credibility ruler, how should an epistemically and morally responsible agent proceed?
Note first that in our case with A and B, we have been careful to describe A’s implicit biases from her own point of view, and as “illegitimate” by her own lights; the intuition they help produce, the initial seeming, is “distorted” given the ideals she explicitly avows and aspires to, namely an assessment of the credibility of B that is not sensitive to what she considers epistemically irrelevant factors like B’s skin color, or gender, or sexual orientation, etc. So A should have her own internal motivation to adjust her judgment. Moreover, assuming that the problems here are merely epistemic (and so rejecting radical anti-realism about credibility and trustworthiness, but also perhaps tolerating some vagueness), we can pose a question: if the perfection of complete and precise accuracy of credibility judgments is, as Sherman suggests, practically unattainable and possibly even counterproductive to pursue, how and how much should a person adjust her final credibility judgments in the face of her initial intuitions? If the ideals of Naïve Testimonial Justice and Total Accuracy are set aside, what should we aim for?
We hold that there are many options better than inaction (no adjustment or calibration whatsoever between intuition and judgment), and that some of those options are better than others. To wit: when assessing the credibility of individuals who are members of underrepresented minorities or who have marginalized social identities, it is better to err on the side of over-correction, and risk judging their credibility to be higher than an Ideal Epistemic Observer might advise. A case can be made for this claim on purely epistemic grounds (given the widespread nature of implicit biases, healthy corrective measures are a smart bet), but arguments from pragmatic and moral grounds might be brought to bear as well. Given that members of those groups are likely to have borne the burden of implicit bias and testimonial injustice for much of their lives (and the groups they are members of for much longer), erring in the direction of over-correction seems to be the most fair and reasonable advice available.
Calibrating Judgment
Especially in cases like these, where the epistemic waters are so muddy, we see no reason why such extra-epistemic considerations, perhaps considerations involving restorative justice, should not justifiably be given an amplified role in calibrating credibility judgments; in this case, they militate in the direction of upgrading assessments of members of groups typically subject to implicit biases and epistemic injustice. That said, we are not under the impression that our discussion generalizes to every case or kind of epistemic injustice; but we do maintain that foregrounding the role of implicit bias in the generation and perpetuation of the phenomenon with which Fricker and Sherman are concerned can provide a fresh angle on it, and potentially lead to new ideas for how to address it. In particular, the notion of ecological control may dovetail nicely with attempts to address epistemic injustice at an institutional level, and avoid worries that too much focus on individuals and individual-level psychology leads to ignoring collective and structural sources of injustice (see especially Haslanger 2012; also see Machery et al. 2010 and Mallon and Kelly 2012 for arguments that appreciating implicit racial bias is not only compatible with accounts of social and structural aspects of racism, but can help inform and enrich those accounts).
As for some of Sherman’s other concerns, thinking about the process of taking ecological control of one’s actions points in the direction of less explored models of character and virtue. As Merritt (2000) points out, Hume’s work is much more amenable to a picture of character that acknowledges its “sustaining social contribution,” and it is therefore the right fit for many of the points we have made here (also see Holroyd and Kelly in press). A Humean model of character and virtue that incorporates the notion of ecological control may also avoid Sherman’s worries about virtue epistemology in general, and about construing testimonial justice as a virtue in particular.
References
Banaji, Mahzarin R. and Anthony G. Greenwald. Blindspot: Hidden Biases of Good People. New York: Delacorte Press, 2013.
Brownstein, Michael and Jennifer Saul, eds. Implicit Bias and Philosophy, Volumes 1 and 2. Oxford: Oxford University Press (in press).
Clark, Andy. “Soft Selves and Ecological Control.” In Distributed Cognition and the Will: Individual Volition and Social Context, edited by Don Ross, David Spurrett, Harold Kincaid and G. Lynn Stephens, 101-122. Cambridge, MA: The MIT Press, 2007.
Cohen, L. Jonathan. An Essay on Belief and Acceptance. Oxford: Oxford University Press, 1992.
Dennett, Daniel C. Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge, MA: Bradford Books, 1979.
Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press, 2007.
Haslanger, Sally. Resisting Reality: Social Construction and Social Critique. Oxford: Oxford University Press, 2012.
Holroyd, Jules and Daniel Kelly. “Implicit Bias, Character, and Control.” In From Personality to Virtue, edited by Jonathan Webber and Alberto Masala. Oxford: Oxford University Press (in press).
Ismael, Jenann. “On Being Some-One.” In Big Questions in Free Will, edited by Alfred R. Mele. Oxford: Oxford University Press (forthcoming).
Kelly, Daniel, Luc Faucher and Edouard Machery. “Getting Rid of Racism: Assessing Three Proposals in Light of Psychological Evidence.” Journal of Social Philosophy 41, no. 3 (2010): 293-322.
Lengbeyer, Lawrence. “Racism and Impure Hearts.” In Racism in Mind: Philosophical Explanations of Racism and Its Implications, edited by Michael Levine and Tamas Pataki, 158-178. Ithaca, NY: Cornell University Press, 2004.
Machery, Edouard, Luc Faucher and Daniel R. Kelly. “On the Alleged Inadequacies of Psychological Explanations of Racism.” The Monist 93, no. 2 (2010): 228-255.
Mallon, Ron and Daniel Kelly. “Making Race Out of Nothing: Psychologically Constrained Social Roles.” In The Oxford Handbook of Philosophy of Social Science, edited by Harold Kincaid, 507-529. New York: Oxford University Press, 2012.
McGeer, Victoria and Philip Pettit. “The Self-Regulating Mind.” Language and Communication 22, no. 3 (2002): 281-299.
Merritt, Maria. “Virtue Ethics and Situationist Personality Psychology.” Ethical Theory and Moral Practice 3, no. 4 (2000): 365-383.
Sherman, Benjamin R. “There’s No (Testimonial) Justice: Why Pursuit of a Virtue is Not the Solution to Epistemic Injustice.” Social Epistemology (2015): 1-22. doi:10.1080/02691728.2015.1031852