Author Information: Ben Sherman, Brandeis University, email@example.com
Sherman, Ben. “Learning How to Think Better: A Response to Davidson and Kelly.” Social Epistemology Review and Reply Collective 5, no. 3 (2016): 48-53.
Please refer to:
- Sherman, Benjamin R. “There’s No (Testimonial) Justice: Why Pursuit of a Virtue is Not the Solution to Epistemic Injustice.” Social Epistemology (2015): 1-22. doi:10.1080/02691728.2015.1031852.
- Alfano, Mark. “Becoming Less Unreasonable: A Reply to Sherman.” Social Epistemology Review and Reply Collective 4, no. 7 (2015): 59-62.
- Sherman, Ben. “(Less Un-) Attainable Virtues: A Response to Alfano.” Social Epistemology Review and Reply Collective 4, no. 10 (2015): 14-18.
- Davidson, Lacey J. and Daniel R. Kelly. “Intuition, Judgment, and the Space Between: A Reply to Sherman.” Social Epistemology Review and Reply Collective 4, no. 11 (2015): 15-20.
- Washington, Natalia. “I Don’t Want to Change Your Mind: A Reply to Sherman.” Social Epistemology Review and Reply Collective 5, no. 3 (2016): 10-14.
1. They rightly note that I made an error when invoking Moore’s Paradox. On page 12, I say “people cannot think their beliefs are too far from being true without falling afoul of Moore’s Paradox: It makes no sense to say ‘I believe that p, but it is likely that p is false.’” Davidson and Kelly point out that, on some theories of belief, it does make sense to say just that. If we accept, for instance, L. Jonathan Cohen’s distinction between belief and acceptance, my beliefs are involuntary psychological dispositions, and in some cases I might accept (as a matter of intellectual policy) that some of my beliefs are unreasonable, yet annoyingly persistent. As Davidson and Kelly point out, it is quite coherent to say that I can’t help believing a person is not very credible, while realizing that this belief is not well-founded, and so accepting an opposite verdict. Throughout most of my paper, I try to use language suggestive of verdictive credal states, and when bringing up Moore’s Paradox I should have adjusted it accordingly, e.g. by using “think” or “judge” in place of “believe.”
2. They offer a position that they describe as “broadly Frickerian in spirit,” but which nonetheless “departs from Fricker’s own.” My major objections to Fricker are that (a) her proposed response to epistemic injustice depends on a virtue theory that is theoretically dubious, (b) she proposes a particular method for attaining the virtue that is likely to be practically ineffective, and (c) her overall strategy for becoming epistemically just invites dangerous presumptions that we are, or ever will be, able to reliably recognize the unbiased truth. Davidson and Kelly’s position, if broadly Frickerian, does not include any of the problems I object to in Fricker’s view, and so I welcome it as a fruitful proposal worthy of serious consideration.
I suspect we disagree about very little, and I might consider my own position to be broadly Frickerian in any context where I am not focusing on specific problems in Fricker’s views. My own stated view is that “our best chance of succeeding in avoiding or correcting prejudices is to remain vigilant about the kinds of prejudices we might be subject to, how to identify them, and how to correct for them.” My position wholly supports Davidson and Kelly’s view that “there are many options better than inaction (no adjustment or calibration whatsoever between intuition and judgment), and that some of those options are better than others.” If we find out, or even merely suspect, that our attitudes toward others are prejudicial and harmful, that is a compelling moral and intellectual reason to do something different, and also to realize that what we decide to do may prove to be counterproductive, and need to be rethought.
There are two major parts to Davidson and Kelly’s proposal: First, they apparently accept my worries that Fricker is too optimistic about individual people’s capacity to reflectively cancel out their own prejudicial biases. But there are various other ways we can try to neutralize our biases, such as attempting to exert ecological control over our judgments. As Alfano puts it, “Instead of changing myself (narrowly conceived), I can take control by selecting or designing my environment.” As all my commentators note, research into ecological control is in its early stages, but looks fruitful. Alfano refers to a promising longitudinal study by Devine et al., and Davidson and Kelly point to two books in press (Holroyd and Kelly, and Brownstein and Saul) as promising early steps in this field. In my own paper, I cite two studies that I think contribute to this literature as well (though I do not recall them using the specific term “ecological control”), those being Sassenberg and Moskowitz and Lai et al. Needless to say, I am highly in favor of this kind of empirical work examining what strategies are, and are not, effective for reducing our prejudicial biases.
As I mention when discussing a consequentialist approach to epistemic injustice, empirical research is, and will continue to be, an important guide for determining how to eliminate (or at least reduce) the impact of our unintended prejudices. (I imagine it is in a similarly results-focused spirit that Davidson and Kelly propose that a Humean model of the virtues might be more fruitful than the more Aristotelian model that dominates most virtue theory.) I think this is right, if a Humean model of virtue can survive the situationist challenge to theories of virtue in general. In my response to Alfano, I suggest a very minimal notion of virtue, which I think would fit well with a Humean view, though a view so minimal may retain very little of what has traditionally made virtue theories appealing.
Second, Davidson and Kelly propose a revised version of Fricker’s “reflation” strategy: “when assessing the credibility of individuals who are members of underrepresented minorities or with marginalized social identities, it is better to err on the side of over correction, and risk judging their credibility to be higher than an Ideal Epistemic Observer might advise.” At the very least, I think this proposal is an improvement on Fricker’s strategy. I offer both theoretical and empirical reasons to think that we are unlikely to be accurate if we make a conscious effort to overcome our initial biases. Confirmation bias and anchoring effects will still be obstacles for this strategy, and it seems likely that many people will under-correct even when they think they are over-correcting, but that is not, in itself, a reason not to try to overcorrect.
I also point out empirical reasons to worry Fricker’s reflation strategy would be counterproductive, though, including perverse results of reflecting on one’s own biases, moral credentialing, and rebound effects. These counterproductive tendencies seem at least as likely to plague Davidson and Kelly’s strategy. My impression of the research overall is that, if we are trying to use conscious (reflective, explicit, “slow”) thinking to counteract unconscious (pre-reflective, implicit, “fast”) processing, it is not very fruitful to try to counter unconscious judgments one by one; better to use ecological control, priming, framing, and other techniques to change the pattern of unconscious processing. But if there is research to the contrary, I would be very interested to see it.
Davidson and Kelly might object, though, that my last argument was consequentialist in character, whereas they propose a more deontological reason for over-corrective reflation: “Especially in cases like these where the epistemic waters are so muddy, we see no reason why such extra-epistemic considerations, perhaps considerations involving restorative justice, should not justifiably be given an amplified role in calibrating credibility judgments—which in this case militates in the direction of upgrading assessments of members of groups typically subject to implicit biases and epistemic injustice.” I can see two reasons we should be leery of adopting over-corrective reflation as a general strategy.
First, the ways that the strategy is prone to being counterproductive will sometimes lead to further injustices of the same kind. Second, a given instance of over-corrective reflation will not always serve the purposes of restorative justice. Some forms of marginalization are exacerbated by overestimating individuals’ credibility. For one thing, individuals in marginalized groups are often systematically under-informed about their rights and opportunities, and so may benefit from having their lack of credibility recognized on at least those topics. For another, consider the phenomenon of individuals claiming to speak for an entire marginalized group. In cases of these kinds, it is harmful to give a person’s testimony too little or too much credibility; so, we cannot assume that overestimation always serves corrective justice. Can we assume that it usually does? I’m honestly not sure. But if it turns out that reflation is not counterproductive, I would think overestimation would be a good rule of thumb, as long as it was tempered with an awareness of some of the situations where it is likely to reinforce injustice.
So, counterproductive effects are, I think, the biggest worry, and a reason not to simply replace Fricker’s reflation strategy with an over-corrective reflation strategy. But there may be another important distinction between Fricker’s strategy and the suggested adjustment: Fricker is not explicitly restricting the scope of her strategy to situations in which intuition and judgment clash. So, perhaps Davidson and Kelly, by focusing on a narrower range of cases, can avoid the problems associated with judging someone low-credibility at t1, and then explicitly reviewing that judgment at t2. (Of course, that would also mean that Davidson and Kelly’s recommended response would not come into play nearly as often.)
The difference is important since Davidson and Kelly suppose that, at the time we consider over-corrective reflation, we are already furnished with a judgment that our credibility judgment should be higher. But if we already have that (unprejudiced or anti-prejudiced) judgment, it is unclear to me that we should reflate anything at all. Let us consider two ways an intuition and judgment might conflict. First, let us suppose that intuitively we find a speaker’s testimony to be low-credibility, but we consciously judge it to be high-credibility. In this case, by hypothesis, we have reason to think our reflective judgment is more trustworthy than our intuition, so it is not clear we need to reflate anything; we should simply trust the more reliable faculty and ignore the other. But I take it they are more interested in the case where we intuitively find a speaker’s testimony to be low-credibility, and we do not have a first-order reflective judgment about the speaker’s credibility, but do have a reflective judgment that our intuition is prejudicially biased. Shouldn’t we try to reflate in this situation? The research I cited suggests that doing so might be counterproductive, but that research is not focused on situations where people have an explicit internal conflict between intuition and judgment. I would suggest, though, that it indicates, at the very least, that we are unlikely to successfully overcome our prejudices (for long) if we continue asking the same question that elicited the prejudicial response in the first place. If asking “How credible does this person seem?” led to a suspicious result at first, we will probably be fighting an uphill battle if we try to bring ourselves to think the person is more credible-seeming. But if we judge that our intuitions are unreliable, and we need an overall credibility judgment, our best course is probably to reframe the way we consider a question.
Our prospects for success might not be great if we do not know about strategies that have been proven effective, but at least we will be avoiding strategies that we have already seen are likely to be ineffective: Trusting a biased intuition, and trying to explicitly adjust a biased initial judgment.
Alfano, Mark. “Becoming Less Unreasonable: A Reply to Sherman.” Social Epistemology Review and Reply Collective 4, no. 7 (2015): 59-62.
Brownstein, Michael and Jennifer Saul. Philosophy and Implicit Bias, Volumes 1 & 2. Oxford: Oxford University Press, in press.
Clark, Andy. “Soft Selves and Ecological Control.” In Distributed Cognition and the Will: Individual Volition and Social Context, edited by Don Ross, David Spurrett, Harold Kincaid and G. Lynn Stephens, 101-122. Cambridge, MA: The MIT Press, 2007.
Cohen, L. Jonathan. An Essay on Belief and Acceptance. Oxford: Oxford University Press, 1992.
Davidson, Lacey J. and Daniel R. Kelly. “Intuition, Judgment, and the Space Between: A Reply to Sherman.” Social Epistemology Review and Reply Collective 4, no. 11 (2015): 15-20.
Devine, Patricia G., Patrick S. Forscher, Anthony J. Austin, and William T. L. Cox. “Long-Term Reduction in Implicit Bias: A Prejudice Habit-Breaking Intervention.” Journal of Experimental Social Psychology 48, no. 6 (2012): 1267–78.
Effron, Daniel A., Jessica S. Cameron, and Benoît Monin. “Endorsing Obama Licenses Favoring Whites.” Journal of Experimental Social Psychology 45 (2009): 590-593.
Follenfant, Alice and François Ric. “Behavioral Rebound Following Stereotype Suppression.” European Journal of Social Psychology 40 (2010): 774-782.
Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press, 2007.
Holroyd, Jules and Daniel Kelly. “Implicit Responsibility Character and Control.” In From Personality to Virtue, edited by Jonathan Webber and Alberto Masala. Oxford: Oxford University Press, in press.
Lai, C. K., M. Marini, S. A. Lehr, C. Cerruti, J. L. Shin, J. A. Joy-Gaba, A. K. Ho, B. A. Teachman, S. P. Wojcik, S. P. Koleva, R. S. Frazier, L. Heiphetz, E. Chen, R. N. Turner, J. Haidt, S. Kesebir, C. B. Hawkins, H. S. Schaefer, S. Rubichi, G. Sartori, C. M. Dial, N. Sriram, M. R. Banaji, and B. A. Nosek. “Reducing Implicit Racial Preferences: I. A Comparative Investigation of 17 Interventions.” Journal of Experimental Psychology: General 143, no. 4 (2014): 1765-1785.
Macrae, C. Neil, Galen V. Bodenhausen, Alan B. Milne, and Jolanda Jetten. “Out of Mind but Back in Sight: Stereotypes on the Rebound.” Journal of Personality and Social Psychology 67, no. 5 (1994): 808-817.
Monteith, Margo J., Jeffrey W. Sherman, and Patricia G. Devine. “Suppression as a Stereotype Control Strategy.” Personality and Social Psychology Review 2, no. 1 (1998): 63-82.
Monin, Benoît and Dale T. Miller. “Moral Credentials and the Expression of Prejudice.” Journal of Personality and Social Psychology 81, no. 1 (2001): 33-43.
Pronin, Emily, Daniel Y. Lin, and Lee Ross. “The Bias Blind Spot: Perception of Bias in Self and Others.” Personality and Social Psychology Bulletin 28, no. 3 (2002): 369-381.
Sassenberg, Kai and Gordon B. Moskowitz. “Don’t Stereotype, Think Different! Overcoming Automatic Stereotype Activation by Mindset Priming.” Journal of Experimental Social Psychology 41, no. 5 (2005): 506-514.
Sherman, Benjamin R. “There’s No (Testimonial) Justice: Why Pursuit of a Virtue is Not the Solution to Epistemic Injustice.” Social Epistemology (2015): 1-22. doi:10.1080/02691728.2015.1031852.
Sherman, Ben. “(Less Un-) Attainable Virtues: A Response to Alfano.” Social Epistemology Review and Reply Collective 4, no. 10 (2015): 14-18.
Uhlmann, Eric Luis and Geoffrey L. Cohen. “‘I Think It, Therefore It’s True’: Effects of Self-Perceived Objectivity on Hiring Decisions.” Organizational Behavior and Human Decision Processes 104, no. 2 (2007): 207-223.
 Lacey J. Davidson and Daniel R. Kelly 2015.
 Sherman 2015a, 12.
 Cohen 1992.
 Sherman 2015a, 16.
cf. Andy Clark 2007.
 Mark Alfano 2015.
 Patricia G. Devine et al. 2012.
 Jules Holroyd and Daniel Kelly, in press.
 Michael Brownstein and Jennifer Saul, in press.
 Sherman 2015b, fn 14-15.
 Kai Sassenberg and Gordon B. Moskowitz 2005, 511.
 C.K. Lai et al. 2014.
 Sherman 2015a, 4.
 Sherman 2015b.
 Sherman 2015a, 15.
Emily Pronin, Daniel Y. Lin, and Lee Ross 2002, or Eric L. Uhlmann and Geoffrey L. Cohen 2007.
Benoît Monin and Dale T. Miller 2001; Daniel A. Effron, Jessica S. Cameron, and Benoît Monin 2009.
C. Neil Macrae et al. 1994; Margo J. Monteith, Jeffrey W. Sherman, and Patricia G. Devine 1998; Alice Follenfant and François Ric 2010; cited in Sherman 2015a, 15.
 cf. Miranda Fricker 2007, 91-92.