Author Information: Nancy Daukas, Guilford College, firstname.lastname@example.org
Daukas, Nancy. 2012. Comments on Karen Jones, ‘The Politics of Intellectual Self-Trust.’ Social Epistemology Review and Reply Collective x (x): x-x.
Please refer to:
- Jones, Karen. 2012. The politics of intellectual self-trust. Social Epistemology 26 (2): 237-251.
Reading Karen Jones’ paper, “The Politics of Intellectual Self-Trust” has stretched and deepened my thinking on epistemic politics, epistemic psychology, and trust; anyone with an interest in those areas should read it!
In this paper, Jones brings together her long-standing interests in trust and in the politics of credibility, building on her earlier work in those areas (see esp. 1996 and 2002). The paper develops a compelling argument and a focused, carefully thought-out account of self-trust, which takes the same “mixed” cognitive/affective approach to self-trust that Jones takes toward other-trust (1996). The paper goes on to chart the interactive relation between self-trust and the perpetuation, and possible disruption, of epistemic injustice. I’ll sketch the view and its primary argument, and raise several questions along the way.
According to Jones (2012), “intellectual self-trust” (in a domain D) is an attitude of “optimism about one’s cognitive competence” in D (242-243). It is “appropriate if and only if one’s domain-relativized optimism matches one’s domain-based competence” (243). The paper focuses on the former (what self-trust is) rather than the latter (when it is appropriate). An agent who trusts herself intellectually in D will be disposed to: feel confident regarding her cognitive abilities in D; rely on her D-related epistemic “mechanisms and methods” (240) even when the possibility of error brings risk; privilege her own D-related judgments in circumstances of conflicting judgment; and make D-related assertions with confidence. And her self-trust will (at least partly) determine the character of her epistemic meta-reflection: excessive distrust will lead to excessive second-guessing and overly harsh self-assessments; excessive trust will lead to excessive optimism and inadequate self-monitoring (all duly qualified, i.e., relativized to a domain).
Jones develops her argument by showing that her account succeeds where others fail to give us what we should want from an account of self-trust. According to the first contender — the “simple dispositional model” (240) — to trust oneself intellectually (in D) is to have a set of dispositions to rely on one’s own epistemic methods and mechanisms (in D). While Jones agrees that if agent A trusts himself intellectually, then he will have that set of dispositions, she argues that an appeal to such dispositions alone doesn’t capture the kind of self-trust we are looking for, which is the self-trust involved in epistemic agency. My dog is disposed to fully rely on her epistemic mechanisms, but she isn’t thereby an epistemic agent exhibiting self-trust. For this, there needs to be “a gap between the exercise of the mechanisms and belief — a space that opens up the possibility of epistemic agency” (239). Epistemic agents are able to evaluatively self-reflect; the self-trusting epistemic agent’s dispositions to self-reflect express her self-trust.
On the second view — the purely cognitive model — to have intellectual self-trust in a domain is “to believe or judge that your mechanisms or methods of inquiry in that domain are reliable” (240). The self-trust is “appropriate” iff those beliefs are more or less correct. Jones argues that this cognitivist model cannot account for cases where beliefs are correct but fail to engage the relevant reliance-dispositions. You can believe that you are cognitively competent in a domain, and nonetheless not behave in a way that expresses self-trust in that domain, since “cognitive habits lag behind reflective awareness” (238). Jones’ first example: you know that you’ve placed your passport in your bag, and yet you (pathologically) constantly re-check. The second: you know that research shows that women’s intellectual contributions are systematically undervalued in your field (and you believe that they should not be), and yet you go on (unintentionally) undervaluing them, because you do not see your belief about the research as a reason to doubt yourself. This sort of (patterned) mismatch between belief and epistemic dispositions wouldn’t be possible if the cognitive account of self-trust were correct.
What explains this mismatch, Jones argues, is that self-trust is an affective stance: affective states can affect perception, and in particular, they can affect whether or not you perceive something as a reason (in this case, whether or not you see the research as a reason to distrust your assessments of women’s contributions to the field). The account of self-trust we need, like Jones’ earlier account of other-trust, is therefore a “mixed account” (242) — both affective and cognitive. This kind of account tightly links self-trust to the relevant dispositions, while explaining the mismatch that may occur between belief and self-trust.
I think that Jones is absolutely right that self-trust, like other-trust, involves an affective component. If someone asks you why you trust another, you may say “I just do”; if you don’t trust another, it might be just because “I just get a weird vibe from him”. Or you might express a belief in D with caution even though your level of competence warrants confidence, simply because you don’t feel sure of yourself, for no good reason (or, you may feel sure of yourself, for no good reason!). So: I think that Jones’ view goes a long way toward capturing the phenomenology of self-(dis)trust.
However, I am not sure that I am fully convinced by the argument against cognitivism (although I think the mixed account is right). I think it all depends on how we understand belief, and I think I have a different picture of belief than Jones does. Let’s return to the counterexamples: first, the pathological re-checker: should we expect an account of self-trust to track pathological departures from rational behavior? Or is the idea of pathology that it doesn’t follow normal patterns (here, patterns normal to the relation between belief and behavior)? I’m not sure.
I’m clearer on what I think about the second example (where you continue to trust your first-pass evaluative reactions to the relative merit of differently gendered colleagues’ work even though you have a belief that implies that you shouldn’t). A different take on the situation where someone relies on his cognitive capacities in D even though he “knows” that he shouldn’t is that he continues to harbor sexist beliefs (integrated with affective responses) at a “deep level”. (This brings us into conversation with indirect voluntarism: as with my patterned affective responses, so with my “deep” beliefs: I can change them only gradually, by rehabituating myself — e.g., repeatedly catching myself when I realize that I am “seeing” someone as “a-woman-therefore-someone-not-fully-competent-or-authoritative-in-D”, and listening to her more carefully as a result.) If I am epistemically responsible, I will decide to rehabituate myself in order to adjust my (often affect-inflected) entrenched beliefs, when (as a self-reflexive agent) I recognize that “new information” requires that I do so. The resistance of “deep beliefs” (such as those that comprise a social ontology) to change is what we should expect if we see them as forming a system or “web” — changing one reverberates through the system and therefore requires a good deal of force. This is the sort of thing that underlies the cognitive strain of my account (2006), which Jones discusses in a footnote. But I should emphasize: my account there focuses on trustworthiness — on when trust is appropriate — without ever defining trust. This is one of the reasons that thinking about Jones’ paper has been worthwhile for me.
I’m also unsure about whether or not the account that Jones develops really is a cognitive/affective hybrid — at times in the paper, self-trust seems to be (just) a robust feeling of self-confidence. Self-trust is optimism toward one’s cognitive competence in a domain, where that optimism is understood to be an affective attitude toward one’s cognitive methods and mechanisms, expressed dispositionally (as spelled out above). Jones argues that self-trust “does not require the belief that one is in fact competent, but tends to promote the very belief that would justify it” (245). So: the optimism isn’t linked to a belief (such as the cheery thought “I am good at this kind of cognitive task; I can do this easily!”), although the two (the belief and the optimism-affect) are likely to causally reinforce one another (unless “strong countervailing forces of cognitive habit” stand in the way) (243).
So my question is: where is the cognitive element of the account? I think it needs to be there: I don’t think we can really make sense of all of the dispositions through which self-trust is expressed as flowing from optimism that is not tied to a belief or a belief-like attitude. A belief such as “if anyone can figure this out, I can” is an optimistic belief: it “comes with” feelings of optimism. The disposition to confidently assert my D-related beliefs seems to express an optimistic belief in my reliability. My reliance on my abilities where error carries risk — which seems to me to be a central case of trust — seems to express my (cognitive) grasp that I am vulnerable, and my optimistic belief that my cognitive abilities can deliver.
One related worry: if I’ve got it right, Jones’ view of self-trust allows that one can believe oneself untrustworthy and yet trust oneself, and one can believe oneself trustworthy and yet not trust oneself. The latter isn’t hard to make sense of when self-esteem has never fully developed, or has taken lots of battering. The former is harder to make sense of. It certainly is a mark of a poor epistemic agent! But I would think that the belief (“I am not trustworthy in this domain”) would and should destroy the optimism. On the “deeply entrenched belief” picture of the mismatch between explicit judgment and actual reliance, I would have a residual belief that I am not trustworthy in D which well-considered judgment hasn’t yet displaced.
This case leads me to wonder whether we shouldn’t expect a disanalogy between an account of self-trust and an account of other-trust: I have an easier time making sense of the mismatch between a belief (“S is not trustworthy in D”) and an affective optimism regarding S in D, when there are two distinct agencies involved than when there is only one involved. I might think/feel: “even though S’s track record in D is mixed, she’ll come through for me,” but here I am trusting her as a friend. I expect her to be motivated by our friendship; on Jones’ (1996) account, I expect that she will be moved by my trusting her. But how could that work in the case of self-trust?
Finally, to turn to the relation between self-trust and testimonial injustice: remember, here we consider patterns integrated into normal testimonial practices through which members of socially subordinated groups suffer epistemically undeserved “credibility deficits” (see Fricker 2007) in testimonial exchange (especially in domains where recognition of epistemic authority tends to reinforce power and prestige); members of socially privileged groups enjoy epistemically unearned credibility bonuses in that type of domain; the result is testimonial injustice suffered by the socially subordinated. Jones argues — and I wholeheartedly agree — that since all aspects of epistemic agency, including intellectual self-trust, normally develop through social/epistemic interaction, agents who are systematically treated as not fully deserving of epistemic respect — i.e., who suffer “credibility deficits” (in D) — are likely to become intellectually distrusting of themselves (in D); and it is rational of them to do so, given that the mechanisms of testimonial injustice are woven into the normal patterns of epistemic exchange through which we all learn to participate in epistemic communities. (But — as Jones indicates in her final paragraph while making a related point — it’s important to acknowledge that if those epistemically disrespected (in some communities/social contexts) are suitably supported in other communities/social contexts (especially “home” communities), their self-trust may hold steady. And they are positioned to become effective agents of change.)
So: the marginalized are likely to develop excessive self-distrust; the privileged are likely to develop excessive self-trust; hence, in Jones’ words, the self-trust of both is “miscalibrated”: it doesn’t match their actual cognitive competences, or their actual relative cognitive competences (in D). Correcting for epistemic injustice, and purging its patterns from normal practices, thus requires “recalibrating intellectual self-trust”. (Here Jones continues the argument against cognitivism: meta-reflection alone — coming to form corrective evaluative beliefs about one’s cognitive capacities — doesn’t manage to grab onto behavior or “recalibrate” self-trust.) She suggests three stages that recalibration will require: we need to learn to recognize testimonial injustice and its effects on self-trust; we need to undo its effects “by actively disrupting the dispositions” that maintain it; and, through rehabituation, we need to “come to have the right affective attitude towards our cognitive competence in a domain” (247).
I think these three carefully articulated stages are all key: here Jones draws effectively on her earlier work in (2002). Here again, I want to note how much work the cognitive part of the account does, and suggest that meta-reflection is a more effective force for change than Jones suggests: I agree, as Jones says, that solitary meta-reflection is not very effective in actually changing a person’s habits and affective responses. But meta-reflection is rarely fully solitary: we gauge how we are doing cognitively (in D) in complex, patterned interactions through which we “read” others’ responses to us, in light of how we think they are doing cognitively. If we shift among communities — and I think this is key too — we’ll have to notice/feel differences in those patterns, and the dissonance between our different experiences should cause us (if we are moved by a genuine desire for truth and justice) both to recognize what’s going on, and to have a range of affective responses in tandem with (or as part of) that recognition, which should cause us to attend more consciously to what we are doing, and thereby to begin to disrupt dispositional patterns. So: although ingrained habits of mind/feeling are hard to destabilize, new feelings and habits are introduced by new recognition, which forms new beliefs and affects, and together they work on displacing the old ways.
Don’t get me wrong: I am not arguing for (pure) cognitivism. One of the recurrent strengths of Jones’ work is that it recognizes that features of agency and practice once thought of (in academic philosophy) as (properly) purely cognitive, in domains once thought of as (properly) purely cognitive, are (I would add also) affective to the core. Jones’ emphasis on the impotence of “pure meta-cognition” is important: if kids learn in school that “racist beliefs are wrong”, that day’s lesson is not going to undo their dispositions to have racist affective responses (to members of other racial groupings, or their own, depending on the character of their “home” communities) even if they believe that their teacher knows what she is talking about. So-called “color-blind” policies allow patterns of racist behavior and affect to simply continue.
My point is that this may be because “deep beliefs” to which those affects attach have not been displaced by what we learn “in theory”. Just as affective elements of our psychology run deep, so do cognitive ones.
Daukas, Nancy. 2006. Epistemic trust and social location. Episteme 3 (1-2): 109–24.
Fricker, Miranda. 2007. Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.
Jones, Karen. 1996. Trust as an affective attitude. Ethics 107 (1): 4–25.
Jones, Karen. 2002. The politics of credibility. In A mind of one’s own: Feminist essays on reason and objectivity, 2nd ed., edited by Louise Antony and Charlotte Witt, pp. 154–76. Boulder, CO: Westview Press.
Jones, Karen. 2012. The politics of intellectual self-trust. Social Epistemology 26 (2): 237-251.