Author Information: Rebecca Kukla, Georgetown University, firstname.lastname@example.org
Kukla, Rebecca. “A Further Look at Standards of Justification.” Social Epistemology Review and Reply Collective 4, no. 9 (2015): 63-65.
Please refer to:
- Freedman, Karyn L. “Testimony and Epistemic Risk: The Dependence Account.” Social Epistemology 29, no. 3 (2015a): 251-269.
- Kukla, Rebecca. “Commentary on Karyn Freedman, ‘Testimony and Epistemic Risk: The Dependence Account’.” Social Epistemology Review and Reply Collective 3, no. 11 (2014): 46-52.
- Freedman, Karyn L. “Group Accountability Versus Justified Belief: A Reply to Kukla.” Social Epistemology Review and Reply Collective 4, no. 7 (2015): 6-12.
Karyn Freedman (2015a, b) and I agree that standards of justification are interest-relative: How much evidence a belief requires in order to count as justified depends in part on the believer’s investment in being right, or the size of the epistemic risk she takes on in believing. We also agree that in the case of beliefs acquired through testimony, this interest-relativity affects whether someone’s word is enough to count as a justification. Across a few exchanges, we have disagreed over the consequences of this interest-relativity of testimonial knowledge.
In my first reply to Freedman (2015a), I argued that we are faced with an iterative problem: My informant’s own beliefs, like mine, will be based on interest-relative justifications. I cannot settle my own standards of evidence except relative to my values and interests. But in turn, I cannot determine how much evidence someone’s word provides unless I know what her standards of evidence are. In turn, she must have assessed the standards of evidence of everyone from whom she inherited beliefs, and so I have to know not only what her standards are, but whether she was in a position to know the standards of those she relied upon, and so on. I suggested that this need to unearth an enormous web of values, interests, and standards poses a messy and deep problem in applied social epistemology. I tried to make this problem acute by focusing on the case of radically collaborative science, in which a wide network of researchers with very different expertise must somehow collectively produce a justified conclusion. Freedman, in her reply to my reply, accepts that this poses an interesting problem for collaborative science but denies that there is a serious problem for everyday knowers and knowledge. I will briefly explain why I find her response unsatisfying.
Freedman acknowledges that in collaborative science we need to worry about an “opaque fracturing of interests” (2015b, 10), wherein scientists find themselves dependent for belief-formation on a wide range of collaborators. These collaborators’ interests and (hence) evidentiary standards may be quite different from their own, as well as difficult or impossible to recover and assess. But, she insists, “outside of a group context the threat of a regress fades” (2015b, 7).
Before I get to the core of our disagreement, I should note that I am just not convinced that we are ‘outside of a group context’ very often. As Descartes fretted at the start of the Meditations, a huge swath of my beliefs—perhaps most of them—rely on an inextricable and unrecoverable web of testimony. Almost everything that I believe depends in intricate ways upon all sorts of testimonially derived starting assumptions and framing beliefs. Everyday knowledge in general, it seems to me, is radically collaborative knowledge. But let us put that aside, and focus on clean cases of testimonial knowledge.
Freedman’s core claim is that there is a big difference in epistemic kind between the “overall objectives of a radically decentered research team,” which has concerns such as epistemic accountability, and the “theory of justification, which is centered on whether one’s own intellectual house is in order” (2015b, 10). She argues that unlike in the group case, an individual’s ‘intellectual house’ can be in order—her belief can be justified—even if she ends up relying on someone unreliable and acquiring a false belief, as long as she took on a low enough epistemic risk in believing. Freedman writes, “So long as I have met the evidential standards set by my interest in p, then, according to an interest-relative theory of justification, my belief that p will be justified” (2015b, 10). According to Freedman, this unreliability is just ‘bad luck,’ not a failure of justification (2015b, 11).
My basic worry is that Freedman still does not acknowledge that in assessing whether testimony justifies a belief, I need to take into account not only my own interests and their impact on my evidence bar, but those of my informant as well, as she will have needed to do in turn. Unless we know the other person’s interests as well as our own, the problem is not just that we might end up with a false belief—the problem is that we cannot know if her testimony meets our evidential standards in the first place. I don’t see how one’s intellectual house can be in order unless one has reason to think that the testimony one is relying on is reliable enough for one’s needs, even given that those needs are variable. Depending on someone unreliable is not ‘bad luck’ but a form of epistemic irresponsibility, if one could have known that one’s informant’s own stakes in the correctness of her belief were dangerously weak.
My positive point is this: Since every speaker has her own interests and evidence bar, there is no generalizable, default amount of reliability that the word of competent, sincere speakers has. Reliability, on Freedman’s view (and mine), has to be a product of competence, sincerity, and interests. So in deciding whether to take someone at her word, epistemic responsibility requires that I have a sense of all three. And in turn, the testifier will have needed a sense of the interests of those from whom she has inherited beliefs, and so on. The only times when I could responsibly take someone at her word, without having any information that would let me assess her reliability, would be when my own stake in being right is so low that any minimal bit of evidence will do for me. Only then could I responsibly skip caring what my informant’s own stakes were. But this is surely not the standard case.
This is not merely an abstract problem; I think it comes up regularly. For instance, surely we have all had the experience of asking someone for directions and being able to tell, from tone and body language, that the person’s stake in being right is so low that her word is not to be trusted, even though there is no particular evidence of incompetence or an intent to mislead. In this situation, you just care more about getting where you need to go than the person you asked does about directing you correctly.
More pressingly and perhaps more relevantly, most of us get many of our beliefs about current events from posts we see from friends on social media. We typically do exercise the kind of judgment I am talking about in these cases; we know that different people have different investments in different kinds of social and political and scientific facts, and we take that into account when deciding whether to believe something posted by a particular friend. I will not believe something my 80-year-old aunt posts about climate change because I don’t take her to be competent, and when my pathologically self-promoting colleague posts something about how all her students say her class changed their lives, I don’t believe her because I don’t take her to be sincere. But I also won’t believe something my racist Republican cousin posts about evidence exonerating a policeman who shot a young black male, even if I take my cousin to be basically competent and sincere, because I know his stakes in believing such things are way too high for my taste. The problem—and it’s a very real problem—is that even when I exercise such judgment, I can’t be sure that the poster exercised it in turn—and if she did, I can’t be sure that the person she got it from did as well, and so on. As purported facts fly around the Internet, even the most epistemically savvy and responsible among us find ourselves occasionally duped by misleading posts that seem reliable, but come at the end of an untraceably long chain of testimony. This is not just poor epistemic luck, but a failure of epistemic continence, albeit one that is well-nigh impossible to avoid.
Freedman, Karyn L. “Testimony and Epistemic Risk: The Dependence Account.” Social Epistemology 29, no. 3 (2015a): 251-269.
Freedman, Karyn L. “Group Accountability Versus Justified Belief: A Reply to Kukla.” Social Epistemology Review and Reply Collective 4, no. 7 (2015b): 6-12.
Kukla, Rebecca. “Commentary on Karyn Freedman, ‘Testimony and Epistemic Risk: The Dependence Account’.” Social Epistemology Review and Reply Collective 3, no. 11 (2014): 46-52.