Author Information: Karyn L. Freedman, University of Guelph, firstname.lastname@example.org
Freedman, Karyn L. “Group Accountability Versus Justified Belief: A Reply to Kukla.” Social Epistemology Review and Reply Collective 4, no. 7 (2015): 6-12.
Please refer to:
- Freedman, Karyn L. “Testimony and Epistemic Risk: The Dependence Account.” Social Epistemology (2014): 1-19. doi: 10.1080/02691728.2014.884183.
- Kukla, Rebecca. “Commentary on Karyn Freedman, ‘Testimony and Epistemic Risk: The Dependence Account’.” Social Epistemology Review and Reply Collective 3, no. 11 (2014): 46-52.
I am grateful to Rebecca Kukla (2014) for her generous and fair reading of my “Testimony and Epistemic Risk: The Dependence Account.” My concern in that paper is with the central epistemic question regarding the normative requirements for beliefs based on testimony; that is, whether a hearer has an epistemic right to believe what she is told in the absence of any evidence about the reliability of a speaker. An interest-relative theory of justification is my answer to this question. I argue that beliefs based on testimony require evidence for justification, but how much evidence is needed, in any given case, depends on the hearer and the epistemic risk she takes in believing that p is true. In other words, the evidential burden that an individual must meet in order to be justified in believing that p depends on how important it is for her that p is true, given her interest in p. The more she cares about p, the more evidence is needed to justify her belief that p.
Interests and Normativity
In “Testimony and Epistemic Risk” (2014), I argue that we need to look at interests in order to assess the normative status of beliefs because evidence, on its own, can’t tell us how much evidence is needed in any given case. This view entails a normative asymmetry about justified belief, insofar as two individuals with identical evidence about p will not necessarily be in normatively symmetrical positions with respect to p; one may have a justified belief that p while the other might not.
By using interests to buttress standards of evidence, an interest-relative theory of justification helps us make sense of the fact that what we take as evidence, or reasons for belief, can have probative force in one case but not in another. Another virtue of this account is that it helps to explain the intuitive pull of both the credulist and reductivist positions in contemporary debates on testimony. If I am told that p and p is inconsequential to me, then the credulist principle of a presumptive right to believe (in the absence of defeaters) seems reasonable. If, on the other hand, the risk I take in believing that p is high, because for one reason or another I am invested in the truth of p, then, as the reductivist suggests, someone’s say-so is insufficient evidence that p.
I think this model of justification applies to all of our beliefs, however they are acquired, but it is crystallized in the case of beliefs based on testimony, not because we are more deeply invested in the kinds of things that we can learn from being told but rather because of the social nature of testimony. The fact that in testimonial exchanges the hearer must rely on the competence and sincerity of the teller means that the opportunities for error are greater than in the usual case of first-person observation.
Addressing Inductive Risk
Kukla offers a number of thoughtful objections to this account, and her discussion of the inductive risk literature with respect to hypothesis acceptance in science presents an interesting counterpart to it. Indeed, it is apropos of this literature that Kukla raises her main concern about my account. Although Kukla accepts that justification is interest-relative (49, 2014), she worries that once we move in the direction of interest-relativity we encounter a complex and vicious regress of varying values, epistemic risk and evidence standards which threatens to undermine the justification of any one belief. As I shall argue, however, this worry is unfounded. The problem of varying evidential standards poses a legitimate concern in the case of collaborative research (in science and elsewhere), in particular when it comes to group accountability. However, outside of a group context the threat of a regress fades, and I am happy to have this opportunity to explain why this is so.
But first, I want to respond to a secondary concern that Kukla raises about my account. In (2014) I distinguish my interest-relative theory of justification from interest-relative accounts found in the pragmatic encroachment literature (e.g. Fantl and McGrath, 2002). I do so because I think that a focus on practical interests is problematically narrow. As I argue in (2014), I might care about a matter because it has an impact on some action that I undertake, but I might instead care about a matter because I am emotionally invested in it. That is, it may bear on my general happiness or well-being, even if it has no impact on my actions.
Kukla is skeptical about my attempt to distinguish my position from these similar positions in the pragmatic encroachment literature. She looks at an example that I offer about George Santayana:
Imagine, for instance, that when I was a graduate student George Santayana was a philosophical hero of mine, but that I am no longer very interested in his work and have no on-going research relating to his philosophy. Still, suppose that I hold him in high esteem, until, that is, I hear that he was anti-Semitic. Suppose further that this little known fact about Santayana bears on my opinion of him, such that I sincerely hope that it is false. In this case, while I have no practical stake in the claim that Santayana is anti-Semitic I care about its truth and, again, really hope that it is false (8, 2014).
In (2014) I argue that in light of my high regard for Santayana, the epistemic risk I take in believing a testimonial report about his purported anti-Semitism is high. According to Kukla, however, “the dualism between emotional and practical investments seems pretty fragile and surface-level” (47, 2014). As Kukla sees it, if I am truly emotionally invested in Santayana and his purported anti-Semitism, then there will be at least some practical consequences of this fact on my behavior and decision-making. Maybe, as she suggests, I will not display his book cover on my office wall, or perhaps I will stop announcing to my students that he is my favourite philosopher, or maybe I will twist my hands anxiously when rereading my earlier papers about him (47, 2014).
Although Kukla is right that it is possible to squeeze some minimal practical consequences from nearly all of our beliefs, as an objection to broadening the notion of interests to encompass our general well-being in addition to practical concerns, this observation misses its mark. In examples like the Santayana one, the practical consequences of my belief that p are unforeseen and, more importantly, unintended. They are simply not the reason why, in the first place, the epistemic risk I take in believing that p is high. To deny this is to diminish the legitimate role that our general well-being plays in raising or lowering evidential standards for justified belief.
Suppose that I love the sport of tennis and that, in particular, my favourite player is Roger Federer. Imagine that, while at the grocery store on my way home from work, I overhear a woman say that Federer lost his French Open quarterfinal match earlier that day. Because of the enormous disappointment I will experience if this claim turns out to be true, I decide to reserve belief, to the degree to which I am able, until I can get home to my computer and double check this information. And that is just what I ought to do, given the raised evidential burden I face in light of the epistemic risk I take in believing that Federer lost.
My emotional investment in Federer’s success is why the epistemic risk I take in believing that p is high. No doubt, if p is true, there will be some incidental practical consequences of my believing that p. Perhaps I won’t bother tuning in to watch the final match. But since that is not the reason why the epistemic risk I take in believing that p is high, it does no explanatory work in accounting for the raised evidential standards I must meet in order to have a justified belief that p. If we want to be able to account for the epistemic risk believers take in cases like this one, then we need to broaden our notion of interests beyond the strictly practical.
Kukla’s main objection to my account has a different focus, and she develops it through a discussion of theories of inductive risk regarding hypothesis acceptance in science. There is a rich literature on inductive risk in science (e.g. Rudner 1953, Douglas 2000), and I am grateful to Kukla for drawing out the similarities between that work and my interest-relative theory of justification.
As she notes, the inductive risk theorist and I agree that it is not possible to decide how much evidence is enough evidence to justify a belief, or warrant acceptance of a hypothesis, without reference to an individual’s interests, or values. For the inductive risk theorist, that’s because, as Kukla puts it, “there is no standard inherent in the evidence itself about how to balance inductive risks” (48, 2014). I make a similar point when I state that evidence, on its own, can never tell us how much evidence we need to justify a belief that p, or whether p is worth inquiring about in the first place (9, 2014). That claim is the basis of my criticism of strict evidentialism (my main target in (2014)) and the motivation for my argument in favour of an interest-relative theory of justification.
In my discussion of interests, I focus on the epistemic risk we take in believing that p is true, given our interest in p. But Kukla argues that there are, in fact, two kinds of risks that we take when considering that p. There is what the inductive risk theorist calls a ‘type one’ error, which is the risk we take in accepting a hypothesis that is false, and there is also what is called a ‘type two’ error, which is the risk we take in rejecting a hypothesis that is true (48, 2014).
Because the main job of a theory of justification is setting the conditions for evaluating the normative status of beliefs, type two errors get overlooked in an account like mine, since they do not result in belief. Still, this elaboration of the kinds of risk inherent in decision-making in science adds a welcome complexity to our understanding of belief-acceptance more generally. My hesitation in believing that Federer lost his match is due to the disappointment I will experience if this turns out to be true. Because I care about the truth of p, I ought to be as mindful about overcautiously rejecting p as I am about hastily accepting it. And while a theory of justification is not strictly concerned with those beliefs that we reject (this matter might better be left for a virtue theoretic account of epistemic character), our interests set the appropriate bar for accepting evidence, and this, as Kukla rightly says, “is a matter of trading off risks rather than being lax or stringent about minimizing them” (49, 2014).
According to Kukla, the interest-relative theory of justification has more or less got this right. If S tells me that p I need to decide if her word is sufficient evidence that p, given my interest in p. But here is where things get messy, Kukla argues, because S, in turn, “will have necessarily drawn on her own interests in order to set her own evidence bar that p” (49, 2014). And this, says Kukla, triggers a vicious regress, because in deciding what to believe we have to be able to tolerate the epistemic risk we take in counting on other people and their interest-driven evidence bars, “reaching back effectively ad infinitum” (50, 2014). As Kukla sees it, this is a problem for our everyday informal knowledge, and it is particularly acute in contemporary science, which is what she calls “radically collaborative” (50, 2014).
Collaboration and Justification
In recent years, philosophers have taken note of the collaborative nature of contemporary science, which poses a challenge to the traditional idea of the modern knower as individualistic and self-reliant. As John Hardwig argues, contemporary science highlights our epistemic dependence on others (1991). As Hardwig explains, a good portion of scientific research is done by individuals in research teams, and it is not unusual for data to be gathered by many dozens of researchers working independently on different aspects of an experiment or theory (694-695, 1991).
Kukla offers a snapshot of this phenomenon from the biomedical research world in her “‘Author TBD’: Radical Collaboration in Contemporary Biomedical Research” (2012). She describes a 2011 issue of the New England Journal of Medicine that contains five original research articles, each authored by between 6 and 27 individuals, most written on behalf of larger research groups, with links to supplementary appendices listing hundreds more collaborators, and so on (847, 2012).
Hardwig is looking at the epistemic division of labour in contemporary science in order to explore the role of trust in knowledge. Kukla is interested in a different problem; her worry is not so much trust, but group accountability. As she puts it, “the epistemic division of labor in contemporary medical research is decentralized, complex, and deeply messy, and no one is accountable for the ways in which interests shape the research process” (857, 2012). This is a pressing concern in the case of contemporary science.
If research projects are radically decentralized, stretched out over hundreds of collaborators, then who, we might well wonder, is accountable for the study as a whole? And if labour is distributed among researchers who are basing their decisions on varying sets of interests, and thus evidential standards, then we should not be surprised to find varying methodological standards at play within the research results. Kukla makes this worry explicit in her “Accountability and Values in Radically Collaborative Research” (2014), with Winsberg and Huebner. In that paper they argue that “[t]his opaque fracturing of interests seems to undermine the possibility that anyone can vouch for there being, somewhere or somehow, a coherent justificatory story to be told about how the study was designed and implemented, or what process led to the reported results” (20, 2014).
This ‘opaque fracturing of interests’ nicely illustrates Kukla’s worry about a vicious regress in the case of theory acceptance in science, which is decentralized and hence distributed among dozens, if not hundreds, of collaborators. As Kukla says, “The values and interests that shape a research project may thus be buried so far down that they are impossible to retrieve” (856, 2012). But the question of how to account for the overall objectives of a radically decentralized research team is unlike the question posed by a theory of justification, which is centered on whether one’s own intellectual house is in order.
A theory of justification sets the standard for evaluating whether we have been suitably careful in forming our beliefs insofar as our overall aim is to have a true and consistent set of beliefs. Different theories of justification offer up different standards of evaluation. On an interest-relative theory of justification, that evaluation is based on whether an individual has sufficient evidence for her belief that p given her interest in p, that is, given the epistemic risk she takes in believing that p is true. Certainly, the evidence we draw upon to justify our beliefs often depends on the intellectual labour of others, on their competence and sincerity, as influenced by their particular interest-driven evidence bars. But there is no threat of a regress here. Indeed, the only real threat is relying on someone who turns out to be unreliable, and thus winding up with a false (if justified) belief.
Let’s return to beliefs based on testimony. If your word that p is sufficient evidence for my belief that p, given my interest in p, then my belief that p will be justified based on your say-so, whether or not you are a competent (or sincere) teller. If you are justificationally lax, if your standard of evidence with respect to p is lower than it ought to be, for whatever reason, then I might end up with a justified false belief that p. If, on the other hand, your standard of evidence with respect to p is high, if you are a competent (and sincere) teller, then I will likely end up with a justified true belief that p. Either way, so long as I have met the evidential standards set by my interest in p, then, according to an interest-relative theory of justification, my belief that p will be justified.
Take the Federer example. Suppose that I simply can’t wait to get to my computer to verify the potentially disastrous news about Federer, such that on my drive home from the grocery store I decide to call my partner who confirms that, indeed, Federer lost his French Open quarterfinal match earlier that day. My belief that p is now firmly entrenched, to my enormous disappointment. And it is also justified, given that the evidential standard set by my interest in p is now cleared.
Now, suppose that p is false. Suppose that the woman at the grocery store and my partner both got it wrong. Maybe the woman at the grocery store, who doesn’t herself care much for tennis, was just repeating something that she had heard someone else carelessly report. And maybe my partner, who thinks my devotion to Federer is excessive, decided to play a bad joke on me. Does the lack of sincerity and/or competence of either teller compromise my justified belief that p? Not at all, according to an interest-relative theory of justification. This is a bit of bad luck, to be sure, but bad luck does not undermine the epistemic labour that I have exerted in order to ensure that I have sufficient evidence in favour of p, given my interest in p, nor does it undermine the rationality that I display in believing that p, given my reasons for p. Thus, while it is true that in the case of testimony what we believe depends on the word of others and how truth-motivated they are, this poses no threat to the question of whether one’s own intellectual house is in order.
There is an analogy to be made here with Foley’s demon world hypothesis (1985), which I discuss in (2014). Imagine two individuals with identical mental content, but one exists in this world and one exists in a demon-world. In the demon-world the individual believes, remembers, and experiences just what he in this world believes, remembers, and experiences, but in the demon-world the evil demon has ensured that all of his beliefs are false (189-190, 1985).
Foley uses this thought experiment to illustrate the internalist (or anti-reliabilist) intuition that if two individuals have the same subjective experiences then to the extent that one of them is justified in his beliefs, so is the other, even in the case where one of the individuals’ beliefs are all false. After all, as Foley notes, being tricked by a demon does not make a person less rational; it just makes her unlucky (190, 1985). The same goes for beliefs based on unreliable testimony, according to the interest-relative theory of justification. So long as a hearer meets the standard of evidence with respect to p set by her interest in p, then her belief that p will be justified, regardless of the varying interests and evidence standards of others.
Clearing the Evidence Bar
These varying interests and evidence standards threaten to trigger a regress only when we move away from the normative requirements for justified belief to the accountability of the overall goals, theory-acceptance, and decision-making of a radically decentralized research group. Kukla is thus wrong to suggest that “[i]t’s not enough we set our own evidence bar in accordance with our interests” (50, 2014). That is precisely what we must do to meet the normative standards given by an interest-relative theory of justification. The problem of group accountability posed by the collaborative nature of contemporary science is simply not, contra Kukla, a problem when it comes to our everyday informal knowledge.
Douglas, Heather. “Inductive Risk and Values in Science.” Philosophy of Science 67, no. 4 (2000): 559-579.
Fantl, Jeremy and Matthew McGrath. “Evidence, Pragmatics, and Justification.” The Philosophical Review 111, no. 1 (2002): 67-94.
Foley, Richard. “What’s Wrong with Reliabilism?” The Monist 68 (1985): 188-202.
Freedman, Karyn L. “Testimony and Epistemic Risk: The Dependence Account.” Social Epistemology (2014): 1-19. doi: 10.1080/02691728.2014.884183.
Hardwig, John. “The Role of Trust in Knowledge.” Journal of Philosophy 88, no. 12 (1991): 693–708.
Kukla, Rebecca. “‘Author TBD’: Radical Collaboration in Contemporary Biomedical Research.” Philosophy of Science 79, no. 5 (2012): 845-858.
Kukla, Rebecca. “Commentary on Karyn Freedman, ‘Testimony and Epistemic Risk: The Dependence Account’.” Social Epistemology Review and Reply Collective 3, no. 11 (2014): 46-52.
Rudner, Richard. “The Scientist Qua Scientist Makes Value Judgments.” Philosophy of Science 20, no. 1 (1953): 1-6.
Winsberg, Eric, Bryce Huebner and Rebecca Kukla. “Accountability and Values in Radically Collaborative Research.” Studies in History and Philosophy of Science Part A 46 (2014): 16-23.