Disagreeing and Getting to the Truth: A Reply to Sartwell, Jake Wojtowicz

Author Information: Jake Wojtowicz, King’s College London, jake.wojtowicz@kcl.ac.uk

Wojtowicz, Jake. “Disagreeing and Getting to the Truth: A Reply to Sartwell.” Social Epistemology Review and Reply Collective 4, no. 9 (2015): 46-50.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2im

Please refer to: Sartwell, Crispin. “Anti-Social Epistemology.” Social Epistemology Review and Reply Collective 4, no. 6 (2015): 62–75.


Image credit: Tom Spaulding via Flickr

I think it’s fair to say that Crispin Sartwell (2015) and I agree on something: it’s good to disagree. Why? Well, because disagreement gets us closer to the truth. So, in that spirit, I’d like to offer some disagreement with some of the things he says in his “Anti-Social Epistemology” (the page references in brackets refer to this paper).[1] What I want to dispute is just how disagreement is meant to get us closer to the truth. I don’t think that Sartwell’s Point Five Principle, the focus of his paper, works, but I think the motivations behind it are spot on.

Facts and Politics

Sartwell starts off by discussing the case of Jared Loughner, who shot Gabrielle Giffords; pundits on the left claimed that Sarah Palin’s rhetoric in effect caused the shooting, and pundits on the right claimed this was nonsense. Sartwell’s point is that when it comes to a factual matter—and it is purely a factual matter whether the behaviour of a firebrand on her soapbox actually caused a man to shoot a Congresswoman—one’s political views should be entirely irrelevant. As Sartwell points out, whether you and your political friends believe that we should have a free healthcare system, or that we have a duty to help out poorer foreign countries, is entirely irrelevant to purely factual matters like the Loughner case, whether a certain law will increase unemployment, or whether the figures suggest that the economy is growing or not.

Now, Sartwell qualifies what he’s considering. He’s considering purely factual matters, and wants to consider the social element of getting to the truth. So, he excludes people who believe what they believe because it helps them feel like they belong (70). If you want to belong to a group then believing as others do might help, even if it means you end up believing a few falsehoods. Sartwell also acknowledges that there are many groups to which a person belongs. He thinks that we should focus on the groups with which we “consciously identify”, and on “socially salient” groups, since these are more likely to have epistemological effects (as opposed to the group of those who live west of London or put their left sock on first) (72). So, when we’re after the truth, we need to look at the epistemological effect that the groups with which we identify have on our beliefs.

Sartwell thinks there’s a useful principle to apply to group beliefs. He calls this the Point Five Principle. He asks us to consider a proposition that has good evidence either way: the probability that it is true is .5, and the probability that it is false is .5. So, if we assume that the evidence for “Palin’s behaviour contributed to Loughner shooting Giffords”, which we’ll call p, is split 50/50 and the evidence is equally accessible to everyone, then if everyone is responding well to the evidence, half the group will believe p, and half will believe not-p. But if it turns out that all (or almost all) of the lefties believe p, and all the righties believe not-p, then something’s up. Sartwell says “we should infer that there’s at least a .5 probability, with regard to any person in either group, that that person believes what she believes because of factors other than the evidence, or that her belief is evidence-arbitrary” (64).
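To put Sartwell’s inference a little more explicitly (the formalization is mine, not his): if the evidence alone predicts that a fraction e = .5 of the group would believe p, but the fraction that actually believes it is b = 1, then a randomly chosen believer has only probability

\[ \frac{e}{b} = \frac{0.5}{1} = 0.5 \]

of being one of those who would have believed on the evidence alone, which is just Sartwell’s claim that, with probability at least .5, her belief is evidence-arbitrary.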

So the expectation is that, if they are just considering the evidence, half will believe p and half will believe not-p. If it turns out that they all believe p, then we can assume that half would have believed p anyway by responding to the evidence, and that the other half believe p not because of the evidence but because of what others in their group think. So, with regard to any member of the group, there’s only a .5 chance that they believe it because of the evidence. Sartwell then notes that if there’s only a .5 chance that someone believes one thing because of the evidence, then on another matter where the consensus all believe something, there’s again only a .5 chance that she believes it because of the evidence; so the chance that she believes both things because of the evidence is .25, and so on, until we get to tiny numbers. Long story short: if you believe what your group believes, then the chance that all of these beliefs are responses to the evidence is minuscule, and you’re doing an awful job of getting to the truth, since the best way of getting to the truth is by responding to the evidence.
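The compounding step can be made explicit too. On the assumption, implicit in Sartwell’s argument, that the matters are independent: if each consensus belief has only a .5 chance of being evidence-based, then across n such beliefs

\[ P(\text{all } n \text{ beliefs are evidence-based}) = (0.5)^n, \]

which is already about .03 by n = 5 and below .001 by n = 10.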

Sartwell goes on to nuance this, since the evidence is rarely split 50/50; but he says that if the evidence is split 70/30 then we should expect 70% of the leftists to believe p, and 30% to believe not-p (67). If it turns out that 86% believe it, then something like 16% won’t be responding to the evidence. Whatever the split in the evidence, we can find out what chance there is that the group members are responding to the evidence by comparing how many believe it with how the evidence splits. So everyone who goes with the consensus suffers a defeasible credibility deficit. At first glance, we should assume that someone who believes with their group on any 50/50 matter has only a .5 chance of being someone who is responding to the evidence. If we investigate and find out that he’s an epistemic saint, great; but he comes with a deficit and wears it on his sleeve, because he agrees with his group and that group doesn’t seem to respond to the evidence.
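The arithmetic in the 70/30 case generalizes straightforwardly (again, the notation is mine): where the evidence would lead a fraction e of responsive believers to accept p, and a fraction b > e of the group actually does, at least

\[ b - e = 0.86 - 0.70 = 0.16 \]

of the group, Sartwell’s 16%, must hold the belief for reasons other than the evidence.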

Thus Sartwell thinks we should listen to dissenters. They may suffer all sorts of flaws but, at first glance, if someone goes against the consensus she does not suffer from this credibility deficit. So, “the opposite of what most people like you believe is more likely to be true than what they do believe, because it is initially more plausible that it is based on evidence” (74).

Two Problems and the Point Five Principle

I have two problems for Sartwell. The first concerns just what we should believe based on the evidence; the second comes out of a slightly baffling result we get when we look at his view on dissenters.

It’s not clear to me that when there is a 50/50 split in the evidence, half of any group should end up believing one way and half the other. Now, Sartwell cashes this all out in terms of full belief, not degrees of belief or credences or the like, and I’m fine with that. But when the evidence is split .5/.5, the epistemically sensible thing to do seems to be simply to withhold belief. There’s nothing concerning the truth of the matter that should incline you one way or another, and so you should remain ambivalent. You shouldn’t believe p or not-p; rather, you just shouldn’t believe anything either way. So if all of a group believe something, it doesn’t seem to me that there’s a .5 chance that they aren’t responding to the evidence; rather, it’s closer to 1!

Consider something away from a 50/50 split. If the split is 55/45, or 65/35, we might still think that there’s not enough evidence to believe either way. Belief, in the way Sartwell and I cash it out, seems to be all or nothing, and these just aren’t good enough odds to go all or nothing on. But suppose we find a level of evidence that we think is acceptable for forming beliefs; perhaps 70/30 does it, or more likely 85/15. Say the evidence makes it 85% likely that the proposition is true. Then, given that the evidence suggests it is true, we should expect everyone to believe it. The evidence makes it very likely, so you shouldn’t withhold belief, nor should you believe not-p; you should believe p. So if one group believes it and another does not, then the group that believes it is doing much better. If, as is more likely, 95% of one group believes it, then it seems to me that the 5% of dissenters who believe not-p are doing badly, since the evidence is such that they should believe: they’re far worse at responding to the evidence than the rest of their group, and a 15% chance is not the level of evidence on which we should base a belief.
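To put the threshold thought a little more formally (the threshold t is just my label for “a level of evidence acceptable for forming beliefs”): suppose one should believe a proposition only if the evidence makes it at least t likely, with t somewhere around .85. Then, with the evidence at 85/15,

\[ P(p) = 0.85 \geq t \quad \text{but} \quad P(\neg p) = 0.15 < t, \]

so everyone who responds to the evidence should believe p, and the dissenters who believe not-p are backing a .15 chance.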

So it’s just not clear to me that the Point Five Principle, as it stands, can actually get going. When it comes to something like a genuine split of the evidence, anyone who believes either way is subject to a credibility deficit. If the evidence is much in favour of a proposition, then you simply should believe it. So if the evidence is weighted 85/15, then a group where everyone believes it is doing better than a group where 85% believe and 15% do not, since the evidence should compel belief in everyone: no one should believe something based on a .15 chance. Dissenters who believe against their group, then, might face a huge credibility deficit.

The last point I want to make is about dissenters, and it might serve as a suggestion for where Sartwell has things right and where he might have gone wrong. So, let’s ignore the above issues and go back to a 50/50 case, and the idea that someone who is part of your group but who believes differently from your group is more likely to be right. Well, that means that if almost all of the lefties believe p, and almost all of the righties believe not-p, then someone on the right should think that a dissenter who believes p is more likely to be correct, and someone on the left should also think that a dissenter who believes not-p is more likely to be correct.

But the Point Five Principle is a general principle, meant to apply to anyone who wants to address the truth of the matter. Now, we’re talking about the same fact. So the leftie has to think that not-p is more likely, and the leftie has to think that the rightie should think that p is more likely. Now, either the leftie can think “ah, the rightie has this evidence: someone in his group dissents” and so can use that as evidence, or he cannot. If the leftie cannot apply this reasoning then Sartwell’s Point Five Principle suffers from parochialism at this level. But if he can then he is in the same position as someone external to the groups. From an external point of view I have to think that the leftie dissenter’s belief is more likely to be true (say that the chance of not-p is .6) and that the rightie dissenter’s belief is more likely to be true (say that the chance of p is .6). But that means that the chance of p is .6 and the chance of not-p is .6—and that’s totally bizarre and clearly something has gone wrong.
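The problem can be put in a single line. The external observer is being asked to hold both of the following at once (the figures of .6 are, as above, just illustrative):

\[ P(p) = 0.6 \quad \text{and} \quad P(\neg p) = 0.6, \quad \text{so} \quad P(p) + P(\neg p) = 1.2 > 1, \]

which violates the basic requirement that the probabilities of a proposition and its negation sum to 1.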

So Sartwell can’t make the claim that a dissenter is more likely to be correct. My first line of criticism showed that there’s good reason to think that dissenters are not responding to the evidence: if the evidence inclines one way, they should go that way, and that is not what they have done; and when the evidence inclines neither way, anyone who believes, dissenter or not, has done so against the evidence. My second line showed that the claim that dissenters are more likely to be correct leads us, once we acknowledge that opposing groups will have opposing dissenters, into a bizarre situation where something is both more likely to be true and more likely to be false.

Agreement and Dissent

I don’t want to leave it there. From what I’ve said, it might seem like I’m a big fan of agreement and not all that keen on dissenters. But I think they do us a great epistemic service. Sartwell might not be right that dissenters are more likely to be right, but I think they serve a far more useful purpose: if we pay attention to them, we are more likely to be right.

As C. S. Peirce puts it, if we “see that any belief … is determined by any circumstance extraneous to the facts, [we] will from that moment not merely admit in words that that belief is doubtful, but will experience a real doubt of it, so that it ceases to be a belief.”[2] Instead, we want it to be such that our “beliefs may be determined by nothing human, but by some external permanency.”[3] When we believe something, we want to believe it because it is the case. Believing something because of how I am doesn’t do this. So, if I end up believing something because I’m a leftie, I do not believe it dependent on whether it is the case, but dependent on how I am. Dissenters, especially when they’re otherwise like us, help to set the alarm bells ringing.

Dissenters should let us see where we have gone wrong, and where we might believe something that is determined not by the permanency, but by how we are. Dissenters don’t let their group memberships overly influence their beliefs. This seems to be what Sartwell is getting at when he claims that consensus beliefs show that “people replace the world with each other” (75). People allow their own peculiarities, and the peculiarities of their groups, to determine their beliefs, when really it is the world that should play this role. That is why dissenters are useful. And that is why dissenters who are otherwise like you or me are even more useful. If I am a leftie and I see how a rightie dissents, it might show me that many righties believe as they do with regard to that matter because of how they are, not because of the facts. But if I see a leftie dissent, then it lets me see how I might be influenced by things other than the facts: influenced not by how the world is, but by how I am.

But it isn’t clear that the way Sartwell has spelled it out with the Point Five Principle lets us see just when the group might have gone wrong, and when we should pay dissenters attention. Perhaps I’ve gone wrong somewhere, or perhaps Sartwell has. All I can hope is that a bit of dissent can get us a bit closer to the truth.[4]

References

Peirce, Charles Sanders. “The Fixation of Belief (1877), with Additions from (1893).” In The Philosophy of Peirce, edited by Justus Buchler, 5–22. London: Routledge & Kegan Paul, 1956.

Sartwell, Crispin. “Anti-Social Epistemology.” Social Epistemology Review and Reply Collective 4, no. 6 (2015): 62–75.

[1] Sartwell, “Anti-Social Epistemology.”

[2] Peirce, “The Fixation of Belief (1877), with Additions from (1893),” 18.

[3] Ibid.

[4] Thanks are due to David Galloway, Clayton Littlejohn and Chris McMullan for reading a draft of this paper.





2 replies

  1. Regarding your first problem: Sartwell says “I will term a belief that is generated at least partly in response to an assessment of the evidence an evidence-sensitive belief”. The ‘at least partly’ is important here. You may be right that, in absolute terms, anyone who does anything but suspend belief on a 50/50 issue is strictly speaking irrational. But I could consider all the evidence very carefully, give just a bit too much weight to some bits of it because of ‘who I am’, and yet still have an evidence-sensitive belief in Sartwell’s terms.

    You, on the other hand, seem to characterise ‘responding to the evidence’ in terms of having *nothing but* the facts determine your belief. Aside from being a pretty unattainable target for most of us, that just doesn’t seem to be what CS has in mind. As such, it seems plausible to me that a bunch of evidence-sensitive people (in his terms) could split along political lines *despite* for the most part assessing the evidence pretty clearly.

    Maybe we avoid this outcome if we set the threshold for belief as high as you suggest, since then the evidence will point one way more dominantly, and it would require a greater bias to overcome that dominance.

    • Thanks, Ben.

      Clearly we don’t just have access to a bunch of facts out there; we need to actually respond to the evidence. We have to mediate things. And it’s there that how someone is might figure in an unproblematic way. But I’m not sure how I’d cash that out, though it would be important, I think, in forming beliefs about, say, colour.

      But if how someone is plays a role in the formation of their beliefs by penetrating to the level of skewing the evidence, then it seems they’re not forming beliefs in a totally evidence-sensitive way; it might be no great sin, and we all do it a lot of the time – but it isn’t purely evidence-sensitive, and so it would still be a mark against the claim that such a person is evidence-sensitive. At least, that’s how I see it.

      Even if we do allow some peculiarities in and still count this as being evidence-sensitive, I’m not sure it would much help when it comes to 50/50 cases. If the evidence just goes equally well both ways, then a little bit of a natural tilt one way still shouldn’t push things to a level such that one should form a belief that way. And if it does, I’m not sure it’s just a little tilt. So, you’re right: if we set the threshold higher, we seem to allow a little tilt one way or another, but in that case the evidence is doing a lot of the work, too.

      I hope that helps, though I’ve perhaps not expressed this as clearly as I’d like.
