Author Information:Crispin Sartwell, Dickinson College, firstname.lastname@example.org
Sartwell, Crispin. “Anti-Social Epistemology.” Social Epistemology Review and Reply Collective 4, no. 6 (2015): 62-75.
Please refer to:
- Sartwell, Crispin. “Fight With Your Friends About Politics.” The Atlantic. 12 August 2014.
Image credit: Raul Lieberwirth, via flickr
On Saturday, January 8, 2011, Jared Loughner shot Congresswoman Gabrielle Giffords and a number of other people in a grocery store parking lot in Tucson, Arizona. By the next morning and over the next several days many pundits and politicians on the left connected the shootings—often in a tone of complete certainty—to the angry rhetoric of the right, in particular to that of Sarah Palin. The pundits and politicians on the right, as one would expect, denied these claims vehemently.
In this situation, I will argue, and in any situation in which opinions about a question of fact split along partisan lines, the only rational response without an independent assessment of the evidence is to regard everyone on both sides of the debate as having little credibility. Now that was obvious in this case, in which all the commentators seemed to take a position on the factual question, in virtue of their group membership, before any of the relevant evidence emerged. But it would be true also in a situation in which there was very rich relevant information which was not dispositive, but in which opinions on the factual question broke down along partisan or many other sorts of social lines. To mention another example from partisan debates of the same period: as Giffords started her recovery, Republican members of Congress pushing for repeal of the Obama administration’s healthcare law all claimed in unison that the law would increase the deficit; all the Democrats held that it would decrease the deficit.
In the course of examining such cases, I will give something like an a priori argument that—quite surprisingly, I believe—gives us reason to suppose that people who disagree with the consensus of their own group—for example, their political party or portion of ideological spectrum—on any matter of fact, are likelier to be right than those who constitute the consensus.
One reason that the results of the argument that follows are surprising is that it might seem obvious that, in the absence of further information, a position that is a consensus among your own group is just as likely to be based on evidence as its negation. Indeed I suspect that most people adopt the working epistemic strategy that they ought to accord a consensus of the people around them or like them considerable weight, or roughly that a factual proposition most people believe is likelier to be true or likelier to be better justified than its negation. This may be an important source of social unity, but it is exactly the wrong way to figure out whom to believe and hence how to find the truth.
You might think that it would be a great good thing if we were all united in the same beliefs, if we could forge a real consensus and hence a real unity. Well, that might be desirable for many reasons, but it would be the end of the human quest for truth. The more we approach the condition of unity of belief, all things being equal (hereafter abbreviated as ‘ATBE’), the less likely it is that our beliefs are based on evidence.
Using ‘belief’ in the sense of the cognitive state of a particular person rather than the content of what is believed, I will term a belief that is generated at least partly in response to an assessment of the evidence an evidence-sensitive belief. We might also require for a belief to be evidence-sensitive that it would be revised or dropped if enough evidence were developed suggesting that it was false. And let us speak of beliefs that arise entirely independently of evidence, or entirely from causes other than an assessment of the evidence, as evidence-arbitrary beliefs. An evidence-arbitrary belief, like any belief, can be dropped or revised, though typically, one would say, not because the evidence stacks up against it. Myriad ambiguities arise with regard to these ideas, but rough notions are all I need for present purposes. Or so I hope.
I think it is a pretty good cognitive rule of thumb (if not a matter of definition) that evidence-sensitive beliefs are likelier to be true than evidence-arbitrary beliefs. Actual evidence for a claim must tend to show that the claim is true, ATBE; it must bear positively on an assessment of truth. (Indeed, I would construe ‘evidence for p’ as picking out all and only the material that tends to show that p is true.) This is why we value evidence, or demand it when the claim being made is important and doubtful: because we are trying to establish whether or not the claim is true.
Consider some assertion of fact, and stipulate that with regard to the available evidence it is as likely to be true as untrue, or that the evidence is equally split. It could be ‘The health care bill will add to the deficit’ or it could be ‘There is extra-terrestrial intelligent life’ or whatever you like. Now if people formed opinions on such matters based on the state of factual evidence, we would expect the people who agreed and people who disagreed to be relatively evenly distributed across some sorts of groups: those groups membership in which is random with regard to access to the evidence.
For example, education level would not be random as to access to evidence (it would not be evidence-random) if more highly-educated people had access to more or better evidence bearing on a particular issue. Ph.D.s in mathematics have more credibility on the question of whether some theorem has been proven, for example, than members of the group of non-Ph.D.s, because many of the former are in a position to assess proposed proofs, and many of the latter are not. That is, in relation to a question like that, the group ‘Ph.D.s in mathematics’ is not evidence-random. Members of a group whose members have experience relevant to a question have more credibility, ATBE, than non-members: ex-prisoners on prison conditions, for instance, or guitarists about how hard that solo is.
On the other hand, it would surprise us if a much higher percentage of green-eyed people than non-green-eyed people, or gardeners than non-gardeners, or people who lived within 1,237 yards of a major river than people who didn’t, or dog-owners than cat-owners, believed that Palin influenced Loughner or that there is extra-terrestrial intelligent life or that Fermat’s last theorem has been proven. Membership in such groups would seem to be random with respect to access to the evidence on these matters. It might turn out that people near rivers actually do have access to information that others lack. But there is no reason to think so now.
If it were true that most green-eyed people believed that Palin influenced Loughner and most non-green-eyed people did not, that wouldn’t show of any particular person in either group that her belief was evidence-arbitrary. But the differences between the groups cannot be accounted for by differential access to the evidence, because eye color is random with respect to access to the evidence. (Or seems so, as far as we can tell; let us suppose that as good an examination of the question as we can accomplish turns up no reasons to think that ‘green-eyed people’ is anything but an evidence-random group.) That your eyes are green doesn’t itself give you access to information on this matter that non-green-eyed people lack. Indeed this might and ought to make you wonder whether the contrasting distribution of opinions among non-green-eyed people is also responsive to factors that somehow have more to do with their eye-color than with the evidence.
One might speculate that there is some biological or genetic difference that correlates with eye color that explains the difference of opinions, for example, or that people with differently colored eyes are somehow treated differently socially in a way that at least partly causes them to believe as they do, or that the green-eyed people are disproportionately clustered in blue states or Ph.D programs in mathematics for whatever reason.
Now imagine a situation in which all the leftists think that there is extra-terrestrial intelligent life and all the rightists think there is not. We would find that odd. The groups ‘leftists’ and ‘rightists’ seem obviously random with respect to access to evidence for extra-terrestrial intelligent life. There’s nothing keeping the most hidebound reactionary Republican from seeing the aliens, if any.
For purposes of this thought experiment, let us suppose that a good assessment of the evidence for extra-terrestrial intelligent life makes the probabilities .5 either way, that the evidence is exquisitely balanced at 50/50. If everyone’s belief were evidence-sensitive, and since the political groups are arbitrary with respect to access to evidence, we’d expect a 50/50 split within each group among people forming an opinion. And so we can provisionally estimate, in terms of probabilities, before asking anyone anything about why they believe what they believe, the evidence-arbitrariness of the beliefs generated within each group by the distance from this 50/50 split. In this case, we should infer that there’s at least a .5 probability, with regard to any person in either group, that that person believes what she believes because of factors other than the evidence, or that her belief is evidence-arbitrary. This entails that with regard to any particular person you are talking to on either side of the issue (and, as always, ATBE) there is at least a 50% chance that that person is applying ways of deciding what to believe to which the evidence is irrelevant.
Let’s formulate this as a rule for assessing people’s credibility. I’ll call it the Point Five Principle:
All things being equal (ATBE), if the evidence is split, there is at most a .5 probability that a belief about a factual matter of someone who agrees with a consensus of any evidence-random group to which she belongs is evidence-sensitive.
If the initial probability that your position on a single factual issue is responsive to the evidence is at most .5, then that probability multiplies when the issues are multiplied. That is, if your belief that the healthcare bill will reduce the deficit correlates with that of your friends or your demographic segment or your political party, and so does your belief that Palin influenced Loughner, then provisionally we should infer that the chance that your belief that the healthcare bill will reduce the deficit and that Palin influenced Loughner is evidence-based is at most .25. Add, for example, the view that climate change will put New York City underwater by 2025, and you’re at .125 and so on. (I am supposing that the evidence is perfectly equivocal in all these cases. Obviously it is not, necessarily, but this assumption will be discharged in a moment.)
At the conjunction of ten shared beliefs, the probability that the conjunction of the beliefs is evidence-sensitive is less than .001. With regard to the next specific belief we are trying to assess, the probability remains .5, of course. But if we are assessing a person’s credibility in terms of whether that person’s belief-system is evidence-sensitive, or whether the evidence has anything to do with how that person forms beliefs, we very quickly get to the point at which that is vanishingly unlikely.
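The compounding of the Point Five Principle can be sketched in a few lines. This is a minimal illustration of the arithmetic described above, not an empirical model: it simply assumes, as the text does, that each agreement with the consensus of an evidence-random group carries at most a .5 probability of being evidence-sensitive, so that n such agreements multiply to at most 0.5 ** n.

```python
# Upper bound on the probability that a conjunction of n consensus-matching
# beliefs is evidence-sensitive, given a per-belief cap of 0.5.

def max_sensitivity(n_shared_beliefs: int, per_belief_cap: float = 0.5) -> float:
    """P(all n shared beliefs are evidence-sensitive) is at most cap ** n."""
    return per_belief_cap ** n_shared_beliefs

for n in (1, 2, 3, 10):
    print(n, max_sensitivity(n))
```

At ten shared beliefs the bound is 0.5 ** 10, a little under one in a thousand, which is why the credibility assessment becomes so destructive so quickly.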
Now let me acknowledge that this way of accumulating probabilities will be controversial. But very roughly—since in the real world this is far rougher than in the thought experiment—if we are in the realm of assessing a person’s credibility, every such agreement of the person’s belief with that of her evidence-arbitrary group must bring further into doubt the evidence-sensitivity of the techniques by which that person generates beliefs. If each such belief were responsive to evidence, every additional case would appear to be another coincidence, and after a number of such cases, the coincidence is stunning.
An important additional limitation on the Point Five Principle as it accumulates across a person’s belief-set is that we should say it does so only in the case of evidentially autonomous beliefs. People have webs of belief wherein the justifications are inter-dependent or otherwise networked. That the average temperature of the earth will rise by five degrees this century and that billions will be devastated may well be hooked together in a series of mutually dependent beliefs, perhaps with many others. But according to our real practices for assessing relevance, those beliefs are sealed off more or less entirely from the evidence for whether Palin influenced Loughner. Bring one body of beliefs to bear on the other and you are free associating or experimenting in the power of non sequitur.
A premise of the argument is that one’s political affiliation is random with respect to access to the evidence for the factual claim that Palin influenced Loughner or that the health care bill will add to the deficit. But that is obvious. The situation is precisely the same with political parties as it is with green-eyed people or gardeners. Such political commitments as ‘we ought to help the poor by government redistribution of wealth’ or ‘supply-side economics is true’ or even ‘political discourse should be more civil’ really do not bear on whether Palin influenced Loughner. If every other doctrine of American conservatism were true, that would provide not an iota of evidence that Palin did not influence Loughner, and if every other doctrine of American progressivism were true, that is no evidence that she did.
What counts on the question of whether Palin influenced Loughner would be items like this: Loughner had Palin’s crosshairs map tacked up on his wall, or habitually Tivoed Sarah Palin’s Alaska, or left a note saying “I did it because I was trying to do what Sarah Palin wanted me to do.” That illegal immigrants should have a path to citizenship, or that climate change is a crisis, is neither here nor there. What counts on healthcare is the best available statistical information. That the Second Amendment creates an individual right of gun ownership or that America is a Christian nation is entirely irrelevant.
The factual claim can only be rationally assessed independently of the whole of your own political position, like the claim that there is extra-terrestrial intelligent life; your political commitments simply do not bear. The people who think that an assessment of the truth of the claim that Palin influenced Loughner or that the healthcare bill will add to the deficit should be connected to their position on tax policy or immigration or gay rights are not only not using evidence-sensitive belief-generating standards; they are explicitly endorsing evidence-arbitrary standards, as is anyone who thinks that the consensus of those around him is usually a good guide for what to believe about the actual world. I take it that the latter claim has now been entirely exploded.
And though it is an important epistemic principle that we would like to have a coherent belief set, it is also important even within a particular web of linked beliefs to examine them as autonomously from one another as possible, insofar as it is possible to have a coherent set of beliefs that is false. One does not want to be lured into ever-greater mistakes. Even in such cases one might particularly want to press critically on beliefs that are shared as a network among members of your own groups.
If most people share many of the consensus beliefs about factual matters among their evidence-random groups, then most people have negligible credibility on factual matters. To the extent that the beliefs of a rightist or leftist, a kindergarten teacher or a philosophy professor correlate with those of people in their own evidence-random group on factual matters, they ought to be regarded as having little credibility, where we are not in a position to assess the evidence independently. Their views are extremely unlikely to have been generated by examination of the evidence, whatever they may claim.
Developing the Point Five Principle
The principle can be adjusted or elaborated for cases where the evidence stacks up one way or another. In proportion as it does, there is a better chance, ATBE, that the belief of a particular member of the group that reaches consensus on the belief that has the weight of the evidence is evidence-sensitive. Of course, one thing that is at stake in at least some of the arguments between such differing factions is precisely whether the evidence does stack up one way or the other. That is itself a factual claim on which the consensus of evidence-random groups has a curtailed credibility.
When the evidence is in fact overwhelming (a political case might be the claim that Obama is an American citizen), of course, the inference from group membership to evidence-arbitrariness does not go through with regard to the portion of the political spectrum that believes it. And obviously the members of the other group are overwhelmingly likely to have evidence-arbitrary beliefs and thus to be, to that extent, on average irrational. If the evidence is preponderant in one direction, then the provisional inference to evidence-sensitivity is preponderant in the same proportion with regard to the members of the group that takes the more probable position.
If the evidence were split 70/30, then we ought to expect that split within evidence-random groups. Here we really ought to depart from the mathematical style, which in any case is only a device for making the basic point clear. We would expect evidence-sensitive people to have access to different portions of the evidence or to weigh it in different ways. Such things really depend on what evidence is actually available to whom, how it is presented, and so on. But the style of inference would go through, even if it wouldn’t be as devastating for the group that has the preponderance of the evidence. Nevertheless, as more cases where you agree with your evidence-random group accumulate, the probabilities would get very small very quickly even in cases like this.
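The 70/30 case can be compared directly with the 50/50 case. The sketch below is illustrative only: it assumes, following the text, that each agreement with one’s evidence-random group caps the evidence-sensitivity estimate at .5 when the evidence is split evenly, and at .7 when the evidence runs 70/30 and the group sides with the better-supported position.

```python
# Compounded upper bound on evidence-sensitivity across n shared beliefs,
# for two different per-belief caps (even split vs. 70/30 split).

def compounded_cap(per_belief_cap: float, n: int) -> float:
    """Bound on P(n consensus-matching beliefs are all evidence-sensitive)."""
    return per_belief_cap ** n

for n in (5, 10, 20):
    even = compounded_cap(0.5, n)
    lopsided = compounded_cap(0.7, n)
    print(f"n={n}: 50/50 cap={even:.6f}, 70/30 cap={lopsided:.6f}")
```

Even with the evidence at 70/30 in the group’s favor, twenty shared beliefs drive the bound below one in a thousand: the accumulation is slower, but it still gets very small very quickly.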
Similar adjustments could be made to deal with non-unanimous groups, for example a case where 85% of the leftists think there is intelligent extra-terrestrial life and 78% of the rightists think not.
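One hypothetical way to make that adjustment precise (a formalization offered here for illustration, not drawn from the text): if the evidence is split 50/50, evidence-sensitive members of an evidence-random group should split roughly 50/50, so a consensus fraction f above .5 implies that at least 2 × (f − .5) of the group holds its belief for evidence-arbitrary reasons, even in the most charitable case where every evidence-arbitrary believer happens to side with the consensus.

```python
# Lower bound on the evidence-arbitrary fraction of an evidence-random
# group, given the fraction holding the consensus view and assuming the
# evidence itself is split 50/50. This formalization is an assumption
# of this sketch, not the author's own formula.

def min_arbitrary_fraction(consensus_fraction: float) -> float:
    """At least 2*(f - 0.5) of the group must be evidence-arbitrary."""
    return max(0.0, 2 * (consensus_fraction - 0.5))

# The 85% / 78% example from the text:
print(round(min_arbitrary_fraction(0.85), 2))  # leftists
print(round(min_arbitrary_fraction(0.78), 2))  # rightists
```

On this reckoning, at least 70% of the hypothetical leftists and 56% of the rightists would be holding their belief for evidence-arbitrary reasons, so the non-unanimous case weakens the principle only modestly.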
Even with such caveats, it follows from the Point Five Principle that the most partisan voices—the most consistent advocates of the consensus factual positions of their political or social group—ought to be regarded as the voices with the least actual credibility. Even without the argument, this is intuitively obvious. A spokesman for a political party, or a professional advocate of left-wing or right-wing causes, for example, has little credibility because one could predict from his group membership and role—that is, independently of the evidence—what he thinks about global warming, or Palin, or guns, or whatever it may be. His opinion—or at least his advocacy—is a social strategy rather than an attempt to say what the world is actually like. No factual claim made by a spokesman for a political faction which comports with a consensus of his own group should be taken seriously. The spokesman’s credibility is infinitesimal, with each claim he makes halving it to a lower infinitesimality.
So one good result of a universal awareness of the Point Five Principle is that we would all stop listening to such people on factual matters. That a factional spokesman—even or precisely a spokesman for one’s own faction—says it should cause no truth-seeker to think it’s any more likely to be true, or to give it any more credence. However, the Point Five Principle should also provisionally undermine the credibility of many of the ordinary beliefs of sincere people. Factual beliefs shared among members of income groups, or racial groups, or cliques, or neighborhoods, or professions (for example, the group ‘professors’), where membership in such groups provides no special access to evidence, should be regarded as incredible. One useful task for the principle would be to let it make you doubt that there is good evidence for those of your own beliefs on which there is a consensus among the evidence-random groups to which you belong. In general, people are almost unbelievably bad at doing that.
Now we could try to get in and measure each person’s credibility independently, to really try to see why they believe what they believe, and that might contradict the initial assessment that the belief is evidence-arbitrary. It is compatible with the Point Five Principle that some particular political spokesman believes what he believes because he is responsive to evidence. But if we don’t have the time or means to try to make such an independent assessment—as most of us do not on most such questions—it may be a key part of our epistemic lives to assess the credibility of various people, or to choose whom to regard as an authority, as someone who’s likelier to be right than wrong.
In the absence of an independent assessment of the evidence, the only sensible initial approach to a population—of pundits or political consultants, for example—in which all the leftists, more or less, think that the health care bill will add to the deficit, and all the rightists think not, is to believe that for each such person there is at least an even chance that her belief is responsive not to evidence but to factors irrelevant to the truth of the assertion. A few more conventional declarations, and it is entirely obvious that the commitment is not to the evidence. But there are many non-politicians in many arenas who more than deserve the same treatment. ATBE, you should regard as credible not the people whose positions crystallize the consensus of their groups, but those who question or negate that consensus.
This result will stand, I think, almost no matter what your views about truth are, short of the pitiful view that truth just is whatever people like you think it is. Indeed, let us consider a position that holds that such concepts as ‘evidence’ and ‘truth’ are social formations, or linguistic practices. But our own actual practices entail the Point Five Principle. So, for example, if a judge convicts someone of murder, and it turns out that the cause of his belief that the defendant was guilty was his own political position or the fact that he is a gardener and the defendant is not, we would hold that the judge has not done what he has sworn to do and has not decided the case on the evidence, which is what our legal practices demand.
A person who thinks that Obama is not an American citizen because he heard it on Rush Limbaugh, and he likes Rush’s position on healthcare, is failing in the same way. That this demand can be very hard to meet or that it is violated in very many actual cases does not mean that it is not actually entailed by our real practices with regard to concepts like ‘truth’ and ‘evidence.’ This is equally true of each of us in cases where we have a sincere desire to arrive at the truth, though of course on some matters or occasions that may not be the goal. The Point Five Principle follows from our actual practices for deciding what to believe about factual matters, the norms we actually do deploy right now in our very own culture for determining what to believe about matters of fact. People who believe because they are leftists that the healthcare bill will not add to the deficit are failing miserably by the standards of their very own culture.
Now many leftists claim that their procedures are more rational or evidence-based than the right’s, or perhaps that their political opinions are more informed. In fact, liberalism does correlate to some extent with education levels. But then the question is whether education levels themselves correlate with access to information that does bear on this particular matter. That would not be the case with Palin and Loughner, a case in which the facts, thin as they were, were presented in all media simultaneously to the whole population. In the case of healthcare, numeracy and other skills would be necessary for people who really wanted to try to examine the evidence directly or independently, so that the grouping might not be evidence-random.
But in order to show correlation between educational level and credibility, it would be important to show that the primary relevant effect of education with regard to such issues is real access to real evidence and not, for example, the re-enforcement of group affiliation. The socializing function of education compromises its claims to be concerned with truth. And educational level is potentially relevant only to those who really do go out and try to crunch the numbers, or who form a belief by an actual attempt to assess the evidence independently, rather than by deciding whom to believe. Very few people other than specialists actually do this on a question like that, and for very good reasons: it would be a time-consuming, laborious process, and many such factual questions arise constantly.
The technique actually being applied by the vast majority of people who form an opinion on such matters, whatever their education level, is to try to figure out who speaks credibly: who knows or is more likely to be speaking the truth. Listening to or believing what people say is an indispensable part of our epistemic lives, a central strategy for our conduct as believing creatures. If we ourselves have any commitment to believing what is true rather than not or to being ourselves credible in communication with others on a particular matter, then we need to assess people’s credibility sincerely, as best we can. What I am suggesting is that a very sensible first step would be to spitball the a priori probabilities, and that, surprising as it may sound, we certainly can, in many actual cases.
Credibility as Evidence-Sensitive
In general the term ‘credibility’ refers to the factors that lead people to trust someone epistemically: ordinarily, to regard the fact that that person endorses some proposition as a reason to believe it themselves. It is often a matter of such things as the way one dresses, one’s eminence in one’s line of work, one’s educational attainments, awards, publications. In some situations, the people with credibility are the white people or the men or the rich people, for example, and actual social dominance partly consists in credibility or entails it. But the notion of credibility I am working with here is a matter of evidence: the credibility of a given person in the sense I am using the term is the extent to which what that person believes is evidence-sensitive. Because evidence must be truth-relevant, the beliefs of a credible person in this sense are likelier to be true, ATBE, than those of a non-credible person. This can often be a matter of experience, and someone who has experienced poverty has more credibility on the question of what it’s like to be poor than someone who has not. A person who has been a teacher has more credibility on certain questions or aspects of education, and so on.
Let me note that people have other desires than to be rational, as well they should. It is redundant to condemn a person as irrational who declares that she has no reasons for her belief and wants none, or that her belief radically exceeds her evidence or contradicts it, but that she intends to go right on believing it anyway. This, I think, is a legitimate conscious irrationalism, and not really subject to logical or epistemological critique. And if one wants to believe what one’s friends or colleagues believe and one is not particularly interested in truth as it may exist externally to that context, or if in matters of factual belief one is more interested in group solidarity than having good reasons, I would not regard that as illegitimate. But then I would strongly suggest that one should stop claiming that one believes because of the evidence or that one’s position is more rational than that of one’s opponents, or that it should have the effect of convincing others. That is, it would help to have a clear sense of how and why one comes to believe, or to express that as reflectively and honestly as possible.
It might seem that the .5 hit that a consensus opinion takes at the outset shouldn’t be a matter of concern to those who find themselves under that cloud. Maybe all they need is .51, a tiny weight of evidence which might easily be established in particular cases to which the Point Five Principle applies. But in fact it is devastating. Anti-consensus opinions face no such a priori handicap: before looking into the context or even the actual evidence, you should assume that there’s at least a 50% chance that the opinion of any particular person who takes up the consensus opinion of his evidence-random group is not evidence-sensitive. The Point Five Principle is an extremely destructive credibility bomb. The person who speaks for the conventional factual views of an evidence-random group to which she belongs, even with regard to a single issue, starts out handicapped with regard to credibility, as though she were competing in the Olympics with one arm and one leg tied behind her back, or after having had a stroke and losing control of the left side of her body.
Now of course, actually dealing with an individual from either group, or reading her work, or finding out the actual way she wields evidence, you might revise these estimates and accord her much more credibility. If the examination comes out right, you should accord her more credibility, if your own beliefs are evidence-sensitive. But before or in the absence of such an examination, the only rational thing to do is cock an eyebrow.
The Social Destruction of Knowledge
Groups are rarely entirely unanimous. The correlation between people’s political positions and the belief that Palin influenced Loughner was not perfect. There were, one hopes, a few leftists who thought not, and a few rightists who thought so (as well as some on both sides, and many more on neither, who suspended judgment until more facts came in). In the absence of independent empirical evidence bearing on the matter, it is sensible to regard such outliers with regard to their own group as having more initial credibility than anyone else in the population. The outlying beliefs are not undermined by this argument, or indeed by any similar strategy that I can see, whereas the consensus position starts out undermined by at least half. Putting it mildly, this is a substantial initial advantage for anti-consensus positions, or more precisely for the credibility of the people who take them.
It does not of course follow from the fact that they deny the consensus position of their own group that the outliers are actually applying an evidence-sensitive procedure rather than hallucinating or expressing their resentment toward other people in their group, for example. But what you couldn’t do with the outliers is apply the Point Five Principle. That is, people who disagree with the factual consensus of their own evidence-random group do not face disqualification on these grounds, or on any similar a priori grounds that I can think of, though of course they might on others.
For example, it might be the case that most people who take anti-consensus positions do it out of social perversity, a kind of knee-jerk rejectionism. Beliefs generated by such a procedure seem entirely evidence-arbitrary. But the claim that that is how the outlying beliefs were in fact usually generated would have to be shown empirically, by actually going and finding out, and I imagine that many factors cause people to run counter to consensus (including sensitivity to the evidence!). That does nothing to bolster the credibility of people who take the consensus view.
However, even an almost sheer social perversity could be evidence-sensitive in the sort of case where the socially perverse believer intuits that the members of the group, whatever they may say, do not believe in virtue of evidence, but in virtue of the value they place on group membership and the fact that they use factual belief as a criterion of group membership. A person who disagrees with, or starts out skeptical of, the beliefs of his own group very likely thinks that the fact that all these people believe something is not itself a reason to believe it. In that sense, a slightly thoughtful sheer perversity is evidence-sensitive in a way that is not the case for the beliefs of someone who informally believes that the fact that the people around him believe something is a reason to believe it.
We should conclude from all this that the people with the most initial credibility on any assertion of fact are people who believe the reverse of what most of the people in their profession or political party, or even most of their friends and neighbors, believe, if those groups are evidence-random with regard to the question. The best first move, in a case where you have not formed an opinion or where the evidence is equivocal, is always to critically examine and provisionally regard as irrational an opinion that is a consensus among people like yourself. That people usually do precisely the opposite—that they seek out the consensus opinion in their group and adopt it—is a demonstration of our profound irrationality. Thinkers sometimes talk about ‘the social production of knowledge.’ They ought also, or rather first, to talk about the social destruction of knowledge and the anti-social production of knowledge.
Considering Group Membership
Let me acknowledge that the actual situation on the ground is much, much more complicated than the simplified situations I have been considering. One set of complications arises from the fact that each person has multiple group memberships. So for example, the leftist economist who asserts that the healthcare bill will not add to the deficit belongs to a group that is not evidence-random with regard to the question, and to one that is. In fact, since I have not really tried to restrict what counts as a group, everyone belongs to many groups or even infinitely many groups. So I belong to the philosophy professors, the residents of Pennsylvania, the people who live east of Wyoming, and the people who either live east of Wyoming or on Mars. With regard to what group should we assess the credibility of a given individual?
Now this is difficult to come to grips with in a purely theoretical way. But I think that, at a first stab, we should focus on socially salient group memberships and groups with which the agent consciously identifies. ‘Socially salient’ here means that when people in the social world of the agent sort people into groups, they commonly do so on the basis of those categories. Evidence for the social salience of the group might include the way organizations poll, for example. So when Gallup wants to know what people are thinking, they often give us information as to the opinions of people who fall into different political factions, races, genders, income levels, regions, ages, religions. Those are some of the socially salient groups, and with regard to such groups each person of course has multiple memberships. These are important, but I also think it is important and plausible to apply the Point Five Principle with regard to much smaller groups: cliques, knots of friends, families, professional colleagues.
I am certainly not denying that membership in such groups does give access to certain kinds of evidence on certain kinds of matters. I am not saying that people should ignore their own experiences and the collective group wisdom that reflects that experience. For example, I think there are many truths about American racism that are known by many black people and few white people, or even that there are facts about American racism that cannot be known by white people in that they lack the experiences that constitute the evidence for those views. There are things women know that men don’t, and perhaps vice versa.
Membership in different demographic cohorts can make people sensitive to very different sorts of evidence. That is why group membership is indeed epistemically important. But of course there are also matters on which your racial or gender identity obviously has no bearing. It is not likely that black people in general have evidence of extraterrestrial life or global warming or Palin’s influence which is not available to white people. On such matters, these group identities are irrelevant, and epistemic loyalty to the group is distorting.
The reason we might want to focus on the groups with which the agent does identify is that membership in such groups is likelier than membership in others to have epistemological effects on the agent. For example, it is likely that a Democrat wants to agree with most other Democrats and accords Democratic spokesmen more credibility than Republican spokesmen.
If there is a factual consensus with which the agent agrees within any socially salient, evidence-random group to which the agent belongs, the Point Five Principle applies. One reason for this is that even experts often distort or misinterpret evidence to support the consensus of some group with which they strongly identify. So let’s say, for example, that there is a consensus among wealthy economists that the most efficient means of distribution is free market capitalism, and a consensus among impoverished economists that a dictatorship of the proletariat is, where both groups accept the same concept of ‘efficiency,’ which of course they might not. (If they do not, then there is nothing they clearly disagree about when they disagree about whether a certain means of distribution is efficient.)
But then suppose that they have contrasting views about what the current distribution actually is (perhaps the wealthy economists represent the current distribution as more equal than do the impoverished economists). ATBE, the group ‘economists’ is not evidence-random with regard to the facts on the ground about the actual distribution, but income groups are. In such a case we must strongly suspect that the membership of these people in their evidence-random groups has distorted their views, and ought to suspect it just as strongly as we do with regard to people taking up the consensus of their political group with regard to whether Palin influenced Loughner.
A related problem is that if a progressive, for example, consistently takes up anti-consensus positions with regard to progressives, he has become a conservative. That is, in certain cases, and especially with regard to doctrines that are seen by members of the group as central to their group identity, you almost by definition cannot dissent and remain within the group. So if you see someone (a “conservative Democrat,” e.g.) consistently taking anti-consensus positions, at a certain point he is no longer a Democrat, as many actual cases (Joe Lieberman or Charlie Crist, for example) show. In this case he is no longer an outlier among the Democrats, but an endorser of the consensus of Republicans. However, recall that the Point Five Principle applies only to questions of fact. So, for example, it does not apply with regard to doctrines such as that government should redistribute wealth by progressive taxation, a normative claim. But if the Democratic party were to start using particular positions on factual matters as criteria for membership, then of course you should wonder with regard to each Democrat whether her assessment of the factual situation was evidence-sensitive or simply enforced upon her as a condition of membership, or taken up by her to underscore her membership.
At any rate, the ideal epistemic position of a passionately Democratic person would be this: she accepts most or even all the normative claims characteristic of her group. She thinks gay marriage should be legal, for example. She is strongly committed to economic justice. But this bears not at all on her factual beliefs, which will be random with respect to the consensus of the Democratic party. ATBE, she would be no more likely to agree with her group that Palin influenced Loughner than to believe she did not. That is really the only sort of Democrat to whom we should attribute any credibility on factual matters without actually examining how the beliefs were in fact generated.
Obviously I am not going to be able to defend here in any elaborate way the claim that there is indeed a fundamental difference between factual and normative claims, or elucidate the various relations between them. I do think there is a fundamental difference, and one way to try to show this is to point out that different sorts of evidence bear on factual claims than bear on normative ones. What I want to point out is that your commitment to social justice just does not bear on whether Palin influenced Loughner or whether the healthcare bill will add to the deficit.
The most credible people within a given evidence-random group that takes up moral or political issues, that is, are people who accept the normative orientation of the group, but who do not permit that to affect their beliefs on factual matters. So again, look for someone who believes in the redistribution of wealth and gay rights and internationalism but who thinks that Palin didn’t influence Loughner or that climate change isn’t going to be the big disaster all her friends think it will be. That does not establish that her beliefs are evidence-sensitive or that those of her friends aren’t. But merely in terms of an examination of the distribution of opinions within groups, such people are the only ones capable of establishing a provisional credibility. The most credible people on matters of fact, ATBE, are people whose positions on factual matters are entirely unpredictable from their membership in evidence-random groups, who are as likely to disagree as to agree. There seem to be precious few such people.
Consider two epistemic principles: (1) that a proposition on which there is a consensus among your group is, ATBE, likelier to be true, or likelier to be supported by the evidence, than one that is not; (2) that the negation of such a proposition, ATBE, is likelier to be true or to be supported by the evidence. It is plausible that most people believe (1), or at least adopt it as a working assumption. That is, (1) is itself the consensus position. Virtually any group of people aside from professional philosophers is evidence-random with regard to epistemic principles. That means that we should provisionally assess the probability that (1) is evidence-based as at most .5. (2) does not face this devastating handicap. So until someone produces some independent empirical evidence that consensus positions are likelier to be evidence-sensitive, ATBE, than anti-consensus positions, we should assume that (1) is false. The fact that (2) is itself an anti-consensus position (a claim which I have substantiated informally by asking people such as members of my family and my students) gives it much greater provisional plausibility. That is, the Point Five Principle discredits its opponent and hence substantiates itself. Since we have seen independent reason to regard that principle as plausible, we ought provisionally to accept (2) on that basis.
Take the Opposite View
Very imprecisely, the opposite of what most people like you believe is more likely to be true than what they do believe, because it is initially more plausible that it is based on evidence. This is a remarkable conclusion! Now let me state it more precisely. That a position on a factual claim, when the evidence for it is equivocal, is a consensus position among a particular evidence-random group, tends to show that, within that group, the belief is based on factors other than the evidence. ATBE, this dramatically undermines the credibility of all the people in that group who take the consensus position. To the extent that evidence-sensitive beliefs are more likely to be true than evidence-arbitrary beliefs, this bears indirectly on the truth of the proposition in question.
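The probabilistic shape of this conclusion can be made concrete with a toy simulation. (The model and all parameter values here are illustrative assumptions of mine, not part of the argument itself.) Suppose evidence-sensitive believers get a hard factual question right 70% of the time, while evidence-arbitrary believers simply follow, or, in the dissenters’ case, reverse, a group consensus that is itself uncorrelated with the truth. If the Point Five Principle caps the proportion of evidence-sensitive consensus-holders at one half, while dissenters are evidence-sensitive at some higher assumed rate, the dissenters come out right more often:

```python
import random

def simulate(trials=100_000, seed=0,
             p_correct_if_sensitive=0.7,   # assumed accuracy of an evidence-based belief
             p_sensitive_consensus=0.5,    # the Point Five Principle's ceiling
             p_sensitive_dissent=0.8):     # assumed (higher) rate for dissenters
    rng = random.Random(seed)
    right = {"consensus": 0, "dissent": 0}
    for _ in range(trials):
        truth = rng.random() < 0.5          # the fact of the matter: a coin flip
        group_line = rng.random() < 0.5     # the evidence-random group's consensus
        for role, p_sens in (("consensus", p_sensitive_consensus),
                             ("dissent", p_sensitive_dissent)):
            if rng.random() < p_sens:
                # evidence-sensitive: tracks the truth, imperfectly
                belief = truth if rng.random() < p_correct_if_sensitive else not truth
            else:
                # evidence-arbitrary: toe the group line, or knee-jerk reject it
                belief = group_line if role == "consensus" else not group_line
            right[role] += belief == truth
    return {k: v / trials for k, v in right.items()}

rates = simulate()
print(rates)  # dissenters score higher than consensus-holders
```

With these assumptions the consensus-holders are right about 60% of the time (0.5 × 0.7 + 0.5 × 0.5) and the dissenters about 66% (0.8 × 0.7 + 0.2 × 0.5); the gap tracks only the assumed difference in evidence-sensitivity, which is exactly what the distribution of opinion, on this argument, licenses us to presume.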
This argument should make you re-assess the credibility of a lot of people, including yourself. In particular, it ought to make you doubt the credibility on factual matters of anyone with conventional factual opinions among the groups to which they belong, including political parties or advocacy groups. On the other hand, this argument ought to give everyone a renewed respect for skeptics, dissenters, and, in particular, people whose positions make them traitors to their own parties or social groups.
The implicit epistemic theory of the dictator or king who surrounds himself with yes-men and flatterers is that if all the people around you are saying it, then it is likely to be true. And that is the same theory that the average person who is not a dictator seems to hold. The strategy helps all the people who seek and participate in a consensus of factual belief in an evidence-random group find the truth exactly as effectively as it helps the dictator. The informal epistemic constraints of social groups, as well as the tendency of people to subordinate themselves epistemically to such groups even when they are obviously evidence-random, are barriers to the credibility of all concerned, and to the possibility of members of our species reaching truth.
We might point out that to the extent a group is in consensus on a given question, its members tend to regard the question as settled, and hence to ignore all further evidence. That is, a group in consensus, even if it starts out being extremely evidence-sensitive, tends to atrophy into one that is evidence-arbitrary or even evidence-averse. Its members confuse their solidarity with the truth.
If I were to give an informal account of why people usually operate in precisely the opposite way than the Point Five Principle suggests, I might say this: people replace the world with each other. They engage in an infinite round of epistemic backslapping in which the agreement of the people around them bolsters their confidence in their own beliefs and their sense of their own sagacity. This might enhance their self-esteem, but it cannot in the long run be anything but a continual source of unanimous error. With regard to most subject-matters, and whatever you might think about the Point Five Principle, this procedure is utterly detached from any actual examination of evidence. It is our great epistemological handicap, the greatest barrier between us and truth aside from our sheer finitude.
If it is impossible to achieve social unity without a consensus of factual belief, then we are in a position of having to choose between unity and truth. But of course various forms of unity do not presuppose unanimity; one can love someone with whom one disagrees politically, for example. So I would make a plea for doxastically open groups, groups to which people can belong even if they disagree with consensus opinions and not be held to be monsters or idiots, or even to be obviously wrong. There could even be groups—there have been groups—in which dissidents are valued or are heard rather than ignored, ostracized, or executed. That’s a very good idea, because a dissident, ATBE, is more likely to be right. If we hope to find the truth about anything, we need social formations with much less epistemological peer pressure and much more tolerance for or even enthusiasm for anti-consensus thinking among the members than is now typical.