The Market for Testimony: A Reply to Our Critics, Nicholas Tebben and John Waterman

Author Information: Nicholas Tebben, Towson University, ntebben1@jhu.edu and John Waterman, Colby College, john.waterman@colby.edu

Tebben, Nicholas and John Waterman. “The Market for Testimony: A Reply to Our Critics.” Social Epistemology Review and Reply Collective 4, no. 5 (2015): 43-51.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-268

Please refer to:

Graham, Peter, Zachary Bachman, Meredith McFadden, and Megan Stotts. “Epistemic Evaluations: Consequences, Costs, and Benefits.” Social Epistemology Review and Reply Collective 4, no. 4 (2015): 7-13.

In “Reverse Engineering Epistemic Evaluations,” Sinan Dogramaci develops a causal account of how communities coordinate epistemic procedures, which he calls ‘epistemic communism’. In “Epistemic Free Riders and Reasons to Trust Testimony” we argue that the theory faces a free-rider problem, and that the causal mechanism of influence at its heart tends to undermine rational self-trust.

In their excellent discussion of these articles,[1] Graham et al. argue that both of our worries are insufficiently motivated. Against our free-rider problem, they argue we do too little to show that epistemic evaluation is genuinely costly. They go on to argue that we do not show that an inferential procedure’s etiology matters for whether an agent should trust it. 

Our interest in, and concern with, epistemic communism is motivated by our commitment to an exchange-based theory of testimony. Rather than simply listing our replies to Graham et al., we would like to take this opportunity to provide a very brief sketch of this theory, show how we can handle the concerns that Graham and his co-authors moot, and situate our response to them relative to the rest of our account.

Testimony and the Problems of Coordination

Each person’s epistemic reach is limited, both by perceptual limitations and by ignorance. Standing in the hallway we don’t know what is going on in the lab; and even if we could see into the lab, getting knowledge from what we would see would require knowing something about, for example, chemical theory, and the apparatuses that the chemists are using in their experiment. We remedy our epistemic deficiencies by relying on others. We ask the chemists what they are doing, and what they have found.

This system works well because, in the normal case, people ask questions when they need information that others have, and they answer questions that are directed to them. Answering questions, however, is a costly business. At the very least it takes time; for complicated questions, it might require more than that. If we ask a chemist what is going on in the lab, answering our question requires taking time out from the business of the lab; if we ask a particular question about chemistry, answering the question may be even more involved.

One of the fundamental questions, then, that must be addressed in the epistemology of testimony is this: “why would people answer questions?” If others, in the typical case, answer questions that are put to them, if we each need information that we do not have, and if answering questions is costly, it would seem that the thing to do would be to ask questions of others, and ignore questions that are asked of you. This is one kind of free-riding problem (call it a “1st-level” free-riding problem), though not the one to which we drew attention in our “Epistemic Free Riders”, and not the one that Graham et al. addressed in their paper.[2]

Our answer is that there is a market for testimony, and that providing testimony gives one access to the other goods that are distributed through this market. In what sense is there a market for testimony? Information, like other goods, is subject to conditional exchanges. There are valuable things to which one has access, conditional on one’s provision of information. In a few cases this is relatively explicit: firms prefer to hire experienced workers not only because they already know what they are doing, but also because they can share this information with the rookies on the night shift. But in most cases it is not.

One reason that it is nice to have interesting friends is that they can tell you interesting things. There is value that comes with having a friend: companionship, social status, rides to the airport. And while there are, of course, many reasons to be friends with someone, insofar as the fact that someone is interesting is a reason to be friends with them, the provision of the goods of friendship is conditional on their provision of interesting information. This exchange is not explicit, nor even intentional: we would never cite the fact that he says interesting things as our reason for giving our friend Derek a ride to the airport, we would say, rather, something like “I like him”. Nevertheless, something valuable is being provided conditional on the provision of information. And that provides speakers with a reason not to be 1st-level free riders: a reason to answer questions, at least in the typical case, when posed to them.

Now, in order for there to be an equilibrium point at which communicative strategies (those in which one asks and answers questions) are widespread, speakers must also be fairly good at answering questions; if the answers that they give are useless, people will stop asking questions.[3] Typically, answers are useful when the speaker is competent in forming beliefs, and sincere in reporting them.[4] Dogramaci’s epistemic communism is an attempt to account for the competence of speakers.

His idea, in brief, is that if everyone uses the same belief-forming procedures, then everyone can be sure that the speakers who provide them with testimony are as reliable at forming true beliefs as they would be themselves, were they to occupy the others’ epistemic position.[5] (For example, if they were to have the same evidence and the same training.) We get this coordination because, he says, we each evaluate each other’s epistemic procedures, negatively when they do not match ours and positively when they do, and we react in accommodating ways to the evaluations of our fellows. Thus procedures that are given negative evaluations are less likely to be used in the future, and those given positive evaluations are more likely to be used. After some number of interactions, we end up using the same epistemic procedures.
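
The dynamic can be made vivid with a toy simulation. To be clear, this is our own illustration, not a model Dogramaci provides: the number of rival procedures, the population size, and the probability of accommodating a negative evaluation are all stipulated.

```python
import random
from collections import Counter

# A toy model of coordination through mutual evaluation (our illustration
# only; all parameters are stipulated). Each agent uses one of five rival
# procedures, represented by a label. When two agents with different
# procedures meet, each evaluates the other negatively, and an evaluated
# agent sometimes accommodates by switching to the evaluator's procedure.

random.seed(0)
N_AGENTS, ROUNDS, SWITCH_PROB = 30, 5000, 0.1
procedures = [random.randrange(5) for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)
    if procedures[a] != procedures[b]:
        # Mutual negative evaluation: each party may accommodate the other.
        if random.random() < SWITCH_PROB:
            procedures[a] = procedures[b]
        elif random.random() < SWITCH_PROB:
            procedures[b] = procedures[a]

# After enough interactions the community typically settles on a single
# procedure, which is the uniformity that epistemic communism envisions.
print(Counter(procedures))
```

Runs of this sort typically end with a single procedure dominating the population, which is just the shared-procedure equilibrium that Dogramaci’s account envisions.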

In “Epistemic Free Riders” we claimed that this system is vulnerable to a free-riding problem, a 2nd-level free-riding problem, if you will. The problem is that it is costly to evaluate others’ epistemic procedures, and that the benefits of living in a community with coordinated epistemic procedures accrue to anyone in that community, whether they evaluate others or not. So it appears that free riding—not evaluating others—is a dominant strategy, which we should expect to overwhelm any other strategy played in the community. Our solution was to point to the benefits of evaluating others: in particular, it increases the amount that others will “pay” for your testimony, and, if you establish a reputation as someone who will evaluate others, it increases your access to testimony.
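
The structure here is that of a public-goods game, and a minimal payoff sketch (with numbers of our own choosing, drawn from neither paper) makes the dominance explicit: whenever an individual’s share of the coordination benefit produced by their own evaluation is smaller than the private cost of evaluating, refraining from evaluation yields a strictly higher payoff no matter what the rest of the community does.

```python
# A public-goods sketch of the 2nd-level free-rider problem (our own toy
# numbers). Each act of evaluation adds a benefit g to a pool shared equally
# by all N community members, but costs the evaluator c privately. Free
# riding strictly dominates whenever g / N < c.

def payoff(evaluates: bool, n_other_evaluators: int,
           N: int = 50, g: float = 1.0, c: float = 0.2) -> float:
    k = n_other_evaluators + (1 if evaluates else 0)  # total evaluators
    return (g * k) / N - (c if evaluates else 0.0)

# Whatever the others do, the free rider comes out ahead by c - g/N = 0.18:
for others in (0, 10, 49):
    print(others, round(payoff(True, others), 2), round(payoff(False, others), 2))
```

Our reply, in effect, is that this payoff table is incomplete: evaluating others also brings private benefits, namely a higher “price” for one’s testimony and better access to the market, and these can more than offset the cost of evaluation.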

Graham et al. object to both points. To begin with, they say that we have not done enough to show that evaluation is costly. If it is not, then epistemic communism is a viable strategy for solving problems concerning speaker reliability. They also claim that how costly evaluation is remains an empirical question. We agree, and we believe it is an empirical question with a clear answer.

We suggest that if you are reading this, then you are most likely on a university campus, and that looking out a window will demonstrate exactly how costly epistemic evaluation—an education—can be. But even those evaluations that are relatively inexpensive have non-trivial costs. Graham et al. (2015) say:

[S]ometimes [evaluating someone] may be a piece of cake. If you tell me that Zargon created the universe out of a pizza in less than a day, I reject your belief out of hand. You may even know yourself that I would, and so you don’t even need to say anything; my evaluation might not require any evaluating at all (9).

It is important to remember that it is not beliefs that are evaluated on Dogramaci’s picture, it is the epistemic procedures that produced the beliefs. The belief that Zargon created the universe out of a pizza in less than a day is, indeed, easy to reject out of hand. It is not immediately clear, however, that the epistemic procedure that produced this belief should be rejected.

When formed rationally, one’s beliefs are a joint product of two forces: the information from which one works, and the epistemic procedure that one follows. Given sufficiently bad information, a perfectly good epistemic procedure may lead to very bad beliefs. The belief about Zargon, for example, may have been arrived at inductively, from the premise that Zargon created one universe out of a pizza, and a second, and a third, and so he probably created this one out of a pizza as well. The problem here is not that induction is irrational; it is rather that the person using it is misinformed.

In general, the truth or falsity of a belief does not establish whether the procedures that produced it should be evaluated positively or negatively. Even obviously true beliefs can, with a little luck, be the product of bad epistemic procedures operating on mistaken information. The difficult part in evaluating someone is figuring out why they believe what they do—what combination of beliefs and inferential practices led them to it—and only if that can be done can one productively evaluate them. Indeed, this is part of what makes teaching difficult: it is easy, but of little use, to simply tell a student that they have the wrong answer to a question; teaching them well requires taking the time and effort to figure out why they got the wrong answer, and then correcting whatever led them astray. And anecdotal evidence that evaluation is costly abounds: this paper would have been finished two weeks ago, had we not spent the intervening time helping our students write final papers, which consists, in large measure, in figuring out how they are thinking about their assignments, so that they can be corrected. Beliefs as bad as those about Zargon will typically be rare, but as belief reports become more plausible, the effort involved in vetting them, and thus the costs associated with evaluation, will increase.

Now, in his paper Dogramaci considers only evaluative behavior that amounts to nothing more than calling someone ‘irrational’ (or the like). We have suggested that even this kind of behavior is costly: as a method of correction, it leaves the person evaluated with the difficult problem of reconstructing what the correct epistemic procedures are. There are many ways to be wrong, and only one way to be right; knowing that you are wrong does little to set you on the path to the correct procedure. If this is a way of sharing epistemic procedures, it is a mean one.

So, how else do we evaluate the credentials of the testimony that we are offered? We challenge the agents who offer it, and they reply. Their ability to justify their claims, to give reasons, to provide evidence, and to explain where and how they came to know what they purport to know is how we learn what rules gave rise to their belief reports.[6]

Challenges are a means of vetting information: they are how we establish whether or not to trust others and the epistemic procedures they employ. When we challenge others, we indirectly share epistemic procedures with them: the structure of our challenges instructs them in how to reason like us. On the model we envision, individuals exchange information. We tell someone else something valuable; if they worry about its veracity, they vet the information by challenging us to justify it. If we can demonstrate that the report was acquired using procedures they sanction, they accept it.

The interesting case is when we are unable to convince someone that the information was acquired via methods they sanction. They call us irrational. (And we say the same to them.) We both then face a decision: do we shift to a new inferential procedure, or do we continue with our present ones? We believe this is a complex and interesting question, but the short answer is that there is an incentive for all agents to shift inferential procedures when their procedures differ from those of the agents who have a reputation for having the most valuable information. The incentive for shifting inferential procedures is to maintain access to the market for information. If we develop a reputation as an outsider, no one will exchange information with us, and we will no longer receive the benefits of communication.

Our solution to the 2nd-level free riding problem is that evaluating others pays. There are two benefits to it. First, if a speaker can spread their favored epistemic procedure, the testimony that they can offer to others will reflect this popular procedure, and so they will be able to secure more value for the testimony that they can offer. Second, those with a reputation for being good at evaluating others will have others provide them with testimony, as correction from such individuals will increase the exchange value of the testimony that they can go on to offer. Graham et al. object to this point as well. They doubt, first, that people pay for testimony in a way that would permit this solution to the problem, and, second, that testimony provided by those seeking correction is likely to be valuable.

The first point we addressed above. The “payment” involved in the exchange of testimony is rarely explicit and formalized, in the way that payment for, say, apples is. But it is still part of a conditional exchange, so there is still something valuable to be had by offering it. We have already seen two examples: those with interesting things to say are, all else being equal, more likely to have friends, and those who can train other employees are more likely to get hired. And examples are easy to multiply. If I am interested in movies and I know that you know a lot about them, you are the person I’m going to talk to about movies, and this is a benefit to you; conversations are two-way streets, and as we are talking I will end up giving you information that I have. So your “payment” for telling me about how they did the special effects in the latest blockbuster may be that I tell you that they are playing the movie in 3D at the local theater this weekend. None of this is explicit, intentional, or formalized, of course, but it is still an exchange. Conversations are economic transactions.

That we don’t typically notice our reciprocal expectations is just an indication of how completely testimonial exchange is woven into the fabric of our relationships. Empirical studies of communication show that most communication is gossip, i.e. the exchange of social information (Dunbar 2004). When we focus on the exchange of social information, it should be clear how carefully we guard our talk about others, and how we tend not to exchange information with those who have nothing to contribute to the water-cooler conversation. To be part of the epistemic community is to engage in the exchange of testimony.

Graham et al. also worry that those with expert reputations will, contrary to our proposal, suffer under the burden of unsolicited and unusable testimony. They contend that information produced by aberrant inferential procedures will be unusable.[7] We disagree, for two reasons. The first is that even false information can be useful. Say that you honestly report a belief that happens to be false. If I know how you reason, and what evidence you are reasoning from—which is what I need to figure out in order to epistemically evaluate you—I can use the fact that you said that x to infer that some other y is true. I do not accept what you say, but that you said it is an important piece of information.

Moreover, being presented with honest but false testimony allows evaluators to improve their position in social space. Knowing when others have false beliefs is power, for it presents opportunities to manipulate informational asymmetries. Your false belief is my chance to accrue a credit in my ledger.

Epistemic Self-Trust

Our second complaint with Dogramaci’s program is that it produces “epistemic alienation”; that is, that people will come to share an epistemic procedure, and will trust testimony that reflects that procedure, but that they will have no reason to do so. If epistemic communism implies that epistemic alienation is widespread, this would be a problem. Part of what makes epistemic communism an interesting and important proposal is that it promises to explain the rational authority of testimony. That is, it would explain why testimony can give us reasons for belief. If epistemic alienation is widespread, we have no such reasons.

It is a vital part of Dogramaci’s program that epistemic evaluations have a non-rational influence on epistemic agents. Calling someone irrational causes them to be less likely to continue using their epistemic procedures, but it does not give them a reason to modify their epistemic behavior. This is important because epistemic communism is supposed to provide some explanation of why we share even our most basic epistemic procedures, those that, because they are the measure of rational change, cannot themselves be changed rationally. Our complaint is that, since epistemic evaluations merely cause us to change our epistemic procedures, once we have, on this basis, changed our procedures, we will have no reason to use the new ones, and no reason to accept testimony that reflects these new procedures. Moreover, since the envisioned effect of a system of epistemic evaluations is the production of a community in which everyone alters their epistemic behavior so as to accommodate their fellows, it follows that everyone will suffer from epistemic alienation. And this is an unacceptable consequence.

Graham et al. accuse us of fallaciously inferring from (1) epistemic evaluations give us no reason to change our epistemic rules, to (2) once we have changed our rules on the basis of these evaluations, we have no reason to follow these rules (or to accept testimony that reflects them). They interpret Dogramaci as saying that the origins of one’s rules do not matter to questions about their reason-giving potential.

Epistemic alienation is an instance of a more general phenomenon. Notice, to begin with, that the origin of an action can determine whether or not the actor has a reason to perform it. Those with obsessive-compulsive disorder may repeatedly wash their hands, though they realize that they have no reason to do so and that their behavior is simply a manifestation of their illness. The burden of obsessive-compulsive disorder is not exhausted by the pointless action that it produces; it is also, in part, that one chooses to do something that one has no reason to do. Such individuals are alienated from their behavior, even as they engage in it.

The behaviors of the obsessive-compulsive and of those who suffer from epistemic alienation are not perfectly analogous, because those who are epistemically alienated endorse their behavior (and the testimony that they receive) in a way that the obsessive-compulsive does not. So consider the mechanism that Dogramaci identifies as enforcing epistemic uniformity. It is, he says, the desire for the esteem of one’s fellows.[8] When we meet with abuse, we become more likely to change our behavior, and when we meet with praise, to repeat it. (More generally, abused behavior tends not to be repeated and praised behavior to be repeated.) In short, Dogramaci’s suggestion is that our epistemic behavior is shaped by peer pressure.

Peer pressure is a paradigmatic example of a non-rational influence. That all of the other kids are smoking cigarettes does not give Timmy a reason to follow suit, though he will. This example is instructive because peer pressure leads Timmy not simply to smoke cigarettes, but to endorse smoking cigarettes. If you ask Timmy why he smokes cigarettes he is likely to say something like “it’s cool”, where this does not mean that all of the other kids do it, but is supposed to pick out the reason that all of the other kids do it. Nevertheless, it seems clear to us that Timmy does not have a reason to smoke cigarettes.[9] When he is a little older Tim may look back on his misspent youth and regret, not that he answered to reasons that he no longer endorses, but that he acted against the reasons that he had at the time. Retrospectively, smoking cigarettes, and his endorsement of the practice, may seem like an outside imposition, though one to which he willingly but foolishly acquiesced.

Now, Dogramaci does say: “I am, of course, taking it for granted here that, somehow or other, we each rationally believe, of our own actual rules, that they are reliable. If we couldn’t, we would be forced into radical skepticism.”[10] Dogramaci’s story is, broadly speaking, a game-theoretic one. There is some initial state, in which everyone uses their own epistemic procedures (likely, but not necessarily, varying by person). Dogramaci is asking us to consider our actual state as the one in which we already have reason to trust our rules, and then goes on to talk about how these rules can evolve. What is necessary to avoid sliding into skepticism is the assumption that in this state everyone has a reason to trust their epistemic procedures. If it can be shown—and we argued that it could in “Epistemic Free Riders and Reasons to Trust Testimony”—that reasons for trust transfer with changes due to epistemic evaluations, then the stronger assumption, that for any rules that anyone uses, they have a reason to use those rules, need not be made.

This is fortunate, for it is clearly false. Those who live in a totalitarian society may infer from “the Glorious Leader did x” to “x must be for the good of the people”, because if they did otherwise it might be reflected in their behavior and so incur the wrath of the secret police. And they may, on the same grounds, accept testimony that reflects beliefs produced through the same rule of inference, all the while knowing that the rule of inference is fallacious. Such individuals—and there surely are such people—suffer from epistemic alienation, at least in the sense that they have no epistemic reasons for accepting their epistemic procedures, nor the testimony that results from their use. So it simply cannot be true that everyone has reason to accept just any epistemic procedures that they actually use, nor that everyone thinks that their procedures are reliable.

Charitably interpreted, Dogramaci’s claim about our opinions regarding the reliability of our own epistemic procedures is not a claim about what everyone actually thinks about their epistemic procedures. Rather, it is the assumption that there is some initial position at which we regard our own procedures as reliable, because the (broadly speaking) game-theoretic model that Dogramaci develops requires that the participants begin from a position in which they are not alienated from their epistemic procedures.

There remains a problem about how this whole process gets started. The concern is this: however one acquires one’s first epistemic procedure it must be through some non-rational means, since, by hypothesis, one’s first epistemic procedure could not have been acquired rationally. Problems of this sort are, of course, familiar in philosophy, and there are familiar strategies for dealing with them.[11] We suspect, however, that this concern is really a red herring. The problem of epistemic alienation arises when one replaces an epistemic procedure through non-rational means. That there was a point at which anyone was without any epistemic procedures strikes us as unlikely.

Graham et al., speaking on behalf of Dogramaci, say: “You could be the product of evolution by natural selection, or a random accident. Either way, you are rational in believing your rules are reliable, and so you have reasons for the beliefs so formed.”[12] The implication here is that epistemic rules that have their origins in non-rational causes may be reason-giving. We are happy to admit that this is true. But it is not as though one acquires one’s first epistemic procedures from natural selection, as one acquires later procedures from one’s fellows. Prior to the operation of natural selection there is no one to do the acquiring. Rather, one’s natural epistemic inheritance is the background condition against which the rationality of one’s future epistemic procedures is measured. Without it, it would not make sense to ask if one’s acquisition of a new epistemic procedure does, or does not, produce epistemic alienation. From the fact that one acquired a new epistemic procedure through non-rational means, it follows, absent other information, that one has no reason to accept that procedure, and that one has no reason to accept testimony that reflects this procedure. Where Graham et al. are right is in thinking that from the fact that an epistemic rule has a non-rational origin, it does not follow that one has no reason for accepting it. But “being acquired in a non-rational way” is a special case of “having a non-rational origin”, and the features of this special case deprive the resultant epistemic procedures of their reason-giving force.

References

Dasgupta, Partha. “Trust as a Commodity.” In Trust: Making and Breaking Cooperative Relations, edited by Diego Gambetta, 49-72. Oxford: Blackwell, 1988.

Dawkins, Richard and John R. Krebs. “Animal Signals: Information or Manipulation?” In Behavioural Ecology: An Evolutionary Approach, 1st edition, edited by John R. Krebs and Nicholas B. Davies, 282-309. Oxford: Blackwell, 1978.

Dogramaci, Sinan. “Reverse Engineering Epistemic Evaluations.” Philosophy and Phenomenological Research 84, no. 3 (2012): 513-530.

Dunbar, Robin. “Gossip in Evolutionary Perspective.” Review of General Psychology 8, no. 2 (2004): 100-110.

Fricker, Miranda. “Scepticism and the Genealogy of Knowledge: Situating Epistemology in Time.” Philosophical Papers 37, no. 1 (2008): 27-50.

Graham, Peter, Zachary Bachman, Meredith McFadden, and Megan Stotts. “Epistemic Evaluations: Consequences, Costs, and Benefits.” Social Epistemology Review and Reply Collective 4, no. 4 (2015): 7-13.

Krebs, John R. and Richard Dawkins. “Animal Signals: Mind-Reading and Manipulation?” In Behavioural Ecology: An Evolutionary Approach, 2nd edition, edited by John R. Krebs and Nicholas B. Davies, 390-402. Oxford: Blackwell, 1984.

Mercier, Hugo and Dan Sperber. “Why Do Humans Reason? Arguments for an Argumentative Theory.” Behavioral and Brain Sciences 34, no. 2 (2011): 57-111.

Sellars, Wilfrid. “Language as Thought and as Communication.” Philosophy and Phenomenological Research 29, no. 4 (1969): 506-527.

Tebben, Nicholas and John Waterman. “Epistemic Free Riders and Reasons to Trust Testimony.” Social Epistemology (2014): 1-10. doi: 10.1080/02691728.2014.907835.

Tebben, Nicholas and John P. Waterman. “Counterfeit Assertions.” (in preparation).

Williams, Michael. “Responsibility and Reliability.” Philosophical Papers 37, no. 1 (2008): 1-26.

Wittgenstein, Ludwig. On Certainty. Edited by G.E.M. Anscombe and G.H. von Wright, translated by Denis Paul and G.E.M. Anscombe. New York: Harper & Row, 1969.

[1] Graham et al. 2015.

[2] In connection with the evolution of communication see Dawkins and Krebs 1978, Krebs and Dawkins 1984, and Mercier and Sperber 2011.

[3] Compare this point to Kant’s argument that maxims which permit lying are not permitted by the categorical imperative.

[4] There are atypical cases, however. A competent but consistently insincere speaker is as useful as a competent and consistently sincere one. (See Dasgupta 1988.)

[5] On the question of sincerity, see our “Counterfeit Assertions”, presently in preparation. (E-mail one of us if you would like a copy of the draft.)

[6] See, for instance, Fricker 2008 and Williams 2008 for epistemic models that employ this default-and-challenge structure of justification.

[7] They are also concerned about irrelevant information. This concern is less serious than the one about false information. Saying things that are irrelevant, given the context of conversation, violates a Gricean maxim. This is fine when it is done to produce a conversational implicature. But saying something irrelevant when one does not intend to produce a conversational implicature is a good way to get people to stop talking to you. Well-socialized adult speakers simply know better than to talk about French culture when their conversational partners are trying to figure out how to get from Phoenix to LA.

[8] See Dogramaci 2012, 8.

[9] Or, at least, peer pressure has not given him one. Just as Dogramaci allows that we do have reasons to hold, or change, our epistemic procedures, in addition to the influence of our fellows, so we should admit that Timmy might have reasons to smoke cigarettes, in addition to the influence of his fellows.

[10] Dogramaci 2012, 12.

[11] There is the “it emerges all-at-once” strategy, which Wittgenstein, for example, adopted to explain the origins of our beliefs. “When we first begin to believe anything,” he says, “what we believe is not a single proposition, it is a whole system of propositions. (Light dawns gradually over the whole.)” (Wittgenstein 1969, paragraph 141.) Another strategy is to say that there is a simple system out of which a more complex one emerges. For example, Sellars uses this strategy to explain how we come to be subject to norms of language use. (See Sellars 1969.)

[12] Graham et al. (2015), 13.


