The Epistemic Challenge of Religious Disagreement: Responding to Matheson, John Pittard

I am grateful for Jonathan Matheson’s recent review (Matheson 2020) of my book, Disagreement, Deference, and Religious Commitment (Pittard 2019). Matheson’s excellent summary reflects a very careful reading, and his critical commentary offers important objections that deserve reflection and response. I briefly respond to those objections here, defending (and perhaps clarifying) my position and approach to the topic.


Article Citation:

Pittard, John. 2020. “The Epistemic Challenge of Religious Disagreement: Responding to Matheson.” Social Epistemology Review and Reply Collective 9 (9): 55-64.

🔹 The PDF of the article gives specific page numbers.

This article replies to:

❧ Matheson, Jonathan. 2020. “Debating the Significance of Disagreement: A Review of John Pittard’s Disagreement, Deference, and Religious Commitment.” Social Epistemology Review and Reply Collective 9 (7): 36–44.

The central question of Disagreement, Deference, and Religious Commitment (henceforth, DDRC) is whether someone who is aware of the nature and extent of religious disagreement can rationally maintain confident belief in some controversial religious outlook. (For my purposes, an explicitly irreligious view like secular atheism counts as a “religious outlook.”) Consider some subject, let’s call her Sarah, who rationally evaluates the evidence available to her that bears on religious matters and who correctly judges that, setting facts about religious disagreement aside, her evidence supports some controversial religious outlook R. Let’s suppose that Sarah is well-informed about other religious outlooks and the reasons offered in their favor. While Sarah judges that R has greater evidential support than the disjunction of the competing outlooks, she acknowledges that many who adopt contrary religious positions, her religious “disputants,” appear to be as thoughtful, intelligent, and informed on religious matters as she is. Sarah also knows that the number of qualified thinkers who reject R is at least as great as the number of similarly qualified thinkers who affirm R.

Could Sarah be rational in maintaining confident belief in R? Many philosophers say no. While facts about religious disagreement may not provide straightforward counterevidence to R, such facts may still “defeat” Sarah’s belief in R (i.e., render it unjustified) by bearing on the “higher-order” question of whether she has assessed the evidence pertaining to R in a reliable way. The nature of religious disagreement shows that even those who appear highly qualified to assess religious questions do not reliably arrive at true religious beliefs. In light of this fact, it would seem that Sarah cannot rationally remain confident unless she has good reason to think that she is more likely than her disputants to have formed correct religious views. But disagreement skeptics say that because Sarah’s disputants are just as qualified and numerous as those who agree with her, she does not have a good reason to think that her side of the dispute is more likely to be correct.

In opposition to this skeptical argument, one might argue that even if Sarah concedes that her disputants are as qualified and rational as herself, she still has more reason to trust whatever intuitions and intellectual instincts happen to be hers than the similar intuitions and instincts of others. On this view, it is rational to give more weight to one’s own “doxastic inclinations” even when one acknowledges that, from a third-person point of view, there is perfect rational symmetry. Alternatively, one might argue that Sarah can reasonably maintain healthy confidence in R if she insightfully sees that her reasoning has greater cogency and force than the reasoning of her disputants. On this view, Sarah’s justification to remain confident is not explained by some principle that says to privilege whatever inclinations and intuitions happened to be one’s own, but rather by the objective rational merits of her thinking on the disputed matter.

I argue in DDRC that to effectively block these sorts of responses, the proponent of disagreement-motivated religious skepticism must endorse a demanding epistemic impartiality constraint. This impartiality constraint says that rational responses to disagreement must avoid both agent partiality (giving more weight to seemings/inclinations just because they are one’s own) and reasons partiality (maintaining confidence on the basis of contested reasoning that is not accepted by one’s disputants).

I agree with disagreement skeptics that agent partiality is irrational. But I argue that reasons partiality can be appropriate, at least in cases when one has genuine insight into the greater rational merits of one’s own position.

How to Characterize the Worry Posed by Religious Disagreement

Matheson’s first set of criticisms (41–42) concerns my characterization of the skeptical challenge posed by religious disagreement. In Chapter 1 of DDRC, I present the “master argument” for disagreement-motivated religious skepticism that serves as the focus of much of the rest of the book. The master argument focuses on some subject S who is a religious believer and who is aware of the present nature of religious disagreement. According to the argument, S lacks justification for believing that her religious beliefs are the product of a reliable process; and without justification for believing this, S’s religious outlook is not justified. Matheson suggests that such an argument that is focused on process reliability does not get at the heart of the most serious worry posed by disagreement, which is a worry that is (i) about the correctness of S’s views rather than about the reliability of her belief-forming process and (ii) about propositional justification (which roughly has to do with the availability of justifying grounds for the belief) rather than doxastic justification alone (which has to do with whether S’s belief is actually justified given the way she holds the belief and the grounds on which her belief is actually based).

I should say at the outset that I agree with Matheson that there are perfectly good ways of formulating the skeptical challenge from religious disagreement that do not reference the reliability of belief-forming processes. Taking cues from Matheson, one could formulate the challenge as follows:

    (1) Given S’s knowledge of religious disagreement, S’s religious outlook is justified only if S has justification for believing that the best explanation of the religious disagreement is that her disputants (and not herself) have inaccurate religious views.

    (2) S does not have justification for believing that the best explanation for the religious disagreement is that her disputants (and not herself) have inaccurate religious views.

    (3) Therefore, S’s religious outlook is not justified.

To give a viable defense of (2), the disagreement skeptic would then (I suggest) have to argue that S has justification for a favorable explanation of the disagreement only if S has a good impartial reason to affirm such an explanation. On this framing, as on my framing, much of the debate would hinge on whether there is good motivation for such a demanding impartiality constraint.

I think Matheson is right that framing the challenge along these lines has certain advantages over the way I formulated the master argument.[1] But the primary reason I formulated the master argument the way I did is that it more closely resembles many of the actual arguments given by proponents of disagreement-motivated religious skepticism, arguments which frame the challenge as a disagreement-motivated worry about the reliability of one’s process of religious belief-formation (Hick 1997; Goldberg 2014; Kitcher 2014). If I took as my focus an argument that bore little resemblance to the arguments of these disagreement skeptics, one might question whether their arguments have strengths that are lacking in the argument taken as my focus.

The Reliability of Belief Processes

Even so, it would be a problem if, as Matheson alleges, the master argument failed to get at the heart of the skeptical challenge posed by religious disagreement. But I do not think the argument has this failing.

Let’s start with Matheson’s claim that the skeptical worry posed by religious disagreement is a worry about the correctness of one’s religious outlook rather than a worry about the reliability of one’s process of religious belief formation. Of course, evidence of unreliability is worrying precisely because unreliable processes are liable to produce incorrect beliefs. But as Matheson notes, reliable processes can also lead to false beliefs (since reliability does not entail infallibility) and in some cases disagreement may arise because someone arrives at a false belief by means of a broadly reliable process. For this reason, it may be a mistake to equate a worry about correctness with a worry about process reliability.

In response, note first that a token process of belief formation can instantiate many different types of processes at different levels of generality. Many cases where a person arrives at a false belief by means of some generic/broad process type that is reliable can also be accurately described as ones where someone arrives at a false belief by means of some more specific/narrow process type that is unreliable. Likewise, many disagreements where it is known that both parties employ a broad process type that is reliable are also disagreements where at least one party employs a narrow process type that is unreliable. Indeed, the restaurant check case (Christensen 2007) that Matheson refers to is one such case.

Suppose my friend and I reach slightly different answers when we do some math in our heads to determine how much each person in our group of seven should pay to cover some restaurant bill. Even if my friend and I employed the same generic process of mathematical calculation, the fact that we reached different conclusions reveals that there must be some difference in the narrow processes of reasoning by which we reached our answers. (For example, perhaps my procedure, unlike my friend’s, involved an instance of failing to “carry a one” while doing long addition.) The disagreement provides evidence that my narrow belief-forming process was unreliable and has for this reason produced an incorrect belief. So even in this example, the worry about correctness can also be framed as a worry about process reliability.

Perhaps there are some disagreements that cannot be accurately described as resulting from at least one person employing an unreliable process, even when we attend to the narrowest and most specific process types in play (though I think that such disagreements will be few and far between if we are not artificially restrictive in what variables and parameters we allow in the definition of a process type). But is such a possibility relevant in the real-world case of religious disagreement? It seems not. It is not plausible that for most religious believers, the wide and narrow processes used to arrive at religious beliefs are highly reliable but have simply “misfired” and produced an abundance of false religious beliefs. Quite clearly, a better explanation of religious disagreement is that a great many people have formed their religious beliefs by means of unreliable processes. The worry about correctness posed by religious disagreement is also a worry about reliability.

What about the second charge, that the master argument misconstrues the worry posed by disagreement as a worry about doxastic justification only, when in fact disagreement threatens propositional justification? On its face, the master argument does not take a stand on whether S’s lack of doxastic justification in believing R is due to a lack of propositional justification for believing this outlook. The fact that the argument is noncommittal on this front is an advantage, not a shortcoming. It is, in fact, controversial whether cases of epistemic defeat in the face of disagreement (or other higher-order evidence) are best explained in terms of a loss of propositional justification.

Van Wietmarschen (2013) and Smithies (2019, chap. 10) both endorse broadly “conciliatory” approaches to disagreement and higher-order evidence, but they argue that epistemic defeat in the face of worrying higher-order evidence is not to be explained by a loss of propositional justification. On Smithies’s account (2019, 327–30), negative but misleading higher-order evidence about my belief that p does not undermine propositional justification, but instead prevents me from basing my belief on the factors that continue to make the belief propositionally justified. Given this controversy, it would seem better to remain neutral (if at all possible) between accounts of higher-order defeat that see higher-order evidence as undermining propositional justification and those that do not.

Matheson may still worry, though, that the master argument merely attacks the specific process by which S forms or maintains belief in R rather than showing that, in light of religious disagreement, justified belief in R is not a possibility for S (as would be the case if S lacked propositional justification for believing R or was somehow unable to take advantage of whatever propositional justification is available). If this characterization of the argument was correct, this would be a problem. For example, if the master argument assumed that S’s religious beliefs were formed on the basis of unreflective trust in some alleged authority, and attacked S’s beliefs on that basis, then the argument would not do justice to the skeptical worry posed by disagreement. After all, disagreement seems to threaten any controversial religious belief, not only those beliefs that are formed in some defective or problematic way.

But the master argument is not narrowly focused on some particular process of religious belief formation. It focuses on a generic religious believer S, without making any stipulations about the specific process giving rise to her beliefs. Granted, one might worry whether the argument’s premises must be defended in a way that relies on certain assumptions about the specific manner in which S forms and maintains her religious beliefs. But the defense of the master argument that I claim is most promising does not make any such assumptions. All that is assumed is that S lacks a good impartial (and internally accessible) reason for believing that her process of religious belief-formation is significantly more reliable than the collective reliability of the processes that (otherwise) epistemically qualified people use to form religious beliefs. Since this seems to apply to informed religious believers however they may form their religious beliefs, the master argument does not aim to show only that some specific process fails to confer doxastic justification. Rather, it aims to show that justified religious belief is not available to those who are aware of religious disagreement.

Partisan Justification and Rational Insight

Matheson’s next set of objections (42–43) concerns my attempt to resist disagreement-motivated religious skepticism by appealing to a rationalist account of what I call “partisan justification.” Roughly, someone has partisan justification for their belief that p when they are justified in assigning p a higher credence than can be justified on purely impartial grounds (i.e., grounds that satisfy the strictures of both agent and reasons impartiality).

It’s important to note that partisan justification is intended as a notion that can apply to one’s credences before a disagreement as well as to one’s credences after a disagreement. This may seem perplexing: how can one’s credences for p count as either “partisan” or “impartial” before one even knows what other people think about p? The following schematic example might help. Suppose I do not yet know what anyone thinks about p. I now reflect on p and come to believe p on the basis of a cogent line of reasoning that demonstrates the truth of p. I also rationally think that, conditional on some epistemic peer taking himself (rightly or wrongly) to grasp the cogency of some line of reasoning that demonstrates whether p is true or false, the probability that this peer is correct about p is 0.9.

What should my credence for p be at this stage? Assigning p a credence of 0.9 is arguably the “impartial” starting point in this case since this is the credence I would assign to the opinion of some epistemic peer if, instead of reflecting on p for myself, I learned only that this epistemic peer took himself to grasp the cogency of some demonstration of the truth or falsity of p. But if I have partisan justification at this initial stage, then I can justifiably assign some higher credence value to p (0.95, say). On my rationalist view, this higher credence could be justified since genuine insight into the cogency of some line of reasoning can have greater rational weight than merely learning that some epistemic peer takes himself to have such insight.

In chapter 3 of DDRC, I argue that “strong conciliationism” (which says that we should nearly always respond to disagreements in a way that is epistemically impartial) is correct only if, even prior to learning of disagreement, we either never have partisan justification or have it only for some very narrow class of beliefs. But I further argue that we do sometimes enjoy partisan justification and that there is no principled reason to think that it is highly limited in scope. For this reason, strong conciliationism is dubious.

Matheson is happy to grant that we often have partisan justification for beliefs when we do not yet know whether those beliefs are contested. But he claims that this gives us no reason to think that partisan justification persists in the face of disagreement. On Matheson’s conciliatory view, whatever partisan justification one may have enjoyed prior to a disagreement is defeated in the face of disagreement with suitably qualified disputants, so that after learning of such a disagreement one’s credence must be fully impartial. As Matheson sees it, the question of whether we have pre-disagreement partisan justification has no bearing on the correctness of strong conciliationism, which is a view about what is rational after learning about a disagreement.

Matheson’s complaint here does not engage with my argument for why a viable conciliatory position is ultimately committed to the view that one’s pre-disagreement credences should exhibit impartiality of the sort described above. As I argue (building on White 2010), the strong conciliatory requirement that one have impartial post-disagreement credences is compatible with the requirements of Bayesian conditionalization only if one’s pre-disagreement credences are also impartial in the relevant way (DDRC, 98–101, 110–14). This means that the strong conciliationist must choose between the following: deny that we should (typically) respond to disagreements by conditionalizing on the evidence of the disagreement or maintain that partisan justification is (typically) not available even before we learn about a disagreement.

I argue that severe objections apply to a conciliatory position that opposes Bayesian conditionalization. Thus, the only viable option for the strong conciliationist is to hold that our pre-disagreement credences should be impartial, so that partisan justification is not available even before a disagreement. If that is right, then my argument that we often do enjoy partisan justification is ipso facto an argument against strong conciliationism.
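The structural point can be illustrated with a toy Bayesian calculation (my own sketch, not drawn from DDRC or White; the reliability figure and priors are hypothetical). Model learning of a disagreement as conditionalizing on the news that an epistemic peer of reliability r believes not-p. If strong conciliationism demands an even post-disagreement split between equally qualified peers, conditionalization can deliver that split only when the pre-disagreement credence was already the “impartial” value one would assign to a lone peer’s verdict:

```python
def credence_after_peer_disagreement(prior, peer_reliability=0.9):
    """Conditionalize on learning that a peer believes not-p.

    Toy model: P(peer says not-p | p) = 1 - r and
    P(peer says not-p | not-p) = r, where r is the peer's reliability.
    """
    r = peer_reliability
    numerator = prior * (1 - r)
    return numerator / (numerator + (1 - prior) * r)

# An "impartial" prior of 0.9 (the weight one would give a lone peer's
# verdict) conditionalizes to the even split strong conciliationism demands:
print(credence_after_peer_disagreement(0.90))   # -> 0.5

# A partisan prior of 0.95 cannot reach the even split by
# conditionalization alone; it lands well above 0.5:
print(credence_after_peer_disagreement(0.95))   # -> ~0.679
```

On these toy numbers, the only way to combine the impartial 0.5 verdict after disagreement with Bayesian conditionalization is to have held the impartial credence beforehand, which is the dilemma posed for the strong conciliationist above.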

It’s important to note that credences that are justified prior to learning of a disagreement are not always justified on partisan grounds. In some cases, the factors that help to justify my pre-disagreement confidence may all be “impartial” factors. For example, when I confidently judge that more than an hour has passed in some conversation, my confidence is grounded in the fact that it seems to someone who is typically good at judging elapsed time (namely, myself) that over an hour has passed. I do not assign any special weight to the fact that it seems to me that over an hour has passed; I trust my seeming only because of my track record and would equally trust the similar seeming of someone with a similar track record. Because my justifying grounds are all impartial in this sort of case, I have no reason to retain significant confidence in my view when I learn that my conversation partner, whom I have just as much reason to trust, judges that significantly less than an hour has elapsed. In cases where there is no pre-disagreement partisan justification, the strong conciliationist is arguably correct that impartial post-disagreement credences are required.[2]

When Do We Have Partisan Justification?

A key question, then, is the question of when we do and do not have partisan justification. In DDRC, I argue for an exclusively rationalist account of partisan justification which roughly says that one has partisan justification only when one has genuine rational insight into the truth or greater plausibility of one’s view. If this position is right, then prescriptions favored by strong conciliationism should generally be correct in cases where such insight is lacking and generally incorrect in cases where such insight is present. But Matheson suggests that the divide between disagreement cases where impartial conciliation is clearly required and those where it is not fails to align with the divide between disagreements where one’s view is supported by rational insight and those where it is not. To support this point, Matheson suggests that the restaurant case, where impartial conciliation is clearly appropriate, could also be a case where one of the parties to the dispute has genuine insight.

The restaurant case is instructive, but it supports my view rather than challenging it. Suppose the post-tax total for seven people is $250.95 and that I correctly calculate in my head that, after adding 20% for tip, dividing by seven, and rounding to the nearest dollar, each person’s share is $43. When I reach this answer, do I have insight into its correctness? Can I insightfully “see” that $43 is indeed each person’s share? Hardly! I likely do not even remember all of the small calculations that informed my answer, and I certainly cannot survey all those steps at once in order to appreciate the correctness of that reasoning in its entirety.

Next, when I learn that my friend arrived at an answer of $45, do I have insight into the greater plausibility of my answer than my friend’s? Probably not, given how close the answers are. So this is not a case where I have insight into the truth or greater plausibility of my view. But suppose we consider versions of the case where there is a greater divergence in the answers reached by me and my friend. For example, suppose my friend says that each person’s share is $73, or $103, or $205. As the spread between my answer and my friend’s answer increases, the suggestion that I might have insight into the greater plausibility of my view becomes more credible. But in such cases, it is also plausible that I should continue to have more confidence in my answer than in my friend’s, even if we stipulate parity in track record, felt confidence, and so on.
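For readers who want to verify the arithmetic in the example, a minimal sketch (figures taken from the case above):

```python
bill = 250.95            # post-tax total for the group
with_tip = bill * 1.20   # add a 20% tip
share = with_tip / 7     # split evenly among seven diners
print(round(share))      # each person's share to the nearest dollar -> 43
```

The exact per-person figure is about $43.02, so $43 is indeed the correct rounded answer, and a friend’s figure of $45 is only off by a couple of dollars, which is why the close-spread version of the case leaves no room for insight into the greater plausibility of one answer.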

Matheson further objects that because we sometimes mistake merely apparent insight for genuine insight, insight cannot supply an internally accessible reason for continued confidence in the face of disagreement. In making this (natural and important) objection, Matheson seems to presuppose some principle like the following: the fact that I am in state ST is not internally accessible to me if I know that, when I am not in state ST, I often mistakenly think I am in ST. Part of my argument against this sort of principle in DDRC makes use of the following example (DDRC, 130–2).[3] Veronica often dreams she is playing soccer, and occasionally she actually plays soccer during her waking hours. She knows that two thirds of the times when she thinks she is awake and playing soccer, she is in fact asleep. Her soccer dreams are not especially realistic, but when she asks herself in her soccer dreams whether she is awake, she always answers in the affirmative, mistakenly thinking that her experience exhibits the normal coherence and vividness.

Now, when Veronica is actually awake and playing soccer, can she justifiably be confident that she is awake? Surely she can. Moreover, there is also reason to think that there are internally accessible factors (like the vividness and coherence of her experience) that justify this confidence. For if we changed the case and imagined that Veronica’s dreams are internally just like her waking states, then it seems that she would no longer be justified in thinking that she is awake when she is actually playing soccer. If this diagnosis is right, then there are internally accessible factors that justify Veronica in thinking that she is awake even though, when she is not awake, she often mistakenly thinks that she is aware of such factors. I suggest that the case of rational insight is similar: while it is true that I sometimes mistakenly think that I have insight into the cogency of some bit of reasoning, this need not rule out the internal accessibility of the cogency of my reasoning on those occasions when my thinking is genuinely insightful.

Rational Action and Normative Uncertainty

In part II of DDRC, I consider the implications for religious belief and religious commitment if we accept the sort of strong conciliationism that I oppose. Assuming that the strong conciliationist must reduce confidence in her favored religious outlook, can she nonetheless make rational decisions about how to live in light of her religious uncertainty? I argue that strong conciliationism requires a thoroughgoing kind of normative uncertainty that prevents such rational decision-making. The impediment to rational action does not stem from the mere fact that there is first-order uncertainty about what one should do, but from the fact that there is also second-order uncertainty about how to rationally respond to this first-order uncertainty, third-order uncertainty about how to rationally respond to second-order uncertainty, and so on. The argument for why this iterated normative uncertainty undermines rational action is somewhat involved and I will not try to summarize it here. But I will briefly address one concern raised by Matheson.

According to Matheson (43), “Pittard assumes a close parallel between epistemic reasons and practical reasons (that we must be aware of each to possess them and that they each can be defeated by our being justified in believing that they do not exist). However, unlike epistemic reasons, pragmatic reasons appear to exist independent of our awareness of them, and they are not defeated by our uncertainty of their existence.” Appealing to the example of Pascal’s wager (which is a pragmatic argument for theistic belief), Matheson then makes a somewhat stronger claim when he writes that “a justified suspension of judgment about the successfulness of the wager would not prevent us from having those reasons to believe (if Pascal is indeed correct).”

As I understand Matheson, he is taking issue here with a thesis I call the “endorsement requirement,” which says that “rational actions must cohere in some way with the [doxastic] attitudes that one has justification to hold on normative matters” (DDRC, 300).

Oversimplifying somewhat, the endorsement requirement can be understood as saying that a fully rational agent must believe, or have justification to believe, that his actions are rational or appropriate.[4] While Matheson seems to be targeting the endorsement requirement, it is not entirely clear that the requirement is in tension with Matheson’s claims. After all, the endorsement requirement does not say anything about when a person counts as having a good reason to act.

Matheson’s claims challenge the endorsement requirement only if we assume that there are situations where it is true both that (i) S performs action A even though S neither endorses nor has justification to endorse this action and (ii) S is rational in so acting because (or at least partly because) S has a good reason to perform A. Is this a plausible position? I am happy to grant that someone might act rationally without explicitly endorsing their action as rational. And I might even concede that one could act rationally without awareness of good reasons for one’s actions. What the endorsement requirement denies is that an agent can act rationally even though she lacks justification to believe that her action is rational or appropriate. Someone who affirms that there can be situations where (i) and (ii) are satisfied takes the opposite position, holding that the rationality of an action can come apart from the availability of grounds to justifiably endorse the action.

Just how far is Matheson prepared to go in divorcing the rationality of a person’s action from the question of which actions he or she has justification to endorse? Would Matheson hold that I could rationally perform action A even though I am rationally required to believe (and in fact do believe) that doing A is entirely irrational and inappropriate in my situation? A view that affirms the rationality of this kind of self-indictment is dubious. And as I attempt to argue in the final chapter of DDRC, once we posit an inter-level coherence requirement that rules out the rationality of such self-indictment, there is good reason to think that the stronger endorsement requirement is also correct. If that is right, then any context where normative disagreement undermines our justification to endorse our own actions is also a context where we cannot act rationally.

References
Christensen, David. 2007. “Epistemology of Disagreement: The Good News.” Philosophical Review 116 (2): 187–217.

Goldberg, Sanford. 2014. “Does Externalist Epistemology Rationalize Religious Commitment?” In Religious Faith and Intellectual Virtue, edited by Timothy O’Connor and Laura Frances Callahan, 279–98. Oxford: Oxford University Press.

Hick, John. 1997. “The Epistemological Challenge of Religious Pluralism.” Faith and Philosophy 14 (3): 277–86.

Kitcher, Philip. 2014. Life After Faith: The Case for Secular Humanism. New Haven: Yale University Press.

Matheson, Jonathan. 2020. “Debating the Significance of Disagreement: A Review of John Pittard’s Disagreement, Deference, and Religious Commitment.” Social Epistemology Review and Reply Collective 9 (7): 36–44.

Pittard, John. 2019. Disagreement, Deference, and Religious Commitment. New York: Oxford University Press.

Smithies, Declan. 2019. The Epistemic Role of Consciousness. New York: Oxford University Press.

White, Roger. 2010. “You Just Believe That Because….” Philosophical Perspectives 24 (1): 573–615.

Wietmarschen, Han van. 2013. “Peer Disagreement, Evidence, and Well-Groundedness.” Philosophical Review 122 (3): 395–425.

[1] In particular, an alternative framing like the one just sketched does not immediately face the “generality problem” that confronts an argument that is concerned with whether one has justification to affirm the reliability of one’s belief-forming process.

[2] Contrary to what Matheson suggests, I do not exhibit inconsistency when I allow that proper functionalism may account for the pre-disagreement rationality of religious belief but cannot account for the continued rationality of religious belief in the face of disagreement. On my view, proper functionalist accounts establish at best a kind of impartial justification. But partisan justification is needed to mitigate the skeptical threat of peer disagreement.

[3] For a similar appeal to dreams, see (White 2010, 603–4).

[4] Some action might be “appropriate” not because it satisfies the requirements of ideal rationality, but because it is a reasonable action in light of one’s uncertainty about which actions do satisfy these requirements.
