I’m honoured by Chris Ranalli’s (2020) thought-provoking response to my recent article, “The Epistemic Benefits of Worldview Disagreement” (2020a), which is an expansion of ideas found in my book, The Epistemic Benefits of Disagreement (2020b). I’m also grateful to the editors of SERRC for the opportunity to address Ranalli’s criticisms in the form of a rejoinder. Ranalli highlights numerous issues with my original article that are worthy of serious consideration. However, I will mostly focus on three of the more pressing worries that he raises. First, he worries about the pernicious effects that cognitive biases might have on worldview evaluation, and he challenges the normative role that I attribute to epistemic peerhood. Second, he suggests that there are problems for the claim that it’s necessary to evaluate one’s worldview upon discovering that an epistemic peer disagrees with it. Third and finally, he poses some worries about appealing to future epistemic benefits to justify remaining steadfast in the face of worldview disagreement. After briefly providing more details about the context of our discussion, I will address each of these concerns in turn. While I’m doubtful that I can satisfy Ranalli (and those who share his worries), I take this as an opportunity to better clarify my position.
Lougheed, Kirk. 2020. “Epistemic Elitism, Scepticism, and Diachronic Epistemic Reasons: A Rejoinder to Ranalli on Worldview Disagreement.” Social Epistemology Review and Reply Collective 9 (11): 44-52. https://wp.me/p1Bfg0-5wy.
🔹 The PDF of the article gives specific page numbers.
❧ Ranalli, Chris. (2020). “Rationally Maintaining a Worldview.” Social Epistemology Review and Reply Collective 9 (11): 1-14.
❦ Lougheed, Kirk. 2020. “The Epistemic Benefits of Worldview Disagreement.” Social Epistemology.
- The Epistemic Benefits of Worldview Disagreement
The epistemology of disagreement literature focuses on questions about how one should respond to discovering epistemic peer disagreement about a proposition that one believes. The literature often focuses on highly idealized cases of disagreement between two peers over just one isolated proposition, and as such the disagreement usually can’t be explained by the fact that the peers in question have different fundamental assumptions, i.e., different worldviews. In The Epistemic Benefits of Disagreement, I argued for a limited version of non-conciliationism (2020b): in cases of peer disagreement about a proposition in the context of inquiry, a researcher is sometimes justified in remaining steadfast if doing so will likely lead to epistemic benefits. However, in “The Epistemic Benefits of Worldview Disagreement,” the paper that Ranalli targets, I try to extend this idea to broader worldview disagreements. These are disagreements between peers about fundamental commitments. I think that when peers disagree over a specific proposition (at least about non-trivial matters) it can often be explained by the fact that they disagree about many related issues and, ultimately, by the fact that major parts of their worldviews are at odds with each other. What I tried to do is extend the argument I make in the book about future epistemic benefits to apply to (some cases of) worldview disagreement. Here’s the standardized version of the argument.
The Epistemic Benefits of Worldview Disagreement Argument
(1) If agent S encounters epistemic peer disagreement over proposition P and subsequently discovers that disagreement over P entails a disagreement over her worldview W (a set of propositions including P), then in order to rationally maintain W she should examine whether W is theoretically superior to the competing worldview.
(2) If S evaluates the theoretical virtues of W, then S will gain a better understanding of W, including being better informed about the truth of W.
(3) S discovers an epistemic peer who believes not-P.
(4) S subsequently discovers that the disagreement about whether P entails a disagreement between two competing worldviews W and W*.
(5) In order to rationally maintain W, she should examine whether W is theoretically superior to W*.
(6) S should evaluate the theoretical virtues of W.
(7) S will gain a better understanding of W, including being better informed about the truth value of W (see Lougheed 2020a, 6).
I will not spend time explicating my defense of this argument. I refer readers to the original article (Lougheed 2020a). Instead, it’s more important to be clear about how Ranalli interprets my argument. He claims that it commits me to one of the following principles:
Theoretical Superiority Examination: In order to rationally maintain W, she should examine whether W is theoretically superior to W*.
Theoretical Evaluation → Benefits Principle: If S evaluates the theoretical virtues of W, then S will gain a better understanding of W and become better informed about the truth of W (Ranalli 2020, 4).
Likewise, he also says I am committed to either:
Necessity of Theoretical Superiority: If S rationally maintains W, then S should rationally examine whether W is theoretically superior to the competing presented worldview W*.
Sufficiency of Theoretical Superiority: If S should rationally examine whether W is theoretically superior to the competing presented worldview W*, then S rationally maintains W (Ranalli 2020, 5).
I can see that I was not adequately clear, and as such Ranalli is right to press me to clarify. While Ranalli explores the consequences of both the necessity and the sufficiency interpretation, he’s correct in suggesting that I really intended the necessity claim (Ranalli 2020, fn. 5). The reason I think it’s necessary for an agent to evaluate her worldview in the face of disagreement is that a dispute-independent reason is needed for remaining steadfast in the face of worldview disagreement. Without such a dispute-independent reason, the conciliationist can rightly accuse the individual who remains steadfast of begging the question against her opponent. In what follows, then, I will respond only to Ranalli’s criticisms of the necessity interpretation.
- Cognitive Biases and the Normative Force of Epistemic Peerhood
Ranalli’s first challenge to my article is to point to various cognitive biases as potentially hindering attempts to accurately evaluate worldviews. However, this isn’t a problem unique to my argument. Such biases are a potential problem for other types of inquiry, and for our ability to be epistemically rational more generally. Additionally, it’s fairly easy for me to build into my account, as a requirement for legitimate worldview evaluation, the intellectual virtues that would protect against such biases. For instance, Ranalli believes that one such requirement for the type of worldview evaluation I recommend is open-mindedness. Indeed, something like this is what I had in mind, and I’m happy to take it as a friendly amendment to my original formulation.
In discussing the normative implications of epistemic peerhood, Ranalli writes that:
[W]e sometimes think that our peers believe weird things without lacking confidence in their general capacity for rational evaluation. The recognition of peerhood doesn’t require a general tendency to take everything they believe seriously. Sometimes our peers make obvious mistakes, or make unobvious mistakes that are more easily recognized by one’s interlocuter than oneself. Indeed, this might be especially salient in the case of worldview disagreement, since worldview belief is predominantly a function of identity preservation and emotional regulation, rather than an unbiased evaluation of evidence. In this way, people who oppose our worldviews might be in a better position to find out what errors they contain (if any) than we are because of our tendency to evaluate those positions more closely connected to our identities in biased ways compared with our more ordinary beliefs (Ranalli 2020, 6).
But this is simply false. If I genuinely believe that, say, Chris is my epistemic peer about whether P, I cannot dismiss his belief that not-P as ‘weird’ or ‘obviously mistaken’. This cuts to the very heart of the matter in the epistemology of disagreement. The sceptical pressure generated by conciliationism arises, in part, because I can’t (on pain of epistemic irrationality) dismiss my peer’s view as weird or obviously mistaken without a dispute-independent reason for doing so. To do otherwise would be to beg the question against Chris. Indeed, part of the reason why peer disagreement can be so troubling is that we sometimes discover that people whom we reasonably consider to be our epistemic peers hold views we consider quite strange.
- Objections to the Necessity of Theoretical Superiority Requirement
Ranalli argues that “open-mindedly examining the comparative theoretical superiority of one’s worldview is better understood as an epistemic ideal rather than a requirement of epistemic rationality in response to worldview disagreement” (2020, 7). With respect to the requirement to evaluate one’s worldview in the face of disagreement he writes that:
The problem is that this is just too intellectually demanding. If evaluating the theoretical superiority of your worldview W1 and your peer’s contrary worldview W2 in this sense were a requirement for you to rationally maintain W1, then almost no one except the Vulcans could rationally maintain worldviews. To so much as rationally maintain a worldview, one would have to undertake a certain kind of theoretical project that would be difficult and arduous for even the most astute, intelligent, learned, and virtuous among us (Ranalli 2020, 7-8).
So, the type of worldview evaluation required on my account “might be possible for some people but not many” (Ranalli 2020, 8). In light of this Ranalli also claims that:
Our normative principles should not be so demanding that most people couldn’t meet the principles’ demands. It’s just unrealistic that most people could do this, much less the person trying to make ends meet at the local grocery store. But it’s not unreasonable for them to have worldviews. The Necessity of Theoretical Superiority requirement seems to have the uncomfortable consequence that worldviews are the special reservation of the intellectually virtuous; of the epistemic elite, who have the knowledge and ability to undertake the theoretical project of theoretically examining worldviews so understood; something apparently required by epistemic rationality as per the Necessity of Theoretical Superiority requirement (Ranalli 2020, 8).
Notice that Ranalli is really making two distinct claims: (i) most people can’t meet the normative demands of my account; and (ii) in order for a normative principle to be true, it must be able to be satisfied by most people. I think (i) is true, or at least probably true. In any case, I won’t contest its truth here. The demands of worldview evaluation are indeed relatively high such that most people probably won’t have the time, energy, or ability to conduct such an evaluation. But unless (ii) is also true, this fact on its own tells us nothing about the plausibility of my argument. These ideas from Ranalli can also be standardized along the following lines:
Against Epistemic Elitism
(8) A normative principle is true if and only if it can be satisfied by most people.
(9) The Necessity of Theoretical Superiority cannot be satisfied by most people.
(10) The Necessity of Theoretical Superiority is false.
(10) follows from (8) and (9). And I have already agreed that (9) is true. The key to discovering whether this argument is sound, then, is whether (8) is true. However, it’s important to see that nowhere does Ranalli provide much by way of argument for (8). In more than one instance he argues for (9), but this is distinct from providing evidence for (8).
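For readers who want the inference made fully explicit, the step from (8) and (9) to (10) is an instance of modus tollens and can be checked mechanically. The following is a minimal sketch in the Lean proof assistant, with `NormTrue` and `Satisfiable` as hypothetical propositional atoms of my own choosing (standing for “the Necessity of Theoretical Superiority is true” and “the principle can be satisfied by most people”); note that the argument needs only the left-to-right direction of the biconditional in (8):

```lean
-- Hypothetical propositional atoms:
--   NormTrue    : the Necessity of Theoretical Superiority is true
--   Satisfiable : the principle can be satisfied by most people
variable (NormTrue Satisfiable : Prop)

example
    (h8 : NormTrue → Satisfiable)  -- from (8): if true, then satisfiable by most
    (h9 : ¬Satisfiable)            -- (9): it cannot be satisfied by most people
    : ¬NormTrue :=                 -- (10): the principle is false
  fun hT => h9 (h8 hT)
```

The formalization confirms the argument is valid, which is why, as I go on to argue, everything turns on whether premise (8) is true.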
If our philosophical inquiry leads us to a place where we discover that the demands of epistemic rationality are quite high, then so be it. That certain things are unavailable to certain individuals need not imply elitism, at least not of a pernicious sort. For example, I’m not able to be an astronaut or a professional ice hockey player, but this in itself doesn’t make these pursuits illegitimate for those who can do them. I also worry that lowering the bar for epistemic rationality risks devaluing the pursuit of wisdom, truth, and knowledge. Why pursue these things if epistemic rationality is so easy to come by? It’s also vital to remember that even if a person cannot or does not conduct worldview evaluation in the face of disagreement, it doesn’t necessarily mean they are irrational in maintaining their own worldview. Or at least this is not what I intended to imply in the original article. It may only mean that they are now less rational, not irrational, in holding it. Finally, even if one isn’t convinced by these remarks, the burden of proof to defend (8) still rests with Ranalli.
A related but distinct worry is that the Necessity of Theoretical Superiority requirement implies scepticism (Ranalli 2020, 8). I take the following to be the most charitable way of standardizing Ranalli’s remarks:
(11) If a normative principle regarding X is true, then it won’t imply (widespread) scepticism about X.
(12) The Necessity of Theoretical Superiority requirement about worldviews implies (widespread) scepticism about worldviews.
(13) The Necessity of Theoretical Superiority is false.
(13) follows from (11) and (12). I think that (12) is true inasmuch as it just means that if the Necessity of Theoretical Superiority requirement is true, then many people will not be able to rationally hold their worldviews. Or, more accurately, many people will be less rational in maintaining their worldviews. If this is what Ranalli means by scepticism, then I accept that (12) is true. However, I reject (11) for at least two reasons. First, contra what is perhaps the majority view amongst professional philosophers, I do not take a charge of scepticism to be an automatic defeater for a philosophical position. That a philosophical position entails scepticism does not show that the position in question is false. I won’t say more about this here, since doing so would take us too far afield. My point is just that I’m one of the (perhaps) few people who refuse to see scepticism as a bogeyman. Second, Ranalli appears to connect the charge of scepticism with (8): the fact that the Necessity of Theoretical Superiority requirement leads to widespread scepticism means that most people won’t be rational (or will be less rational) in holding a worldview. But we’ve already seen that Ranalli doesn’t provide much by way of reason to accept (8). There may well be something to these consequences that hurts my argument, but as it stands Ranalli has not done nearly enough to show why they are problematic.
- Future Epistemic Benefits
The final set of challenges that Ranalli raises against my argument concern whether examining the theoretical virtues of a worldview is sufficient for epistemic rationality, along with questions about future epistemic benefits. He writes that:
Perhaps some epistemic benefits can make the retention of one’s attitude rational, but it’s not clear that theoretical virtue understanding is enough. Isn’t that just too weak? This worry is consistent with the idea that epistemic benefits are sometimes sufficient to rationalize belief, but that the kind of epistemic benefits that theoretical virtue understanding consists in is not good enough. Put generally: the kind of epistemic benefits on offer matter (Ranalli 2020, 12).
I’m not going to spend much time addressing this worry, for the simple reason that it wouldn’t be too difficult to add, or emphasize, the requirement that worldview evaluation be truth-directed. Furthermore, it’s not difficult to stipulate that this orientation towards truth should be directed not at internal questions related to the worldview but at whether the worldview is in fact true (i.e., we could simply remove the questions about truth regarding what Ranalli labels ‘higher-order beliefs’ about the worldview). I think this would help assuage Ranalli’s worries about whether theoretical understanding is sufficiently epistemic.
Ranalli concludes with what I take to be the most challenging question to my argument. He says:
The final question asks what exactly is the epistemological relationship between one’s present belief in W in the face of deep disagreement and the future epistemic benefits which might accrue from such an attitude? Can future benefits fully justify a belief or is it only partial justification? Here’s an example to help better structure the question. Suppose W is false but S retains W because it has many more theoretical virtues over the competitor W* (perhaps it is maximally theoretically virtuous). In the distant future of S’s life, she learns that W* is actually correct, and suppose she wouldn’t have learned that W* is true, which implies that ~W, without sticking to her initial belief that W all these years. For perhaps she wouldn’t have attended the International Conference on W, where she learned that W* is true. In a twist, then, what made it rational for her to maintain W in response to her present disagreement over her worldview is the fact that she gets the good of true belief that W* in the future. It is the epistemic status of a different and future doxastic attitude that rationalizes her present false doxastic attitude. This picture just seems fishy and I invite Lougheed to explore the details (Ranalli 2020, 13).
When we zoom in on a simpler case, like my belief that p, that there’s gold buried under my apartment (a false belief), it’s very hard to see how I might be rational in maintaining that belief in the face of disagreement with my friend if only because I would have believed truly ~p later if I presently believe that p. The benefit of later believing truly that ~p simply looks on its face to be irrelevant to whether I’m rational in presently believing that p. So, my hope is that Lougheed can fill in the details for us here so that we can better understand the relationship between epistemic benefits and rationality (Ranalli 2020, 13).
I don’t think that I can respond in great detail to this worry. However, I aim to provide more information about what I had in mind, in the hope that my view becomes clearer. One point I want to make is that I wasn’t claiming that epistemic benefits are always in the offing. Sometimes remaining steadfast in the face of disagreement may even lead to epistemic harm. Relatedly, a person cannot arbitrarily remain steadfast in the face of disagreement by pointing to the mere logical possibility of some vague future epistemic benefits. The individual in question has to have good reason for thinking that remaining steadfast will yield said epistemic benefits. Part of the reason for shifting in this article to worldviews and away from disagreement about specific propositions (the focus of my book) is that I think the epistemic benefits are easier to secure when it comes to worldviews. Why? Because it’s easier to establish criteria by which to evaluate worldviews than individual isolated propositions. As stated above, I think the criteria I use can be amended to focus more on (non-higher-order) truth. Furthermore, I take what I said about the criteria by which to evaluate worldviews to be the first word, not the last.
To use Ranalli’s term, something might still feel ‘fishy’ about this state of affairs. I suspect that this feeling is best diagnosed as a result of the distinction between synchronic and diachronic epistemic rationality. In The Epistemic Benefits of Disagreement I argue that epistemologists are almost exclusively concerned with synchronic epistemic rationality, perhaps at the cost of altogether ignoring diachronic epistemic reasons (Lougheed 2020b, 100-104; see also Lougheed and Simpson 2017). I further argue that an all-things-considered epistemic rationality would take into account both synchronic and diachronic epistemic reasons (Lougheed 2020b, 104-107). However, I’m now more sympathetic to the view that there may well be no all-things-considered epistemic perspective (Matheson 2015). If that’s the case, then my view may well feel ‘fishy’ from the synchronic perspective, the one that most epistemologists take. My argument does not make it synchronically epistemically rational to maintain one’s worldview in the face of disagreement. Rather, it can make certain individuals diachronically epistemically rational in remaining steadfast in certain cases of worldview disagreement. And, if there is such a perspective, it will also allow certain individuals to be all-things-considered epistemically rational.
Ranalli has pressed me in the right places. While I am happy to add to my view the need to be intellectually virtuous when evaluating worldviews, Ranalli does underestimate the normative force of epistemic peerhood. I should indeed be interpreted as making a claim about the necessity of evaluating one’s worldview in the face of peer disagreement. As it stands, Ranalli’s objections to the Necessity of Theoretical Superiority fail because he never really tells us why epistemic elitism or scepticism is bad. Though I think my account of worldview evaluation can be amended to account for Ranalli’s worries about whether it is sufficiently epistemic, his questions about future epistemic benefits are important. They point to the often-overlooked distinction between synchronic and diachronic epistemic reasons. More remains to be said about these two conceptions of rationality, whether they can ever come together, and how they relate to my argument.
Hughes, Nick. (2019). “Dilemmic Epistemology.” Synthese 196: 4059-4090.
Lougheed, Kirk. (2019). “Catherine Elgin on Peerhood and the Epistemic Benefits of Disagreement.” Synthese. Online First.
Lougheed, Kirk. (2020a). “The Epistemic Benefits of Worldview Disagreement.” Social Epistemology. Online First.
Lougheed, Kirk. (2020b). The Epistemic Benefits of Disagreement. Switzerland: Springer.
Lougheed, Kirk and Robert Mark Simpson. (2017). “Indirect Epistemic Reasons and Religious Belief.” Religious Studies 53 (2): 151-169.
Matheson, Jonathan. (2015). “Disagreement and the Ethics of Belief.” In The Future of Social Epistemology: A Collective Vision, ed. James H. Collier, 139-148. USA: Rowman and Littlefield.
Ranalli, Chris. (2020). “Rationally Maintaining a Worldview.” Social Epistemology Review and Reply Collective 9 (11): 1-14.
 It’s also noteworthy that there’s some reason to hold that the existence of peer disagreement might actually help to combat (some) biases. See Lougheed 2020b, 71-73 for discussion.
 Ranalli also wonders why the average layperson needs to have a theoretically robust defense of their worldview if they can rely on relevant expert testimony (2020, 9). Nothing in my view entails scepticism about the reliability of testimony. The reason why this response doesn’t help is that it simply pushes the problem of worldview disagreement from laypersons back to experts. What should we believe when experts disagree with each other? No progress can be made by appealing to experts in this way.
 In the book I criticize Matheson’s position, but as I say, I am now more open to the possibility that his suggestion might be correct. See Lougheed 2020b, 102-104.
 Another avenue worth exploring is based on the claim that there can be genuine dilemmas between different epistemic requirements. If there is no all-things-considered epistemic perspective, then maybe there is a genuine dilemma between what it is rational to believe from the synchronic perspective versus the diachronic perspective in the face of disagreement. For more on epistemic dilemmas see Hughes 2019. For a brief discussion of epistemic dilemmas in this context see Lougheed 2019, 17-18.
 This article was made possible, in part, by funding from the Social Sciences and Humanities Research Council of Canada.