
Author Information: Robin McKenna, University of Liverpool, r.j.mckenna@liverpool.ac.uk.

McKenna, Robin. “McBride on Knowledge and Justification.” Social Epistemology Review and Reply Collective 7, no. 9 (2018): 53-59.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-417

Image by Ronan Shahnav via Flickr / Creative Commons

 

I would like to thank the editors of the Social Epistemology Review and Reply Collective for giving me the opportunity to review Mark McBride’s rich and rewarding book. To begin, I will give a—fairly high-level—overview of its contents. I will then raise some concerns and make some (mildly) critical comments.

Overview

The book is split into two parts. Part 1 concerns the issue of basic knowledge (and justification), whereas the second concerns (putative necessary) conditions on knowledge (specifically, conclusive reasons, sensitivity and safety conditions). We can start with Part 1. As McBride defines it, basic knowledge is “knowledge (or justification) which is immediate, in the sense that one’s justification for the known proposition doesn’t rest on any justification for believing other propositions” (p. 1).

Two central issues in Part 1 are (i) what, exactly, is wrong with Moore’s “proof” of the external world (Chapter 1), and (ii) what, exactly, is wrong with inferences that yield “easy knowledge” (Chapters 2-3). Take these arguments, which for ease of reference I’ll call MOORE and EASY-K respectively:

MOORE:

(Visual appearance as of having hands).
1-M. I have hands.
2-M. If I have hands, an external world exists.
3-M. An external world exists.

EASY-K:

(Visual appearance as of a red table).
1-EK. The table is red.
2-EK. If the table is red, then it is not white with red lights shining on it.
3-EK. The table is not white with red lights shining on it.

It seems like a visual appearance as of having hands can give one knowledge of 1-M, and 2-M seems to be knowable a priori. But it seems wrong to hold that one can thereby come to know 3-M. (And mutatis mutandis for EASY-K and 3-EK).

I want to single out three of McBride’s claims about MOORE and EASY-K. First, it is commonly taken that “dogmatist” responses to MOORE (such as Pryor 2000) are at a disadvantage with respect to “conservative” responses (such as Wright 2004). The dogmatist holds that having a visual appearance as of hands provides immediate warrant for 1-M, whereas the conservative holds that one can have warrant for 1-M only if one has a prior entitlement to accept 3-M. Thus the dogmatist seems forced to accept that warrant can “transmit” from the premises of MOORE to the conclusion, whereas the conservative can deny that warrant transmission occurs.

In Chapter 1 McBride turns this on its head. First, he argues that, while a conservative such as Crispin Wright can maintain that the premises of MOORE don’t transmit “non-evidential” warrant to the conclusion, he must allow that “evidential” warrant does transmit from the premises to the conclusion. Second, he argues that Wright cannot avail himself of what McBride (following Davies 2004) takes to be a promising diagnosis of the real problem with MOORE. According to Martin Davies, MOORE is inadequate because it is of no use in the epistemic project of settling the question whether the external world exists. But, for Wright, there can be no such project, because the proposition that the external world exists is the “cornerstone” on which all epistemic projects are built.

Second, in Chapter 3 McBride seeks to show that the dogmatist can supplement Davies’ account of the problem with Moore’s proof in order to diagnose the problem with EASY-K. According to McBride, EASY-K is problematic not just in that it is of no use in settling the question whether the table is not white with red lights shining on it, but also in that there are all sorts of ways in which one could settle this question (e.g. by investigating the lighting sources surrounding the table thoroughly).

Thus, EASY-K is problematic in a way that MOORE isn’t: while one could avail oneself of a better argument for the conclusion of EASY-K, it is harder to see what sort of argument could improve on MOORE.

Third, while Part 1 is generally sympathetic to the dogmatist position, Chapter 5 argues that the dogmatist faces a more serious problem. The reader interested in the details of the argument should consult Chapter 5. Here, I just try to explain the gist. Say you endorse a closure principle on knowledge like this:

CLOSURE: Necessarily, if S knows p, competently deduces q from p, and thereby comes to believe q, while retaining knowledge of p throughout, then S knows q (p. 159).

It follows that, if one comes to know 1-EK (the table is red) by having an appearance as of a red table, then competently deduces 3-EK (the table is not white with red lights shining on it) from 1-EK while retaining knowledge of 1-EK, then one knows 3-EK. But—counter-intuitively—having an appearance as of a red table can lower the credence one ought to have in 3-EK (see pp. 119-20 for the reason why).

It therefore seems inarguable that, if you are in a position to know 3-EK after having the appearance, you must have been in a position to know 3-EK prior to the appearance. So it seems like the conservative position must be right after all. In order for your appearance as of a red table to furnish knowledge that there is a red table, you must have been in a position to know that the table was not white with red lights shining on it prior to having the appearance as of a red table.

The second part of McBride’s book concerns putative (necessary) conditions on knowledge, in particular conclusive reasons (Chapter 6), sensitivity (Chapter 7) and safety (Chapter 8). McBride dedicates a chapter to each condition; the book finishes with a (brief) application of safety to legal knowledge (Chapter 9). While most epistemologists tend to argue that either sensitivity or safety (but not both) is a (necessary) condition on knowledge, McBride provides a (qualified) defense of both.

In the case of sensitivity, this is in part because, if sensitivity were a condition on knowledge, then—as Nozick (1981) famously held—CLOSURE would be false, and so the argument against dogmatism (about knowledge) in Chapter 5 would be disarmed. Because of the centrality of sensitivity to the argument in Part 1, and because the chapters on conclusive reasons and sensitivity revolve around similar issues, I focus on sensitivity in what follows.

Here is an initial statement of sensitivity:

SENSITIVITY: S knows p only if S sensitively believes p, where S sensitively believes p just in case, were p false, S would not believe p (p. 160).

Chapter 7 (on sensitivity) is largely concerned with rebutting an objection from John Hawthorne (2004) to the effect that the sensitivity theorist must also reject these two principles:

EQUIVALENCE: If you know a priori that p and q are equivalent and you know p, then you are in a position to know q.

DISTRIBUTION: If one knows p and q, then one is in a position to know p and to know q.

Suppose I have an appearance as of a zebra. So I know:

(1) That is a zebra.

By EQUIVALENCE I can know:

(2) That is a zebra and that is not a cleverly disguised mule.

So by DISTRIBUTION I can know:

(3) That is not a cleverly disguised mule.

But, by SENSITIVITY, while I can know (1), I can’t know (3) because, if I were looking at a cleverly disguised mule, I would still believe I was looking at a zebra. Hawthorne concludes that the sensitivity theorist must deny a range of plausible principles, not just CLOSURE.

McBride’s basic response is that, while SENSITIVITY is problematic as stated, it can be modified in such a way that the sensitivity-theorist can deny EQUIVALENCE but keep DISTRIBUTION. More importantly, this rejection of EQUIVALENCE can be motivated on the grounds that initially motivate SENSITIVITY. Put roughly, the idea is that simple conjunctions like (4) already cause problems for SENSITIVITY:

(4) I have a headache and I have all my limbs.

Imagine you form the belief in (4) purely from your evidence of having a headache (and don’t worry about how this might be possible). While you clearly don’t know (4), your belief does satisfy SENSITIVITY, because, if (4) were false, you wouldn’t still believe it (if you didn’t have a headache, you wouldn’t believe you did, and so you wouldn’t believe (4)).

The underlying problem is that SENSITIVITY tells you to go to the nearest possible world in which the relevant belief is false and asks what you believe there, but a conjunctive belief is false so long as one of the conjuncts is false, and it might be that one of the conjuncts is false in a nearby possible world, whereas the other is false in a more distant possible world. So the sensitivity theorist needs to restrict SENSITIVITY to atomic propositions and add a new condition for conjunctive propositions:

SENSITIVITY*: If p is a conjunctive proposition, S knows p only if S believes each of the conjuncts of p sensitively (p. 167).
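In standard counterfactual notation (my rendering, not McBride’s), with Kp for “S knows p”, Bp for “S believes p”, and the box-arrow for the counterfactual conditional, the two conditions come to:

\[ Kp \rightarrow (\neg p \mathbin{\Box\!\!\rightarrow} \neg Bp) \qquad \text{for atomic } p \]

\[ K(p_1 \wedge \dots \wedge p_n) \rightarrow \bigwedge_{i=1}^{n} \bigl(\neg p_i \mathbin{\Box\!\!\rightarrow} \neg Bp_i\bigr) \qquad \text{(SENSITIVITY*)} \]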

If we make this modification, the sensitivity theorist now has an independent reason to reject EQUIVALENCE, but is free to accept DISTRIBUTION.

Critical Discussion

While the above only touches on the wealth of topics discussed in McBride’s book, I will now move on to the critical discussion. I will start by registering two general issues about the book. I will then develop two criticisms at a little more length, one for each part of the book.

First, while the book makes compelling reading for those already versed in the literatures on transmission failure, easy knowledge and modal conditions on knowledge, the central problematics are rarely motivated at any length. Moreover, while McBride does draw numerous (substantive) connections between the chapters, the book lacks a unifying thesis. All this is to say: this is perhaps more a book for the expert than for the novice. But the expert will find a wealth of interesting material to chew over.

Second, readers of the Collective might find the individualism of McBride’s approach striking. McBride is almost exclusively concerned with the epistemic statuses of individuals’ beliefs, where those beliefs are formed through simple processes like perception and logical inference. The one part of the book that does gesture in a more social direction (McBride’s discussion of epistemic projects, and the dialectical contexts in which they are carried out) is suggestive, but isn’t developed in much detail.

Turning now to more substantive criticisms, in Part 1 McBride leans heavily on Davies’ solution to the problem with MOORE. I want to make two comments here. First, it is natural to interpret Davies’ solution as an inchoate form of contextualism (DeRose 1995; Lewis 1996): whether MOORE (and EASY-K?) transmits warrant to its conclusion depends on the context in which one runs the inference, in particular, the project in which one is engaged.

This raises a host of questions. For example: does McBride hold that, if we keep the context (project) fixed, no transmission failure occurs? That is: if we’re working with the (easier) project of deciding what to believe, does an instance of MOORE transmit warrant from premises to conclusion? If so, then if we’re working with the (harder) project of settling the question, does an instance of MOORE fail to transmit warrant? (This would fit with the more general contextualist line in response to the skeptical problem, so this is only a request for clarification).

Second, and more importantly, we need to distinguish between the project of fully settling the question whether p and the project of partially settling the question whether p. Let’s grant McBride (and Davies) that someone who runs through an instance of MOORE has not fully settled the question whether there is an external world. But why think that—at least by the dogmatist’s lights—they haven’t partially settled the question? If dogmatism is true, then having the appearance as of a hand provides immediate warrant for believing that one has a hand, and so, via MOORE, for believing that there is an external world.

McBride (like many others) finds this conclusion unpalatable, and he invokes the distinction between the project of deciding what to believe and the project of settling the question in order to avoid it. But this distinction is overly simplistic. We can settle questions for different purposes, and with different degrees of stability (cf. “the matter is settled for all practical purposes”). The dogmatist seems forced to allow that MOORE is perfectly good for settling the question of whether there is an external world for a range of projects, not just one.

(I have a parallel worry about the solution to the problem of easy knowledge. Let’s grant McBride that one problem with EASY-K is that there are far better ways of trying to establish that the table is not white but bathed in red light. But why think that—at least by the dogmatist’s lights—it isn’t a way of trying to establish this? To point out that there are better ways of establishing a conclusion is not yet to show that this particular way is no way at all of establishing the conclusion).

Finally, in his response to Hawthorne’s objection to the sensitivity theorist, McBride is at pains to show that his modification of SENSITIVITY isn’t ad hoc. To my mind, he does an excellent job of showing that the sensitivity theorist should reject EQUIVALENCE for reasons entirely independent of Hawthorne’s objection.

This suggests (at least to me) that the problem is not one of ad hocness, but rather that sensitivity theorists are forced to endorse a wide range of what Keith DeRose (1995) calls “abominable conjunctions” (cf. “I know that I have hands, but I don’t know that I’m not a handless brain in a vat”). DeRose’s own response to this problem is to embed something like SENSITIVITY in a contextualist theory of knowledge attributions. DeRose proposes the following “rule”:

Rule of Sensitivity: When it’s asserted that S knows (or doesn’t know) p, then, if necessary, enlarge the sphere of epistemically relevant worlds so that it at least includes the closest worlds in which p is false (cf. DeRose 1995, 37).

His idea is that, when the question of whether S knows p becomes a topic of conversation, we expand the range of worlds in which S’s belief must be sensitive. Imagine I assert “I know that I have hands”. In order for this assertion to be true, it must be the case that, if I didn’t have hands, I wouldn’t believe that I did.

But now imagine I assert “I know that I’m not a handless brain in a vat”. In order for this new assertion to be true, it must be the case that, if I were a handless brain in a vat, I wouldn’t believe that I wasn’t. Plausibly, this will not be the case, so I can’t truly assert “I know that I’m not a handless brain in a vat”. But no abominable conjunction results, because I can no longer truly assert “I know that I have hands” either.

My suggestion is that, if McBride were to adopt DeRose’s contextualist machinery, he would not only have a way of responding to the problem of abominable conjunctions, but also an interesting modification to DeRose’s “rule of sensitivity”.

For note that DeRose’s rule seems subject to the same problem McBride sees with SENSITIVITY: when I assert “I have a headache and I have all my limbs” we only need to expand the range of worlds to include worlds in which I don’t have a headache, and so my assertion will remain true in the updated context created by my assertion. Further, adopting this suggestion would furnish another link between Part 1 and Part 2: solving the problem of basic knowledge and formulating a satisfactory sensitivity condition both require adopting a contextualist theory of knowledge attributions.

Contact details: r.j.mckenna@liverpool.ac.uk

References

Davies, Martin. 2004. ‘Epistemic Entitlement, Warrant Transmission and Easy Knowledge’. Aristotelian Society Supplementary Volume 78 (1): 213–245.

DeRose, Keith. 1995. ‘Solving the Skeptical Problem’. Philosophical Review 104 (1): 1–52.

Hawthorne, John. 2004. Knowledge and Lotteries. Oxford University Press.

Lewis, David. 1996. ‘Elusive Knowledge’. Australasian Journal of Philosophy 74 (4): 549–67.

Nozick, Robert. 1981. Philosophical Explanations. Harvard University Press.

Pryor, James. 2000. ‘The Skeptic and the Dogmatist’. Noûs 34 (4): 517–549.

Wright, Crispin. 2004. ‘Warrant for Nothing (and Foundations for Free)?’ Aristotelian Society Supplementary Volume 78 (1): 167–212.

Author Information: Moti Mizrahi, Florida Institute of Technology, mmizrahi@fit.edu

Mizrahi, Moti. “Weak Scientism Defended Once More.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 41-50.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yx


One of Galileo’s original compasses, on display at the Museo Galileo, a feature of the Instituto e Museo di Storia della Scienza in Florence, Italy.
Image by Anders Sandberg via Flickr / Creative Commons

 

Bernard Wills (2018) joins Christopher Brown (2017, 2018) in criticizing my defense of Weak Scientism (Mizrahi 2017a, 2017b, 2018a). Unfortunately, it seems that Wills did not read my latest defense of Weak Scientism carefully, nor does he cite any of the other papers in my exchange with Brown. For he attributes to me the view that “other disciplines in the humanities [in addition to philosophy] do not produce knowledge” (Wills 2018, 18).

Of course, this is not my view and I affirm no such thing, contrary to what Wills seems to think. I find it hard to explain how Wills could have made this mistake, given that he goes on to quote me as follows: “Scientific knowledge can be said to be qualitatively better than non-scientific knowledge insofar as such knowledge is explanatorily, instrumentally, and predictively more successful than non-scientific knowledge” (Mizrahi 2018a, 7; quoted in Wills 2018, 18).

Clearly, the claim ‘Scientific knowledge is better than non-scientific knowledge’ entails that there is non-scientific knowledge. If the view I defend entails that there is non-scientific knowledge, then it cannot also be my view that “science produces knowledge and all the other things we tend to call knowledge are in fact not knowledge at all but something else” (Wills 2018, 18).

Even if he somehow missed this simple logical point, reading the other papers in my exchange with Brown should have made it clear to Wills that I do not deny the production of knowledge by non-scientific disciplines. In fact, I explicitly state that “science produces scientific knowledge, mathematics produces mathematical knowledge, philosophy produces philosophical knowledge, and so on” (Mizrahi 2017a, 353). Even in my latest reply to Brown, which is the only paper from my entire exchange with Brown that Wills cites, I explicitly state that, if Weak Scientism is true, then “philosophical knowledge would be inferior to scientific knowledge both quantitatively (in terms of research output and research impact) and qualitatively (in terms of explanatory, instrumental, and predictive success)” (Mizrahi 2018a, 8).

If philosophical knowledge is quantitatively and qualitatively inferior to scientific knowledge, then it follows that there is philosophical knowledge. For this reason, only a rather careless reader could attribute to me the view that “other disciplines in the humanities [in addition to philosophy] do not produce knowledge” (Wills 2018, 18).

There Must Be Some Misunderstanding

Right from the start, then, Wills gets Weak Scientism wrong, even though he later writes that, according to Weak Scientism, “there may be knowledge of some sort outside of the sciences” (Wills 2018, 18). He says that he will ignore the quantitative claim of Weak Scientism and focus “on the qualitative question and particularly on the claim that science produces knowledge and all the other things we tend to call knowledge are in fact not knowledge at all but something else” (Wills 2018, 18). Wills can focus on whatever he wants, of course, but that is not Weak Scientism.

Weak Scientism is not the view that only science produces real knowledge; that is Strong Scientism (Mizrahi 2017a, 353). Rather, Weak Scientism is the view that, “Of all the knowledge we have [i.e., there is knowledge other than scientific knowledge], scientific knowledge is the best knowledge” (Mizrahi 2017a, 354). In other words, scientific knowledge “is simply the best; better than all the rest” (Mizrahi 2017b, 20). Wills’ criticism, then, misses the mark completely. That is, it cannot be a criticism against Weak Scientism, since Weak Scientism is not the view that “science produces knowledge and all the other things we tend to call knowledge are in fact not knowledge at all but something else” (Wills 2018, 18).

Although he deems the quantitative superiority of scientific knowledge over non-scientific knowledge “a tangential point,” and says that he will not spend time on it, Wills (2018, 18) remarks that “A German professor once told [him] that in the first half of the 20th Century there were 40,000 monographs on Franz Kafka alone!” Presumably, Wills’ point is that research output in literature exceeds that of scientific disciplines. Instead of relying on gut feelings and hearsay, Wills should have done the required research in order to determine whether scholarly output in literature really does exceed the research output of scientific disciplines.

If we look at the Scopus database, using the data and visualization tools provided by Scimago Journal & Country Rank, we can see that research output in a natural science like physics and a social science like psychology far exceeds research output in humanistic disciplines like literature and philosophy. On average, psychology has produced 15,000 more publications per year than either literature or philosophy between the years 1999 and 2017. Likewise, on average, physics has produced 54,000 more publications per year than either literature or philosophy between the years 1999 and 2017 (Figure 1). 

Figure 1. Research output in Literature, Philosophy, Physics, and Psychology from 1999 to 2017 (Source: Scimago Journal & Country Rank)

Contrary to what Wills seems to think or what his unnamed German professor may have told him, then, it is not the case that literary scholars produce more work on Shakespeare or Kafka alone than physicists or psychologists produce. The data from the Scopus database show that, on average, it takes literature and philosophy almost two decades to produce what psychology produces in two years or what physics produces in a single year (Mizrahi 2017a, 357-359).

In fact, using JSTOR Data for Research, we can check Wills’ number, as reported to him by an unnamed German professor, to find out that there are 13,666 publications (i.e., journal articles, books, reports, and pamphlets) on Franz Kafka from 1859 to 2018 in the JSTOR database. Clearly, that is not even close to “40,000 monographs on Franz Kafka alone” in the first half of the 20th Century (Wills 2018, 18). By comparison, as of May 22, 2018, the JSTOR database contains more publications on the Standard Model in physics and the theory of conditioning in behavioral psychology than on Franz Kafka or William Shakespeare (Table 1).

Table 1. Search results for ‘Standard Model’, ‘Conditioning’, ‘William Shakespeare’, and ‘Franz Kafka’ in the JSTOR database as a percentage of the total number of publications, n = 12,633,298 (Source: JSTOR Data for Research)

Search Term            Number of Publications    Percentage of JSTOR corpus
Standard Model         971,968                   7.69%
Conditioning           121,219                   0.95%
William Shakespeare    93,700                    0.74%
Franz Kafka            13,667                    0.1%
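As an aside for readers who want to reproduce the arithmetic behind Table 1: each percentage is simply the term’s publication count divided by the size of the JSTOR corpus. The short Python sketch below (mine, not Mizrahi’s; the counts are those reported in the table, and the published figures may round the last decimal slightly differently) recomputes the shares.

# Illustrative sketch: recompute each term's share of the JSTOR corpus
# from the raw counts reported in Table 1 (total corpus n = 12,633,298).
TOTAL = 12_633_298
counts = {
    "Standard Model": 971_968,
    "Conditioning": 121_219,
    "William Shakespeare": 93_700,
    "Franz Kafka": 13_667,
}
for term, n in counts.items():
    share = 100 * n / TOTAL
    print(f"{term}: {n:,} publications ({share:.2f}% of corpus)")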

Similar results can be obtained from Google Books Ngram Viewer when we compare published work on Shakespeare, which Wills thinks exceeds all published work in other disciplines, for he says that “Shakespeare scholars have all of us beat” (Wills 2018, 18), with published work on a contemporary of Shakespeare (1564-1616) from another field of study, namely, Galileo (1564-1642). As we can see from Figure 2, from 1700 to 2000, ‘Galileo’ consistently appears in more books than ‘William Shakespeare’ does.

Figure 2. Google Books results for ‘William Shakespeare’ and ‘Galileo’ from 1700 to 2000 (Source: Google Books Ngram Viewer)

Racking Up the Fallacies

Wills continues to argue fallaciously when he resorts to what appears to be an ad hominem attack against me. He asks (rhetorically?), “Is Mr. Mizrahi producing an argument or a mere rationalization of his privilege?” (Wills 2018, 19) It is not clear to me what sort of “privilege” Wills wants to claim that I have, or why he accuses me of colonialism and sexism, since he provides no arguments for these outrageous charges. Moreover, I do not see how this is at all relevant to Weak Scientism. Even if I am somehow “privileged” (whatever Wills means by that), Weak Scientism is either true or false regardless.

After all, I take it that Wills would not doubt his physician’s diagnoses just because he or she is “privileged” for working at a hospital. Whether his physician is “privileged” for working at a hospital has nothing to do with the accuracy of his or her diagnoses. For these reasons, Wills’ ad hominem is fallacious (as opposed to a legitimate ad hominem as a rebuttal to an argument from authority, see Mizrahi 2010). I think that SERRC readers will be better served if we focus on the ideas under discussion, specifically, Weak Scientism, not the people who discuss them.

Speaking of privilege and sexism, however, it might be worth noting that, throughout his paper, Wills refers to me as ‘Mr. Mizrahi’ (rather than ‘Dr. Mizrahi’ or simply ‘Mizrahi’, as is the norm in academic publications), and that he has misspelled my name on more than one occasion (Wills 2018, 18, 22, 24). Studies suggest that addressing female doctors with ‘Ms.’ or ‘Mrs.’ rather than ‘Dr.’ might reveal gender bias (see, e.g., Files et al. 2017). Perhaps forms of address reveal not only gender bias but also ethnic or racial bias when people with non-white or “foreign” names are addressed as Mr. (or Ms.) rather than Dr. (Erlenbusch 2018).

Aside from unsubstantiated claims about the amount of research produced by literary scholars, fallacious appeals to the alleged authority of unnamed German professors, and fallacious ad hominem attacks, does Wills offer any good arguments against Weak Scientism? He spends most of his paper (pages 19-22) trying to show that there is knowledge other than scientific knowledge, such as knowledge produced in the fields of “Law and Music Theory” (Wills 2018, 20). This, however, does nothing at all to undermine Weak Scientism. For, as mentioned above, Weak Scientism is the view that scientific knowledge is superior to non-scientific knowledge, which means that there is non-scientific knowledge; it’s just not as good as scientific knowledge (Mizrahi 2017a, 356).

The Core of His Concept

Wills finally gets to Weak Scientism on the penultimate page of his paper. His main objection against Weak Scientism seems to be that it is not clear to him how scientific knowledge is supposed to be better than non-scientific knowledge. For instance, he asks, “Better in what context? By what standard of value?” (Wills 2018, 23) Earlier he also says that he is not sure in what “certain relevant respect” scientific knowledge is supposed to be superior to non-scientific knowledge (Wills 2018, 18).

Unfortunately, this shows that Wills either has not read the other papers in my exchange with Brown or at least has not read them carefully. For, starting with my first defense of Weak Scientism (2017a), I explain in great detail the ways in which scientific knowledge is better than non-scientific knowledge. Briefly, scientific knowledge is quantitatively better than non-scientific knowledge in terms of research output (i.e., more publications) and research impact (i.e., more citations). Scientific knowledge is qualitatively better than non-scientific knowledge in terms of explanatory, instrumental, and predictive success (Mizrahi 2017a, 364; Mizrahi 2017b, 11).

Wills tries to challenge the claim that scientific knowledge is quantitatively better than non-scientific knowledge by exclaiming, “Does science produce more knowledge that [sic] anything else? Hardly” (Wills 2018, 23). He appeals to Augustine’s idea that one “can produce a potential infinity of knowledge simply by reflecting recursively on the fact of [one’s] own existence” (Wills 2018, 23). In response, I would like to borrow a phrase from Brown (2018, 30): “good luck getting that published!”

Seriously, though, the point is that Weak Scientism is a thesis about academic knowledge or research. In terms of research output, scientific disciplines outperform non-scientific disciplines (see Figure 1 and Table 1 above; Mizrahi 2017a, 357-359; Mizrahi 2018a, 20-21). Besides, just as “recursive processes can extend our knowledge indefinitely in the field of mathematics,” they can also extend our knowledge in other fields, including scientific fields. That is, one “can produce a potential infinity of knowledge simply by reflecting recursively on the” (Wills 2018, 23) Standard Model in physics or any other scientific theory and/or finding. For this reason, Wills’ objection does nothing at all to undermine Weak Scientism.

Wills (2018, 23) tries to problematize the notions of explanatory, instrumental, and predictive success in an attempt to undermine the claim that scientific knowledge is qualitatively better than non-scientific knowledge in terms of explanatory, instrumental, and predictive success. But it seems that he misunderstands these notions as they apply to the scientism debate.

As far as instrumental success is concerned, Wills (2018, 23) asks, “Does science have (taken in bulk) more instrumental success than other knowledge forms? How would you even count given that craft knowledge has roughly 3 million-year head start?” Even if it is true that “craft knowledge has roughly 3 million-year head start,” it is irrelevant to whether Weak Scientism is true or false. This is because Weak Scientism is a thesis about academic knowledge or research produced by academic fields of study (Mizrahi 2017a, 356; Mizrahi 2017b, 11; Mizrahi 2018a, 12).

Solving the Problem and Explaining the Issue

As far as explanatory success is concerned, Wills (2018, 23) writes, “Is science more successful at explanation? Hardly, if science could solve problems in literature or history then these fields would not even exist.” There are a couple of problems with this objection. First, explaining and problem solving are not the same thing (Mizrahi and Buckwalter 2014). Second, what makes scientific explanations good explanations are the good-making properties that are supposed to make all explanations (both scientific and non-scientific) good explanations, namely, unification, coherence, simplicity, and testability (Mizrahi 2017a, 360-362; Mizrahi 2017b, 19-20; Mizrahi 2018a, 17).

I have already made this point several times in my replies to Brown, which Wills does not cite, namely, that Inference to the Best Explanation (IBE) is used in both scientific and non-scientific contexts (Mizrahi 2017a, 362). That is, “IBE is everywhere” (Mizrahi 2017b, 20). It’s just that scientific IBEs are better than non-scientific IBEs because they exhibit more of (and to a greater extent) the aforementioned properties that make any explanation a good explanation (Mizrahi 2018b).

As far as predictive success is concerned, Wills (2018, 23) asks, “Does science make more true predictions? Again how would you even count given that for millions of years, human beings survived by making hundreds of true predictions daily?” There are a few problems with this objection as well. First, even if it is true that “for millions of years, human beings survived by making hundreds of true predictions daily,” it is irrelevant to whether Weak Scientism is true or false, since Weak Scientism is a thesis about academic knowledge or research produced by academic fields of study (Mizrahi 2017a, 356; Mizrahi 2017b, 11; Mizrahi 2018a, 12).

Second, contrary to what Wills (2018, 24) seems to think, testing predictions in science is not simply a matter of making assertions and then checking to see if they are true. For one thing, a prediction is not simply an assertion, but rather a consequence that follows from a hypothesis plus auxiliary hypotheses (Mizrahi 2015). For another, a prediction needs to be novel such that we would not expect it to be the case except from the vantage point of the theory that we are testing (Mizrahi 2012).

As I have advised Brown (Mizrahi 2018a, 17), I would also advise Wills to consult logic and reasoning textbooks, not because they provide support for the claim that “science is instrumentally successful, explanatory and makes true predictions,” as Wills (2018, 23) erroneously thinks, but because they discuss hypothesis testing in science. For Wills’ (2018, 24) remark about Joyce scholars suggests a failure to understand how hypotheses are tested in science.

Third, like Brown (2017, 49), Wills (2018, 23) admits that, just like science, philosophy is in the explanation business. For Wills (2018, 23) says that, “certainty, instrumental success, utilitarian value, predictive power and explanation all exist elsewhere in ways that are often not directly commensurable with the way they exist in science” (emphasis added). But if distinct fields of study have the same aim (i.e., to explain), then their products (i.e., explanations) can be evaluated with respect to similar criteria, such as unification, coherence, simplicity, and testability (Mizrahi 2017a, 360-362; Mizrahi 2017b, 19-20; Mizrahi 2018a, 17).

In other words, there is no incommensurability here, as Wills seems to think, insofar as both science and philosophy produce explanations and those explanations must exhibit the same good-making properties that make all explanations good explanations (Mizrahi 2018a, 17; 2018b).

“You Passed the Test!”

If Wills (2018, 24) wants to suggest that philosophers should be “testing their assertions in the ways peculiar to their disciplines,” then I would agree. However, “testing” does not simply mean making assertions and then checking to see if they are true, as Wills seems to think. After all, how would one check to see if assertions about theoretical entities are true? To test a hypothesis properly, one must derive a consequence from it (plus auxiliary assumptions) that would be observed only if the hypothesis (plus the auxiliary assumptions) is true.

Observations and/or experimentation would then indicate to one whether the consequence obtains or not (Mizrahi 2012). Of course, some philosophers have been doing just that for some time now (Knobe 2007). For instance, some experimental philosophers test hypotheses about the alleged intuitiveness of philosophical ideas and responses to thought experiments (see, e.g., Kissinger-Knox et al. 2018). I welcome such empirical work in philosophy.

Contrary to what Wills (2018, 19) seems to think, then, my aim is not to antagonize philosophers. Rather, my aim is to reform philosophy. In particular, as I have suggested in my recent reply to Brown (Mizrahi 2018a, 22), I think that philosophy would benefit from adopting not only the experimental methods of the cognitive and social sciences, as experimental philosophers have done, but also the methods of data science, such as data mining and corpus analysis (see, e.g., Ashton and Mizrahi 2018a and 2018b).

Indeed, the XPhi Replicability Project recently published a report on replications of 40 experimental philosophy studies, according to which such studies “successfully replicated about 70% of the time” (Cova et al. 2018). With such a success rate, one could argue that the empirical revolution in philosophy is well under way (see also Knobe 2015). Resistance is futile!

Contact details: mmizrahi@fit.edu

References

Ashton, Z., and Mizrahi, M. “Intuition Talk is Not Methodologically Cheap: Empirically Testing the ‘Received Wisdom’ About Armchair Philosophy.” Erkenntnis 83, no. 3 (2018a): 595-612.

Ashton, Z., and Mizrahi, M. “Show Me the Argument: Empirically Testing the Armchair Philosophy Picture.” Metaphilosophy 49, no. 1-2 (2018b): 58-70.

Brown, C. M. “Some Objections to Moti Mizrahi’s ‘What’s So Bad About Scientism?’.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 42-54.

Brown, C. M. “Defending Some Objections to Moti Mizrahi’s Arguments for Weak Scientism.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 1-35.

Cova, Florian, Brent Strickland, Angela G Abatista, Aurélien Allard, James Andow, Mario Attie, James Beebe, et al. “Estimating the Reproducibility of Experimental Philosophy.” PsyArXiv, April 21, 2018. doi:10.17605/OSF.IO/SXDAH.

Erlenbusch, V. “Being a Foreigner in Philosophy: A Taxonomy.” Hypatia 33, no. 2 (2018): 307-324.

Files, J. A., Mayer, A. P., Ko, M. G., Friedrich, P., Jenkins, M., Bryan, M. J., Vegunta, S., Wittich, C. M., Lyle, M. A., Melikian, R., Duston, T., Chang, Y. H., Hayes, S. M. “Speaker Introductions at Internal Medicine Grand Rounds: Forms of Address Reveal Gender Bias.” Journal of Women’s Health 26, no. 5 (2017): 413-419.

Google. “Ngram Viewer.” Google Books Ngram Viewer. Accessed on May 21, 2018. https://books.google.com/ngrams.

JSTOR. “Create a Dataset.” JSTOR Data for Research. Accessed on May 22, 2018. https://www.jstor.org/dfr/.

Kissinger-Knox, A., Aragon, P., and Mizrahi, M. “Does Non-Moral Ignorance Exculpate? Situational Awareness and Attributions of Blame and Forgiveness.” Acta Analytica 33, no. 2 (2018): 161-179.

Knobe, J. “Experimental Philosophy.” Philosophy Compass 2, no. 1 (2007): 81-92.

Knobe, J. “Philosophers are Doing Something Different Now: Quantitative Data.” Cognition 135 (2015): 36-38.

Mizrahi, M. “Take My Advice–I Am Not Following It: Ad Hominem Arguments as Legitimate Rebuttals to Appeals to Authority.” Informal Logic 30, no. 4 (2010): 435-456.

Mizrahi, M. “Why the Ultimate Argument for Scientific Realism Ultimately Fails.” Studies in History and Philosophy of Science Part A 43, no. 1 (2012): 132-138.

Mizrahi, M. “Don’t Believe the Hype: Why Should Philosophical Theories Yield to Intuitions?” Teorema: International Journal of Philosophy 34, no. 3 (2015): 141-158.

Mizrahi, M. “What’s So Bad about Scientism?” Social Epistemology 31, no. 4 (2017a): 351-367.

Mizrahi, M. “In Defense of Weak Scientism: A Reply to Brown.” Social Epistemology Review and Reply Collective 6, no. 11 (2017b): 9-22.

Mizrahi, M. “More in Defense of Weak Scientism: Another Reply to Brown.” Social Epistemology Review and Reply Collective 7, no. 4 (2018a): 7-25.

Mizrahi, M. “The ‘Positive Argument’ for Constructive Empiricism and Inference to the Best Explanation.” Journal for General Philosophy of Science (2018b): https://doi.org/10.1007/s10838-018-9414-3.

Mizrahi, M. and Buckwalter, W. “The Role of Justification in the Ordinary Concept of Scientific Progress.” Journal for General Philosophy of Science 45, no. 1 (2014): 151-166.

Scimago Journal & Country Rank. “Subject Bubble Chart.” SJR: Scimago Journal & Country Rank. Accessed on May 20, 2018. http://www.scimagojr.com/mapgen.php?maptype=bc&country=US&y=citd.

Wills, B. “Why Mizrahi Needs to Replace Weak Scientism With an Even Weaker Scientism.” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 18-24.

Author Information: Manuel Padilla Cruz, University of Seville, mpadillacruz@us.es

Cruz, Manuel Padilla. “Conceptual Competence Injustice and Relevance Theory, A Reply to Derek Anderson.” Social Epistemology Review and Reply Collective 6, no. 12 (2017): 39-50.


The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3RS

Contestants from the 2013 Scripps National Spelling Bee. Image from Scripps National Spelling Bee, via Flickr / Creative Commons

 

Derek Anderson (2017a) has recently differentiated conceptual competence injustice and characterised it as the wrong done when, on the grounds of the vocabulary used in interaction, a person is believed not to have a sophisticated or rich conceptual repertoire. His most interesting, insightful and illuminating work led me to propose incorporating this notion into the field of linguistic pragmatics as a way of conceptualising an undesired and unexpected perlocutionary effect: the attribution of a lower level of communicative or linguistic competence. Such an attribution may be drawn from a perception of seemingly poor performance stemming from a lack of the words necessary to refer to specific elements of reality, or from misuse of the appropriate ones (Padilla Cruz 2017a).

Relying on the cognitive pragmatic framework of relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004), I also argued that such a perlocutionary effect would be an unfortunate by-product of the constant tendency to search for the optimal relevance of intentional stimuli like single utterances or longer stretches of discourse. More specifically, while aiming for maximum cognitive gain in exchange for a reasonable amount of cognitive effort, the human mind may activate or access assumptions about a language user’s linguistic or communicative performance, and feed them as implicated premises into inferential computations.

Although those assumptions might not really have been intended by the language user, they are made manifest by her[1] behaviour and may be exploited in inference, even if at the hearer’s sole responsibility and risk. Those assumptions are weak implicated premises and their interaction with other mentally stored information yields weakly implicated conclusions (Sperber and Wilson 1986/1995; Wilson and Sperber 2004). Since their content pertains to the speaker’s behaviour, they are behavioural implicatures (Jary 2013); since they negatively impact on an individual’s reputation as a language user, they turn out to be detrimental implicatures (Jary 1998).

My proposal about the benefits of the notion of conceptual competence injustice to linguistic pragmatics received an immediate reply from Anderson (2017b). He considers that the intention underlying my comment on his work was “[…] to model conceptual competence injustice within relevance theory” and points out that my proposal “[…] must be tempered with the proper understanding of that phenomenon as a structural injustice” (Anderson 2017b: 36; emphasis in the original). Furthermore, he also claims that relevance theory “[…] does not intrinsically have the resources to identify instances of conceptual competence injustice” (Anderson 2017b: 36).

In what follows, I aim to clarify two issues. Firstly, my suggestion to incorporate conceptual competence injustice into linguistic pragmatics necessarily relies on a much broader, more general and loosened understanding of this notion. Even if such an understanding deprives it of some of its essential, defining conditions –namely, the existence of different social identities and of matrices of domination– it may somehow capture the ontology of the unexpected effects that communicative performance may result in: an unfair appraisal of capacities.

Secondly, my intention when commenting on Anderson’s (2017a) work was not actually to model conceptual competence injustice within relevance theory, but to show that this pragmatic framework is well equipped to account for the cognitive processes and the reasons underlying the unfortunate negative effects that may be alluded to with the notion I am advocating. Therefore, I will argue that relevance theory does in fact have the resources to explain how some injustices stemming from communicative performance may originate. To conclude, I will elaborate on the factors that may lead to wrong ascriptions of conceptual and lexical competence.

What Is Conceptual Competence Injustice

As a sub-type of epistemic injustice (Fricker 2007), conceptual competence injustice arises in scenarios where there are privileged epistemic agents who (i) are prejudiced against members of specific social groups, identities or minorities, and (ii) exert power as a way of oppression. Such agents make “[…] false judgments of incompetence [which] function as part of a broader, reliable pattern of marginalization that systematically undermines the epistemic agency of members of an oppressed social identity” (Anderson 2017b: 36). Therefore, conceptual competence injustice is a way of denigrating individuals as knowers of specific domains of reality and ultimately disempowering, discriminating and excluding them, so it “[…] is a form of epistemic oppression […]” (Anderson 2017b: 36).

Lack or misuse of vocabulary may result in wronging if hearers conclude that certain concepts denoting specific elements of reality –objects, animals, actions, events, etc.– are not available to particular speakers or that they have erroneously mapped those concepts onto lexical items. When this happens, speakers’ conceptualising and lexical capacities could be deemed to be below alleged or actual standards. Since lexical competence is one of the pillars of communicative competence (Hymes 1972; Canale 1983; Bachman 1991; Celce-Murcia et al. 1995), that judgement could contribute to downgrading speakers in an alleged scale of communicative competence and, consequently, to regarding them as partially or fully incompetent.

According to Medina (2011), competence is a comparative and contrastive property. On the one hand, skilfulness in some domain may be compared to that in (an)other domain(s), so a person may be very skilled in areas like languages, drawing, football, etc., but not in others like mathematics, oil painting, basketball, etc. On the other hand, knowledge of and abilities in some matters may be greater or lesser than those of other individuals. Competence, moreover, may be characterised as gradual and context-dependent. Degree of competence –i.e. its depth and breadth, so to speak– normally increases because of age, maturity, personal circumstances and experience, or factors such as instruction and subsequent learning, needs, interests, motivation, etc. In turn, the way in which competence surfaces may be affected by a variety of intertwined factors (Mustajoki 2012; Padilla Cruz 2017b), which include the following.

Factors Affecting Competence in Communication

Internal –i.e. person-related– factors, which include:

Relatively stable factors, such as (i) other knowledge and abilities, regardless of their actual relatedness to a particular competence, and (ii) cognitive styles –i.e. patterns of accessing and using knowledge items, among which are concepts and words used to name them.

Relatively unstable factors, such as (i) psychological states like nervousness, concentration, absent-mindedness, emotional override, or simply experiencing feelings like happiness, sadness, depression, etc.; (ii) physiological conditions like tiredness, drowsiness, drunkenness, etc., or (iii) performance of actions necessary for physiological functions like swallowing, sipping, sneezing, etc. These may facilitate or hinder access to and usage of knowledge items including concepts and words.

External –i.e. situation-related– factors, which encompass (i) the spatio-temporal circumstances where encounters take place, and (ii) the social relations with other participants in an encounter. For instance, haste, urgency or (un)familiarity with a setting may ease or impede access to and usage of knowledge items, as may experiencing social distance and/or more or less power with respect to another individual (Brown and Levinson 1987).

While ‘social distance’ refers to (un)acquaintance with other people and (dis)similarity with them as a result of perceptions of membership to a social group, ‘power’ does not simply allude to the possibility of imposing upon others and conditioning their behaviour as a consequence of differing positions in a particular hierarchy within a specific social institution. ‘Power’ also refers to the likelihood of imposing upon other people owing to perceived or supposed expertise in a field –i.e. expert power, like that exerted by, for instance, a professor over students– or to admiration of diverse personal attributes –i.e. referent power, like that exerted by, for example, a pop idol over fans (Spencer-Oatey 1996).

There Must Be Some Misunderstanding

Conceptualising capacities, conceptual inventories and lexical competence also partake of the four features listed above: gradualness, comparativeness, contrastiveness and context-dependence. Needless to say, all three of them obviously increase as a consequence of growth and exposure to or participation in a plethora of situations and events, among which education or training are fundamental. Conceptualising capacities and lexical competence may be more or less developed or accurate than other abilities, among which are the other sub-competences upon which communicative competence depends –i.e. phonetics, morphology, syntax and pragmatics (Hymes 1972; Canale 1983; Bachman 1991; Celce-Murcia et al. 1995).

Additionally, conceptual inventories enabling lexical performance may be rather complex in some domains but not in others –e.g. a person may store many concepts and possess a rich vocabulary pertaining to, for instance, linguistics, but lack or have rudimentary ones about sports. Finally, lexical competence may appear to be higher or lower than that of other individuals under specific spatio-temporal and social circumstances, or because of the influence of the aforesaid psychological and physiological factors, or actions performed while speaking.

Apparent knowledge and usage of general or domain-specific vocabulary may be assessed and compared to those of other people, but performance may be hindered or fail to meet expectations because of the aforementioned factors. If it were considered deficient, inferior or lower than that of other individuals, such a judgement should only concern knowledge and usage of vocabulary in a specific domain, and be relative only to a particular moment, maybe under specific circumstances.

Unfortunately, people often extrapolate and (over)generalise, so they may take (seeming) lexical gaps at a particular time in a speaker’s life, or one-off, occasional or momentary lexical infelicities, to suggest or unveil more global and overarching conceptualising handicaps or lexical deficits. This not only leads people to doubt the richness and broadness of that speaker’s conceptual inventory and lexical repertoire, but also to question her conceptualising abilities and what may be labelled her conceptual accuracy –i.e. the capacity to create concepts that adequately capture nuances in elements of reality and facilitate correct reference to those elements– as well as her lexical efficiency or lexical reliability –i.e. the ability to use vocabulary appropriately.

As long as doubts are cast about the number and accuracy of the concepts available to a speaker and her ability to verbalise them, there arises an unwarranted and unfair wronging which would count as an injustice concerning that speaker’s conceptualising skills, number of concepts and expressive abilities. The loosened notion of conceptual competence injustice whose incorporation into the field of linguistic pragmatics I advocated does not necessarily presuppose a previous discrimination or prejudice negatively biasing hegemonic, privileged or empowered individuals against minorities or identities.

Wrong is done, and an epistemic injustice is therefore inflicted, when another person’s conceptual inventory, lexical repertoire and expressive skills are underestimated or negatively evaluated because of (i) perception of a communicative behaviour that is felt not to meet expectations or to be below alleged standards, (ii) tenacious adherence to those expectations or standards, and (iii) unawareness of the likely influence of various factors on performance. This wronging may nonetheless lead to subsequently downgrading that person as regards her communicative competence, discrediting her conceptual accuracy and lexical efficiency/reliability, and denigrating her as a speaker of a language, and, therefore, as an epistemic agent. Relying on all this, further discrimination on other grounds may ensue or an already existing one may be strengthened and perpetuated.

Relevance Theory and Conceptual Competence Injustice

Initially put forth in 1986, and slightly refined almost ten years later, relevance theory is a pragmatic framework that aims to explain (i) why hearers select particular interpretations out of the various possible ones that utterances may have –all of which are compatible with the linguistically encoded and communicated information– (ii) how hearers process utterances, and (iii) how and why utterances and discourse give rise to a plethora of effects (Sperber and Wilson 1986/1995). Accordingly, it concentrates on the cognitive side of communication: comprehension and the mental processes intervening in it.

Relevance theory (Sperber and Wilson 1986/1995) reacted against the so-called code model of communication, which was deeply entrenched in western linguistics. According to this model, communication merely consists of encoding thoughts or messages into utterances, and decoding these in order to arrive at speaker meaning. Since speakers cannot encode everything they intend to communicate and absolute explicitness is practically unattainable, relevance theory portrays communication as an ostensive-inferential process where speakers draw the audience’s attention by means of intentional stimuli. On some occasions these amount to direct evidence –i.e. showing– of what speakers mean, so their processing requires inference; on other occasions, intentional stimuli amount to indirect –i.e. encoded– evidence of speaker meaning, so their processing relies on decoding.

However, in most cases the stimuli produced in communication combine direct with indirect evidence, so their processing depends on both inference and decoding (Sperber and Wilson 2015). Intentional stimuli make manifest speakers’ informative intention –i.e. the intention that the audience create a mental representation of the intended message, or, in other words, a plausible interpretative hypothesis– and their communicative intention –i.e. the intention that the audience recognise that speakers do have a particular informative intention. The role of hearers, then, is to arrive at speaker meaning by means of both decoding and inference (but see below).

Relevance theory also reacted against philosopher Herbert P. Grice’s (1975) view of communication as a joint endeavour where interlocutors identify a common purpose and may abide by, disobey or flout a series of maxims pertaining to communicative behaviour –those of quantity, quality, relation and manner– which articulate the so-called cooperative principle. Although Sperber and Wilson (1986/1995) seriously question the existence of such a principle, they nevertheless rest squarely on a notion already present in Grice’s work, but which he unfortunately left undefined: relevance. This becomes the cornerstone of their framework. Relevance is claimed to be a property of intentional stimuli and characterised on the basis of two factors:

Cognitive effects, or the gains resulting from the processing of utterances: (i) strengthening of old information, (ii) contradiction and rejection of old information, and (iii) derivation of new information.

Cognitive or processing effort, which is the effort of memory to select or construct a suitable mental context for processing utterances and to carry out a series of simultaneous tasks that involve the operation of a number of mental mechanisms or modules: (i) the language module, which decodes and parses utterances; (ii) the inferential module, which relates information encoded and made manifest by utterances to already stored information; (iii) the emotion-reading module, which identifies emotional states; (iv) the mindreading module, which attributes mental states, and (v) vigilance mechanisms, which assess the reliability of informers and the believability of information (Sperber and Wilson 1986/1995; Wilson and Sperber 2004; Sperber et al. 2010).

Relevance is a scalar property that is directly proportionate to the amount of cognitive effects that an interpretation gives rise to, but inversely proportionate to the expenditure of cognitive effort required. Interpretations are relevant if they yield cognitive effects in return for the cognitive effort invested. Optimal relevance emerges when the effect-effort balance is satisfactory. If an interpretation is found to be optimally relevant, it is chosen by the hearer and thought to be the intended interpretation. Hence, optimal relevance is the property determining the selection of interpretations.
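Schematically (a gloss on the comparative characterisation above, not a formula Sperber and Wilson themselves provide, since relevance is not meant to be numerically measured):

\[ \text{relevance of an input} \;\propto\; \frac{\text{cognitive effects derived}}{\text{processing effort required}} \]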

The Power of Relevance Theory

Sperber and Wilson’s (1986/1995) ideas and claims gave rise to a whole branch of cognitive pragmatics that is now known as relevance-theoretic pragmatics. After years of intense, illuminating and fruitful work, relevance theorists have offered a plausible model of comprehension. In it, interpretative hypotheses –i.e. likely interpretations– are said to be formulated during a process of mutual parallel adjustment of the explicit and implicit content of utterances, in which the said modules and mechanisms perform a series of simultaneous, extremely fast tasks at a subconscious level (Carston 2002; Wilson and Sperber 2004).

Decoding yields only a minimally parsed chunk of concepts that is not yet fully propositional and therefore not truth-evaluable: the logical form. This form needs pragmatic or contextual enrichment by means of additional tasks in which the inferential module relies on contextual information and is sometimes constrained by the procedural meaning –i.e. processing instructions– encoded by some linguistic elements.

Those tasks include (i) disambiguation of syntactic constituents; (ii) assignment of reference to words like personal pronouns, proper names, deictics, etc.; (iii) adjustment of the conceptual content encoded by words like nouns, verbs, adjectives or adverbs, and (iv) recovery of unarticulated constituents. Completion of these tasks results in the lower-level explicature of an utterance: a truth-evaluable propositional form amounting to its explicit content. Construction of lower-level explicatures depends on both decoding and inference: the more decoding involved, the more explicit and stronger these explicatures are; conversely, the more inference needed, the less explicit and weaker they are (Wilson and Sperber 2004).

A lower-level explicature may further be embedded into a conceptual schema that captures the speaker’s attitude(s) towards the proposition expressed, her emotion(s) or feeling(s) when saying what she says, or the action that she intends or expects the hearer to perform by saying what she says. This schema is the higher-level explicature and is also part of the explicit content of an utterance.

It is sometimes built through decoding some of the elements in an utterance –e.g. attitudinal adverbs like ‘happily’ or ‘unfortunately’ (Ifantidou 1992) or performative verbs like ‘order’, ‘apologise’ or ‘thank’ (Austin 1962)– and other times through inference, emotion-reading and mindreading –as in the case of, for instance, interjections, intonation or paralanguage (Wilson and Wharton 2006; Wharton 2009, 2016) or indirect speech acts (Searle 1969; Grice 1975). As in the case of lower-level explicatures, higher-level ones may also be strong or weak depending on the amount of decoding, emotion-reading and mindreading involved in their construction.

The explicit content of utterances may additionally be related to information stored in the mind or perceptible from the environment. Those information items act as implicated premises in inferential processes. If the hearer has enough evidence that the speaker intended or expected him to resort to and use those premises in inference, they are strong, but, if he does so at his own risk and responsibility, they are weak. Interaction of the explicit content with implicated premises yields implicated conclusions. Altogether, implicated premises and implicated conclusions make up the implicit content of an utterance. Arriving at the implicit content completes mutual parallel adjustment, which is a process constantly driven by expectations of relevance, in which the more plausible, less effort-demanding and more effect-yielding possibilities are normally chosen.
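The interpretative sequence described in the preceding paragraphs can be summarised in a deliberately simplified sketch. The following Python fragment is purely illustrative: it is not an implementation of the relevance-theoretic comprehension procedure, all names in it are invented for exposition, and mutual parallel adjustment is flattened into sequential steps, whereas the theory presents these tasks as running in parallel and adjusting one another.

# Purely illustrative sketch of the interpretative stages described above.
# All names are invented for exposition; relevance theory does not specify
# any such implementation, and the parallel, mutually adjusting tasks are
# flattened here into a simple pipeline.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Interpretation:
    logical_form: str                                 # output of decoding; not yet propositional
    lower_level_explicature: str = ""                 # enriched, truth-evaluable proposition
    higher_level_explicature: str = ""                # attitude/speech-act schema embedding it
    implicated_premises: List[str] = field(default_factory=list)
    implicated_conclusions: List[str] = field(default_factory=list)

def decode(utterance: str) -> Interpretation:
    """Language module: parse the utterance into a schematic logical form."""
    return Interpretation(logical_form=f"LF[{utterance}]")

def enrich(interp: Interpretation, context: List[str]) -> None:
    """Inferential module: disambiguate, assign reference, adjust concepts and
    recover unarticulated constituents, yielding the lower-level explicature."""
    interp.lower_level_explicature = f"{interp.logical_form} enriched with: {'; '.join(context)}"

def embed_attitude(interp: Interpretation, attitude: str) -> None:
    """Mindreading/emotion-reading: embed the proposition under an attitude."""
    interp.higher_level_explicature = f"{attitude}({interp.lower_level_explicature})"

def derive_implicatures(interp: Interpretation, manifest_assumptions: List[str]) -> None:
    """Combine the explicit content with implicated premises to obtain conclusions."""
    interp.implicated_premises = list(manifest_assumptions)
    interp.implicated_conclusions = [
        f"conclusion from [{interp.lower_level_explicature}] + [{premise}]"
        for premise in manifest_assumptions
    ]

def comprehend(utterance: str, context: List[str], attitude: str,
               manifest_assumptions: List[str]) -> Interpretation:
    """The whole toy pipeline: decoding followed by inferential enrichment."""
    interpretation = decode(utterance)
    enrich(interpretation, context)
    embed_attitude(interpretation, attitude)
    derive_implicatures(interpretation, manifest_assumptions)
    return interpretation

if __name__ == "__main__":
    result = comprehend(
        utterance="Nice essay",
        context=["the essay contains several misused terms"],
        attitude="ASSERTS",
        manifest_assumptions=["the writer misuses terms she should know"],
    )
    print(result.implicated_conclusions)

Running the example combines the toy explicature with a single manifest assumption and prints one “implicated conclusion”, which is all the sketch is meant to show: implicated conclusions arise from the interaction of the enriched explicit content with whatever assumptions happen to be accessible as implicated premises, including, in the cases discussed below, detrimental ones.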

The Limits of Relevance Theory

As a model centred on comprehension and interpretation of ostensive stimuli, relevance theory (Sperber and Wilson 1986/1995) does not need to be able to identify instances of conceptual competence injustice, as Anderson (2017b) remarks, nor even instances of the negative consequences of communicative behaviour that may be alluded to by means of the broader, loosened notion of conceptual competence injustice that I argued for. Rather, as a cognitive framework, its role is to explain why and how these originate. And, certainly, its notional apparatus and the cognitive machinery it describes as intervening in comprehension can satisfactorily account for (i) the ontology of unwarranted judgements of lexical and conceptual (in)competence, (ii) their origin and (iii) some of the reasons why they are made.

Accordingly, those judgements (i) are implicated conclusions which (ii) are derived during mutual parallel adjustment as a result of (iii) accessing some manifest assumptions and using these as implicated premises in inference. Obviously, the implicated premises that yield the negative conclusions about (in)competence might not have been intended by the speaker, who would not be interested in the hearer accessing and using them. However, her communicative performance makes manifest assumptions alluding to her lexical lacunae and mistakes and these lead the hearer to draw undesired conclusions.

Relevance theory (Sperber and Wilson 1986/1995) is powerful enough to offer a cognitive explanation of these three issues, and this alone was what I aimed to show in my comment on Anderson’s (2017a) work. Two different issues, nevertheless, are (i) why certain prejudicial assumptions become manifest to an audience and (ii) why those assumptions end up being distributed across the members of certain wide social groups.

As Anderson (2017b) underlines, conceptual competence injustices must necessarily be contextualised in situations where privileged and empowered social groups are negatively biased or prejudiced against other identities and create patterns of marginalisation. Prejudice may be argued to bring to the fore a variety of negative assumptions about the members of the identities against which it is held. Using Giora’s (1997) terminology, prejudice makes certain detrimental assumptions very salient or increases their salience.

Consequently, they are amenable to being promptly accessed and easily used as implicated premises in deductions, from which negative conclusions are straightforwardly and effortlessly derived. Those premises and conclusions spread throughout the members of the prejudiced and hegemonic group because, according to Sperber’s (1996) epidemiological model of culture, they are repeatedly transmitted or made public. This is possible thanks to two types of factors (Sperber 1996: 84):

Psychological factors, such as the relative ease with which they can be stored, the existence of other knowledge with which they can interact in order to generate cognitive effects –e.g. additional negative conclusions pertaining to the members of the marginalised identity– or the existence of compelling reasons that make the individuals in the group willing to transmit them –e.g. the desire to disempower and/or marginalise the members of an unprivileged group, to exclude them from certain domains of human activity, to secure a privileged position, etc.

Ecological factors, such as the repetition of the circumstances under which those premises and conclusions result in certain actions –e.g. denigration, disempowerment, marginalisation, exclusion, etc.– the availability of storage mechanisms other than the mind –e.g. written documents– or the existence of institutions that transmit and perpetuate those premises and conclusions, thus ensuring their continuity and availability.

Since the members of the dominating biased group find those premises and conclusions useful to their purposes and interests, they constantly reproduce them and, so to speak, pass them on to the other members of the group, or even to individuals who do not belong to it. Using Sperber’s (1996) metaphor, the repeated production and internalisation of those representations resembles the contagion of illnesses. As a result, those representations end up being part of the pool of cultural representations shared by the members of the group in question or by other individuals.

The Imperative to Get Competence Correct

In social groups with an interest in denigrating and marginalising an identity, certain assumptions regarding the lexical inventories and conceptualising abilities of the epistemic agents with that identity may be very salient, or purposefully made very salient, with a view to ensuring that they are inferentially exploited as implicated premises that easily yield negative conclusions. In the case of average speakers’ lexical gaps and mistakes, assumptions concerning their performance and infelicities may also become very salient, be fed into inferential processes and result in prejudicial conclusions about their lexical and conceptual (in)competence.

Although utterance comprehension and information processing end upon completion of mutual parallel adjustment, for the informational load of utterances and the conclusions derivable from them to be added to an individual’s universe of beliefs, information must pass the filters of a series of mental mechanisms that target both informers and information itself, and check their reliability and believability, respectively. These mechanisms scrutinise various sources determining trust allocation: signs indicating certainty and trustworthiness –e.g. gestures, hesitation, nervousness, rephrasing, stuttering, eye contact, gaze direction, etc.; the appropriateness, coherence and relevance of the dispensed information; (previous) assumptions about speakers’ expertise or authoritativeness in some domain; the socially distributed reputation of informers; and emotions, prejudices and biases (Origgi 2013: 227-233).

As a result, these mechanisms trigger a cautious and sceptical attitude known as epistemic vigilance, which in some cases enables individuals to avoid blind gullibility and deception (Sperber et al. 2010). In addition, these mechanisms monitor the correctness and adequacy of the interpretative steps taken and the inferential routes followed while processing utterances and information, and check for possible flaws at any of the tasks in mutual parallel adjustment –e.g. wrong assignment of reference, supply of erroneous implicated premises, etc.– which would prevent individuals from arriving at the actually intended interpretations. Consequently, another cautious and sceptical attitude is triggered towards interpretations, which may be labelled hermeneutical vigilance (Padilla Cruz 2016).

If individuals do not perceive risks of malevolence or deception, or do not sense that they might have made interpretative mistakes, vigilance mechanisms are only weakly or moderately activated (Michaelian 2013: 46; Sperber 2013: 64). However, their level of activation may be raised so that individuals exercise external and/or internal vigilance. While the former facilitates a higher awareness of the external factors determining trust allocation –e.g. cultural norms, contextual information, biases, prejudices, etc.– the latter facilitates distancing oneself from the conclusions drawn at a particular moment, backtracking with a view to tracing their origin –i.e. the interpretative steps taken and the assumptions fed into inference– and assessing their potential consequences (Origgi 2013: 224-227).

Exercising only weak or moderate vigilance over the conclusions drawn upon perceiving lexical lacunae or mistakes may account for their unfairness and for the subsequent wronging of individuals as regards their actual conceptual and lexical competence. Unawareness of the internal and external factors that may momentarily have hindered competence and the ensuing performance may cause perceivers of lexical gaps and errors to unquestioningly trust the assumptions that their interlocutors’ allegedly poor performance makes manifest, rely on them, supply them as implicated premises, derive conclusions that do not do any justice to their interlocutors’ actual level of conceptual and lexical competence, and eventually trust their appropriateness, adequacy or accuracy.

A higher alertness to the potential influence of those factors on performance would block access to the detrimental assumptions made manifest by their interlocutors’ performance, or would make perceivers of lexical infelicities reconsider the advisability of using those assumptions in deductions. If this were actually the case, perceivers would be deploying the processing strategy labelled cautious optimism, which enables them to question the suitability of certain deductions and to make alternative ones (Sperber 1994).
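Purely by way of illustration, the gating role that weak versus raised vigilance plays in this account can be pictured as a simple filter on candidate implicated premises. The scores, the threshold and all names below are invented; none of the cited authors propose a quantitative model of epistemic or hermeneutical vigilance.

# Toy illustration: a vigilance gate on candidate implicated premises.
# Scores, threshold and names are invented; the cited authors do not
# propose any numerical model of vigilance.

from typing import Dict, List

def admitted_premises(candidates: Dict[str, float], vigilance_level: float) -> List[str]:
    """Keep only premises whose estimated warrant meets the current vigilance level:
    weak vigilance (a low threshold) lets highly salient but detrimental assumptions
    through, whereas raised vigilance blocks them or flags them for reconsideration."""
    return [premise for premise, warrant in candidates.items() if warrant >= vigilance_level]

if __name__ == "__main__":
    candidates = {
        "the speaker momentarily forgot the right word": 0.8,
        "the speaker lacks the concept altogether": 0.3,   # detrimental assumption
    }
    print(admitted_premises(candidates, vigilance_level=0.2))  # weak vigilance: both pass
    print(admitted_premises(candidates, vigilance_level=0.6))  # raised vigilance: only the charitable one

With a low threshold the detrimental assumption feeds the deduction and yields the unfair conclusion about the speaker’s competence; with a higher threshold it is withheld, which corresponds to the effect attributed above to cautious optimism.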

Conclusion

Relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004) does not need to be able to identify cases of conceptual competence injustice, but its notional apparatus and the machinery that it describes can satisfactorily account for the cognitive processes whereby conceptual competence injustices originate. In essence, prejudice and interests in denigrating members of specific identities or minorities favour the salience of certain assumptions about their incompetence, which, for a variety of psychological and ecological reasons, may already be part of the cultural knowledge of the members of prejudiced empowered groups. Those assumptions are subsequently supplied as implicated premises to deductions, which yield conclusions that undermine the reputation of the members of the identities or minorities in question. Ultimately, such conclusions may in turn be added to the cultural knowledge of the members of the biased hegemonic group.

The same process would apply to those cases wherein hearers unfairly wrong their interlocutors on the grounds of performance below alleged or expected standards, and are not vigilant enough of the factors that could have impeded it. That wronging may be alluded to by means of a somewhat loosened, broadened notion of ‘conceptual competence injustice’ which deprives it of one of its quintessential conditions: the existence of prejudice and of interests in marginalising other individuals. Inasmuch as apparently poor performance may give rise to unfortunate, unfair judgements of speakers’ overall level of competence, those judgements could count as injustices. In a nutshell, this was the reason why I advocated for the incorporation of a ‘decaffeinated’ version of Anderson’s (2017a) notion into the field of linguistic pragmatics.

Contact details: mpadillacruz@us.es

References

Anderson, Derek. “Conceptual Competence Injustice.” Social Epistemology: A Journal of Knowledge, Culture and Policy 31, no. 2 (2017a): 210-223.

Anderson, Derek. “Relevance Theory and Conceptual Competence Injustice.” Social Epistemology Review and Reply Collective 6, no. 7 (2017b): 34-39.

Austin, John L. How to Do Things with Words. Oxford: Clarendon Press, 1962.

Bachman, Lyle F. Fundamental Considerations in Language Testing. Oxford: Oxford University Press, 1990.

Brown, Penelope, and Stephen C. Levinson. Politeness: Some Universals in Language Usage. Cambridge: Cambridge University Press, 1987.

Canale, Michael. “From Communicative Competence to Communicative Language Pedagogy.” In Language and Communication, edited by Jack C. Richards and Richard W. Schmidt, 2-28. London: Longman, 1983.

Carston, Robyn. Thoughts and Utterances. The Pragmatics of Explicit Communication. Oxford: Blackwell, 2002.

Celce-Murcia, Marianne, Zoltán Dörnyei, and Sarah Thurrell. “Communicative Competence: A Pedagogically Motivated Model with Content Modifications.” Issues in Applied Linguistics 5 (1995): 5-35.

Fricker, Miranda. Epistemic Injustice. Power & the Ethics of Knowing. Oxford: Oxford University Press, 2007.

Giora, Rachel. “Understanding Figurative and Literal Language: The Graded Salience Hypothesis.” Cognitive Linguistics 8 (1997): 183-206.

Grice, Herbert P. “Logic and Conversation.” In Syntax and Semantics. Vol. 3: Speech Acts, edited by Peter Cole and Jerry L. Morgan, 41-58. New York: Academic Press, 1975.

Hymes, Dell H. “On Communicative Competence.” In Sociolinguistics. Selected Readings, edited by John B. Pride and Janet Holmes, 269-293. Baltimore: Penguin Books, 1972.

Ifantidou, Elly. “Sentential Adverbs and Relevance.” UCL Working Papers in Linguistics 4 (1992): 193-214.

Jary, Mark. “Relevance Theory and the Communication of Politeness.” Journal of Pragmatics 30 (1998): 1-19.

Jary, Mark. “Two Types of Implicature: Material and Behavioural.” Mind & Language 28, no. 5 (2013): 638-660.

Medina, José. “The Relevance of Credibility Excess in a Proportional View of Epistemic Injustice: Differential Epistemic Authority and the Social Imaginary.” Social Epistemology: A Journal of Knowledge, Culture and Policy 25, no. 1 (2011): 15-35.

Michaelian, Kourken. “The Evolution of Testimony: Receiver Vigilance, Speaker Honesty and the Reliability of Communication.” Episteme 10, no. 1 (2013): 37-59.

Mustajoki, Arto. “A Speaker-oriented Multidimensional Approach to Risks and Causes of Miscommunication.” Language and Dialogue 2, no. 2 (2012): 216-243.

Origgi, Gloria. “Epistemic Injustice and Epistemic Trust.” Social Epistemology: A Journal of Knowledge, Culture and Policy 26, no. 2 (2013): 221-235.

Padilla Cruz, Manuel. “Vigilance Mechanisms in Interpretation: Hermeneutical Vigilance.” Studia Linguistica Universitatis Iagellonicae Cracoviensis 133, no. 1 (2016): 21-29.

Padilla Cruz, Manuel. “On the Usefulness of the Notion of ‘Conceptual Competence Injustice’ to Linguistic Pragmatics.” Social Epistemology Review and Reply Collective 6, no. 4 (2017a): 12-19.

Padilla Cruz, Manuel. “Interlocutors-related and Hearer-specific Causes of Misunderstanding: Processing Strategy, Confirmation Bias and Weak Vigilance.” Research in Language 15, no. 1 (2017b): 11-36.

Searle, John. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press, 1969.

Spencer-Oatey, Helen D. “Reconsidering Power and Distance.” Journal of Pragmatics 26 (1996): 1-24.

Sperber, Dan. “Understanding Verbal Understanding.” In What Is Intelligence? edited by Jean Khalfa, 179-198. Cambridge: Cambridge University Press, 1994.

Sperber, Dan. Explaining Culture. A Naturalistic Approach. Oxford: Blackwell, 1996.

Sperber, Dan. “Speakers Are Honest because Hearers Are Vigilant. Reply to Kourken Michaelian.” Episteme 10, no. 1 (2013): 61-71.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. Oxford: Blackwell, 1986.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. 2nd edition. Oxford: Blackwell, 1995.

Sperber, Dan, and Deirdre Wilson. “Beyond Speaker’s Meaning.” Croatian Journal of Philosophy 15, no. 44 (2015): 117-149.

Sperber, Dan, Fabrice Clément, Christophe Heintz, Olivier Mascaro, Hugo Mercier, Gloria Origgi, and Deirdre Wilson. “Epistemic Vigilance.” Mind & Language 25, no. 4 (2010): 359-393.

Wharton, Tim. Pragmatics and Non-verbal Communication. Cambridge: Cambridge University Press, 2009.

Wharton, Tim. “That Bloody so-and-so Has Retired: Expressives Revisited.” Lingua 175-176 (2016): 20-35.

Wilson, Deirdre, and Dan Sperber. “Relevance Theory.” In The Handbook of Pragmatics, edited by Larry Horn and Gregory Ward, 607-632. Oxford: Blackwell, 2004.

Wilson, Deirdre, and Tim Wharton. “Relevance and Prosody.” Journal of Pragmatics 38 (2006): 1559-1579.

[1] Following a relevance-theoretic convention, reference to the speaker will be made through the feminine third person singular personal pronoun, while reference to the hearer will be made through its masculine counterpart.

Author Information: Lyudmila A. Markova, Russian Academy of Sciences, Markova.lyudmila2013@yandex.ru

Markova, Lyudmila A. 2013. “Context and Naturalism in Social Epistemology.” Social Epistemology Review and Reply Collective 2 (9): 33-35.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-XC


Two notions, context and naturalism, are the subjects of analysis of both Ilya Kasavin (“Reply to Rockmore”, 2013) and Tom Rockmore (“Kasavin on Social Epistemology and Naturalism: A Critical Reply”, 2013). I agree with many of Kasavin and Rockmore’s points, especially with those that concern the difficulties in classical (traditional) epistemology.

Context

Rockmore writes (2013):

I agree with Kasavin that context is indeed problematic. Yet I would like to resist the effort either to free cognitive claims from context or, on the contrary, to absorb the former into the latter. … The proper relationship seems to me to be a kind of constitutive tension that can never be overcome and which must be construed not in general but rather on a case-by-case basis in order to understand the weight of the particular cognitive claim (11).

Kasavin (2013) refines his understanding: “… [T]he context of science is the whole scope of its current sociality and its cultural history — a kind of independent reality accompanying science during its temporal existence” (26). “… [T]he quantity of alternatives is limited at a given moment” (28).

Author Information: Ilya Kasavin, Russian Academy of Sciences, itkasavin@gmail.com

Kasavin, Ilya. 2013. “A Further Reply to Rockmore.” Social Epistemology Review and Reply Collective 2 (5): 12-14.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-JZ


It is my pleasure to turn once more to the exchange with Tom Rockmore. I appreciate his critical remarks as they have forced me to express my position more radically.

We agree on a number of points. We both wish to avoid an overly simplistic appeal to a contextual understanding of meaning. But when Rockmore wants to make the stronger claim that context functions not only to understand meaning, but also to justify truth claims, is this really offering a stronger position? Is it reasonable to separate meaning definitively from truth claims? Don’t truth claims have meaning?

Author Information: Tom Rockmore, Duquesne University, Institute of Foreign Philosophy, Peking University, rockmore@duq.edu

Rockmore, Tom. 2013. “Further reply to Kasavin: Context, Meaning and Truth.” Social Epistemology Review and Reply Collective 2 (3): 22-24.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-Hk


In my initial response to Kasavin’s paper, I tried to clarify his position in sketching a different view of the relation between cognition and context. My objective now is to stress and to justify the difference between our two views of the relation of thought to context. According to Kasavin, we are simultaneously wholly free and wholly determined by context. I contend, on the contrary, that we are never wholly free, nor ever wholly determined by context.

In his rejoinder, Kasavin isolates three statements in my response that he maintains subtly misrepresent his position. He further clarifies his view with comments on what he calls underdetermination and the explanatory value of context, before concluding with a remark on freedom and determination in reference to our disagreement. Let us leave aside the difficulty about whether I successfully captured his position in my initial response in order to concentrate on the present version of his view. According to Kasavin, “underdetermination” means “the complexity of determination”. To elucidate this claim, he appeals to the explanatory value of context and points out that different epistemic agents working independently achieve similar results.

I take Kasavin’s central claim to be that we appeal to context to understand meaning. He rightly wishes to avoid an overly simplistic version of this point. I want to make a stronger claim, since I think that context functions not only to understand meaning but also to justify truth claims. Kasavin gives examples from literature and from mathematics in which similar backgrounds led in practice to similar results. That is certainly the case, but it does not follow that, if results in similar situations are similar, this justifies similar truth claims. I do not know how one could formulate a truth claim about the poems by Rilke, Svetaeva and Pasternak about Maria Magdalena. It is further unclear that the cognitive value of the independent discovery of non-Euclidean geometry by Gauss, Lobachevski and Bolyai depends in some way on their similar contexts. One might prefer, say, one version of non-Euclidean geometry over alternatives. But the correctness of a non-Euclidean approach to geometry depends in turn on prior views about what constitutes an appropriate approach to geometry, including current conceptions of geometrical proof, axioms, postulates, and so on.

“Underdetermination” is often taken to refer to the inability to decide on rational grounds which among several views is correct. Descartes, for instance, appeals to a form of underdetermination in his dream and demon arguments. In both cases we cannot decide on rational grounds whether we are being deceived. Quine suggests that the available evidence is insufficient to decide which belief we should hold about the facts. In his view of the indeterminacy of translation, he famously insists on the poverty of evidence in his gavagai example. In the philosophy of science, underdetermination is often thought to be problematic for scientific realism.

Kasavin, who uses the term “underdetermination” in a different way, suggests that knowledge claims depend on context for meaning. That seems correct. Yet, since meaning is not truth, they need to be distinguished. There are many theories of meaning. There are also many theories of truth. Here we do not need to decide between different theories of meaning and truth. It will be sufficient to indicate a basic way in which meaning and truth differ. A very rough way to put the point is that “meaning” refers to what the author conceivably has in mind, say in formulating a theory, whereas “truth” refers to the correctness of the cognitive claim. Thus “meaning” might imply a relationship between signs and what they stand for, but “truth” refers to the relation to the facts or reality. Hence, I am suggesting that meaning is more than simply identifying truth conditions, since what someone has in mind, hence means to say, and whether that statement is correct, or true, are not equivalent.

I agree with Kasavin that context functions to identify meaning. Yet I also believe that context functions to justify or to legitimate claims to know. If that is correct, then the truth of a truth claim could be said to be doubly dependent on context: with respect to meaning as well as to the acceptability of one claim over other possible contenders. Kasavin appears to me to be asserting a version of the familiar view that a claim to truth does not depend on but is rather independent of context. I take him to be saying that, as concerns cognitive claims, we are completely free, and that means we can in all cases, and in fact must, choose between different alternatives. On the contrary, I contend that we are not free in the precise sense that our views of what is true are not independent of but rather dependent on the context in which they are formulated.

Author Information: Ilya Kasavin, Russian Academy of Sciences, itkasavin@gmail.com

Kasavin, Ilya. 2013. “Reply to Rockmore.” Social Epistemology Review and Reply Collective 2 (2): 26-29.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-FK

Please refer to: Rockmore, Tom. 2013. “Kasavin on Social Epistemology and Naturalism: A Critical Reply.” Social Epistemology Review and Reply Collective 2 (2): 8-11.

I appreciate very much the comments Tom Rockmore provided on my paper, putting its main problem into a historical/philosophical context. I will address three claims Rockmore makes that seem not entirely correct in describing my position. Hopefully, my examination will help make the rest of my reply more transparent.

Rockmore (2013) asserts:

1. “I agree with Kasavin that context is indeed problematic” (11).

2. “Kasavin depicts philosophy as relying on science, hence as interdisciplinary” (8).

3. “ … [H]e claims that the result, or so-called discourse, is not bounded, hence is not contextual in principle” (9).

Clarifications

Replying to (1), my intention was not to problematize context as such, but to confront the oversimplified concept of context and its naïve epistemological application. For instance, the context of science is the whole scope of its current sociality and its cultural history — a kind of independent reality accompanying science during its temporal existence. It is usually conceptualized as a limited scope of socio-cultural phenomena that can be analyzed empirically by sociologists, historians, psychologists, anthropologists, etc. So, philosophically speaking, science exists in, and is essentially determined by, context. But, interdisciplinarily speaking, a part of science is always partially determined by a part of context. A philosophical view of science can hardly replace an interdisciplinary one and vice versa. They are complementary.

My rejection of (2) follows from my comments above. Philosophy does not rely on science in the sense that philosophical problems can be solved by scientific means. Philosophy does rely on science to provide empirical material for philosophical analysis and to offer a counterpart in an exchange of views. An interdisciplinary epistemology means an epistemology that takes scientific facts seriously and carries on a dialogue with science (and with other cognitive practices as well), rather than an epistemology naturalized and reduced to various concrete sciences.

Evidently I cannot accept (3) insofar as any discourse (i.e., a vivid cognitive process, a non-stop language game, or speech) is regarded only in terms of, and in interrelation to, context, understood as relatively stable cognitive results lying outside the research focus and taken for granted (e.g., presuppositions, natural attitudes, spheres of evidence). Discourse is also opposed to text. Text is a system of knowledge linguistically constituted, relatively finished and expressing, therefore, a certain intellectual culture. I use the term “discourse” to denote a process of scientific discovery as opposed to justification, or philosophical inquiry or reflexion as contrasted to a philosophical system.

Author Information: Tom Rockmore, Duquesne University, Institute of Foreign Philosophy, Peking University, rockmore@duq.edu

Rockmore, Tom. 2013. “Kasavin on Social Epistemology and Naturalism: A Critical Reply.” Social Epistemology Review and Reply Collective 2 (2): 8-11.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-EJ

Please refer to: Kasavin, Ilya. 2012. “To What Extent Could Social Epistemology Accept the Naturalistic Motto?” Social Epistemology 26 (3-4).

In his richly detailed, incisive paper, Ilya Kasavin studies the compatibility between “social epistemology” and “naturalism” in asking: can there be a naturalized form of social epistemology? His answer seems to be that a weak form of social epistemology is independent of context.

Neither “social epistemology” nor “naturalism” is a natural kind or cuts reality at the joints, as it were. Each presents a proposed solution to the cognitive problem after the decline of Kant’s transcendental maneuver. The latter approach is at least partly a reaction to Hume, or more precisely to Humean naturalism. Since Hume can be described as a naturalist, post-Kantian naturalism represents a qualified return to a form of an earlier position after Kant’s intervention in the debate. In the case of social epistemology, we are confronted with the consequences of the post-Kantian German idealist transformation of the critical philosophy in a social and historical direction, beginning as early as Fichte.