Author Information: Robin McKenna, University of Liverpool, r.j.mckenna@liverpool.ac.uk.

McKenna, Robin. “McBride on Knowledge and Justification.” Social Epistemology Review and Reply Collective 7, no. 9 (2018): 53-59.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-417

I would like to thank the editors of the Social Epistemology Review and Reply Collective for giving me the opportunity to review Mark McBride’s rich and rewarding book. To begin, I will give a—fairly high-level—overview of its contents. I will then raise some concerns and make some (mildly) critical comments.

Overview

The book is split into two parts. Part 1 concerns the issue of basic knowledge (and justification), whereas Part 2 concerns putative (necessary) conditions on knowledge (specifically, conclusive reasons, sensitivity and safety conditions). We can start with Part 1. As McBride defines it, basic knowledge is “knowledge (or justification) which is immediate, in the sense that one’s justification for the known proposition doesn’t rest on any justification for believing other propositions” (p. 1).

Two central issues in Part 1 are (i) what, exactly, is wrong with Moore’s “proof” of the external world (Chapter 1), and (ii) what, exactly, is wrong with inferences that yield “easy knowledge” (Chapters 2-3). Take these arguments, which for ease of reference I’ll call MOORE and EASY-K respectively:

MOORE:

(Visual appearance as of having hands).
1-M. I have hands.
2-M. If I have hands, an external world exists.
3-M. An external world exists.

EASY-K:

(Visual appearance as of a red table).
1-EK. The table is red.
2-EK. If the table is red, then it is not white with red lights shining on it.
3-EK. The table is not white with red lights shining on it.

It seems like a visual appearance as of having hands can give one knowledge of 1-M, and 2-M seems to be knowable a priori. But it seems wrong to hold that one can thereby come to know 3-M. (And mutatis mutandis for EASY-K and 3-EK).

I want to single out three of McBride’s claims about MOORE and EASY-K. First, it is commonly held that “dogmatist” responses to MOORE (such as Pryor 2000) are at a disadvantage with respect to “conservative” responses (such as Wright 2004). The dogmatist holds that having a visual appearance as of hands provides immediate warrant for 1-M, whereas the conservative holds that one can have warrant for 1-M only if one has a prior entitlement to accept 3-M. Thus the dogmatist seems forced to accept that warrant can “transmit” from the premises of MOORE to the conclusion, whereas the conservative can deny that warrant transmission occurs.

In Chapter 1 McBride turns this on its head. First, he argues that, while a conservative such as Crispin Wright can maintain that the premises of MOORE don’t transmit “non-evidential” warrant to the conclusion, he must allow that “evidential” warrant does transmit from the premises to the conclusion. Second, he argues that Wright cannot avail himself of what McBride (following Davies 2004) takes to be a promising diagnosis of the real problem with MOORE. According to Martin Davies, MOORE is inadequate because it is of no use in the epistemic project of settling the question whether the external world exists. But, for Wright, there can be no such project, because the proposition that the external world exists is the “cornerstone” on which all epistemic projects are built.

Second, in Chapter 3 McBride seeks to show that the dogmatist can supplement Davies’ account of the problem with Moore’s proof in order to diagnose the problem with EASY-K. According to McBride, EASY-K is problematic not just in that it is of no use in settling the question whether the table is not white with red lights shining on it, but also in that there are all sorts of ways in which one could settle this question (e.g. by investigating the lighting sources surrounding the table thoroughly).

Thus, EASY-K is problematic in a way that MOORE isn’t: while one could avail oneself of a better argument for the conclusion of EASY-K, it is harder to see what sort of argument could improve on MOORE.

Third, while Part 1 is generally sympathetic to the dogmatist position, Chapter 5 argues that the dogmatist faces a more serious problem. The reader interested in the details of the argument should consult that chapter; here, I just try to convey the gist. Say you endorse a closure principle on knowledge like this:

CLOSURE: Necessarily, if S knows p, competently deduces q from p, and thereby comes to believe q, while retaining knowledge of p throughout, then S knows q (p. 159).

It follows that, if one comes to know 1-EK (the table is red) by having an appearance as of a red table, and then competently deduces 3-EK (the table is not white with red lights shining on it) from 1-EK while retaining knowledge of 1-EK, then one knows 3-EK. But—counter-intuitively—having an appearance as of a red table can lower the credence one ought to have in 3-EK (see pp. 119-20 for the reason why).
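
To convey the gist of the reason (this is my own rough gloss, not McBride’s formulation; his argument is at pp. 119-20): let R be the proposition that one has an appearance as of a red table, and let q be 3-EK. The deceptive scenario ¬q (the table is white with red lights shining on it) itself predicts a red appearance, so the appearance confirms that scenario:

$$P(R \mid \neg q) > P(R) \;\Longrightarrow\; P(\neg q \mid R) = \frac{P(R \mid \neg q)\,P(\neg q)}{P(R)} > P(\neg q)$$

and hence P(q | R) < P(q): conditionalizing on the appearance lowers the credence one ought to have in 3-EK.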

It therefore seems that, if you are in a position to know 3-EK after having the appearance, you must have been in a position to know 3-EK prior to the appearance. So it seems like the conservative position must be right after all. In order for your appearance as of a red table to furnish knowledge that there is a red table, you must have been in a position to know that the table was not white with red lights shining on it prior to having the appearance as of a red table.

The second part of McBride’s book concerns putative (necessary) conditions on knowledge, in particular conclusive reasons (Chapter 6), sensitivity (Chapter 7) and safety (Chapter 8). McBride dedicates a chapter to each condition; the book finishes with a (brief) application of safety to legal knowledge (Chapter 9). While most epistemologists argue that either sensitivity or safety (but not both) is a (necessary) condition on knowledge, McBride provides a (qualified) defense of both.

In the case of sensitivity, this is in part because, if sensitivity were a condition on knowledge, then—as Nozick (1981) famously held—CLOSURE would be false, and so the argument against dogmatism (about knowledge) in Chapter 5 would be disarmed. Because of the centrality of sensitivity to the argument in Part 1, and because the chapters on conclusive reasons and sensitivity revolve around similar issues, I focus on sensitivity in what follows.

Here is an initial statement of sensitivity:

SENSITIVITY: S knows p only if S sensitively believes p, where S sensitively believes p just in case, were p false, S would not believe p (p. 160).

Chapter 7 (on sensitivity) is largely concerned with rebutting an objection from John Hawthorne (2004) to the effect that the sensitivity theorist must also reject these two principles:

EQUIVALENCE: If you know a priori that p and q are equivalent and you know p, then you are in a position to know q.

DISTRIBUTION: If one knows the conjunction p and q, then one is in a position to know p and to know q.

Suppose I have an appearance as of a zebra. So I know:

(1) That is a zebra.

By EQUIVALENCE I can know:

(2) That is a zebra and that is not a cleverly disguised mule.

So by DISTRIBUTION I can know:

(3) That is not a cleverly disguised mule.

But, by SENSITIVITY, while I can know (1), I can’t know (3) because, if I were looking at a cleverly disguised mule, I would still believe I was looking at a zebra. Hawthorne concludes that the sensitivity theorist must deny a range of plausible principles, not just CLOSURE.

McBride’s basic response is that, while SENSITIVITY is problematic as stated, it can be modified in such a way that the sensitivity theorist can deny EQUIVALENCE but keep DISTRIBUTION. More importantly, this rejection of EQUIVALENCE can be motivated on the grounds that initially motivate SENSITIVITY. Put roughly, the idea is that simple conjunctions like (4) already cause problems for SENSITIVITY:

(4) I have a headache and I have all my limbs.

Imagine you form the belief in (4) purely from your evidence of having a headache (and don’t worry about how this might be possible). While you clearly don’t know (4), your belief does satisfy SENSITIVITY, because, if (4) were false, you wouldn’t still believe it (if you didn’t have a headache, you wouldn’t believe you did, and so you wouldn’t believe (4)).
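
Put in possible-worlds terms (a sketch in my own notation, using the standard closest-world reading of the counterfactual): let h = I have a headache, l = I have all my limbs, and write Bφ for “you believe φ”. SENSITIVITY requires:

$$\neg(h \wedge l) \;\Box\!\!\rightarrow\; \neg B(h \wedge l)$$

Since losing a limb is a far more remote possibility than lacking a headache, the closest ¬(h ∧ l)-worlds are ¬h-worlds. In those worlds you lack the headache, so ¬Bh; and because your belief in the conjunction rested entirely on the headache, ¬B(h ∧ l). The counterfactual comes out true, so the belief counts as sensitive despite plainly falling short of knowledge.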

The underlying problem is that SENSITIVITY tells you to go to the nearest possible world in which the relevant belief is false and asks what you believe there. But a conjunctive belief is false so long as one of its conjuncts is false, and it might be that one conjunct is false in a nearby possible world while the other is false only in a more distant one. So the sensitivity theorist needs to restrict SENSITIVITY to atomic propositions and add a new condition for conjunctive propositions:

SENSITIVITY*: If p is a conjunctive proposition, S knows p only if S believes each of the conjuncts of p sensitively (p. 167).

If we make this modification, the sensitivity theorist now has an independent reason to reject EQUIVALENCE, but is free to accept DISTRIBUTION.

Critical Discussion

While the foregoing only touches on the wealth of topics discussed in McBride’s book, I will now move on to the critical discussion. I will start by registering two general issues about the book. I will then develop two criticisms at a little more length, one for each part of the book.

First, while the book makes compelling reading for those already versed in the literatures on transmission failure, easy knowledge and modal conditions on knowledge, the central problematics are rarely motivated at any length. Moreover, while McBride does draw numerous (substantive) connections between the chapters, the book lacks a unifying thesis. All this is to say: this is maybe more of a book for the expert than the novice. But the expert will find a wealth of interesting material to chew over.

Second, readers of the Collective might find the individualism of McBride’s approach striking. McBride is almost exclusively concerned with the epistemic statuses of individuals’ beliefs, where those beliefs are formed through simple processes like perception and logical inference. The one part of the book that does gesture in a more social direction (McBride’s discussion of epistemic projects, and the dialectical contexts in which they are carried out) is suggestive, but isn’t developed in much detail.

Turning now to more substantive criticisms, in Part 1 McBride leans heavily on Davies’ solution to the problem with MOORE. I want to make two comments here. First, it is natural to interpret Davies’ solution as an inchoate form of contextualism (DeRose 1995; Lewis 1996): whether MOORE (and EASY-K?) transmits warrant to its conclusion depends on the context in which one runs the inference, in particular, the project in which one is engaged.

This raises a host of questions. For example: does McBride hold that, if we keep the context (project) fixed, no transmission failure occurs? That is: if we’re working with the (easier) project of deciding what to believe, does an instance of MOORE transmit warrant from premises to conclusion? If so, then if we’re working with the (harder) project of settling the question, does an instance of MOORE fail to transmit warrant? (This would fit with the more general contextualist line in response to the skeptical problem, so this is only a request for clarification).

Second, and more importantly, we need to distinguish between the project of fully settling the question whether p and the project of partially settling the question whether p. Let’s grant McBride (and Davies) that someone who runs through an instance of MOORE has not fully settled the question whether there is an external world. But why think that—at least by the dogmatist’s lights—they haven’t partially settled the question? If dogmatism is true, then having the appearance as of a hand provides immediate warrant for believing that one has a hand, and so, via MOORE, for believing that there is an external world.

McBride (like many others) finds this conclusion unpalatable, and he invokes the distinction between the project of deciding what to believe and the project of settling the question in order to avoid it. But this distinction is overly simplistic. We can settle questions for different purposes, and with different degrees of stability (cf. “the matter is settled for all practical purposes”). The dogmatist seems forced to allow that MOORE is perfectly good for settling the question of whether there is an external world for a range of projects, not just one.

(I have a parallel worry about the solution to the problem of easy knowledge. Let’s grant McBride that one problem with EASY-K is that there are far better ways of trying to establish that the table is not white but bathed in red light. But why think that—at least by the dogmatist’s lights—it isn’t a way of trying to establish this? To point out that there are better ways of establishing a conclusion is not yet to show that this particular way is no way at all of establishing the conclusion).

Finally, in his response to Hawthorne’s objection to the sensitivity theorist, McBride is at pains to show that his modification of SENSITIVITY isn’t ad hoc. To my mind, he does an excellent job of showing that the sensitivity theorist should reject EQUIVALENCE for reasons entirely independent of Hawthorne’s objection.

This suggests (at least to me) that the problem is not one of ad hocness, but rather that sensitivity theorists are forced to endorse a wide range of what Keith DeRose (1995) calls “abominable conjunctions” (cf. “I know that I have hands, but I don’t know that I’m not a handless brain in a vat”). DeRose’s own response to this problem is to embed something like SENSITIVITY in a contextualist theory of knowledge attributions. DeRose proposes the following “rule”:

Rule of Sensitivity: When it’s asserted that S knows (or doesn’t know) p, then, if necessary, enlarge the sphere of epistemically relevant worlds so that it at least includes the closest worlds in which p is false (cf. 1995, 37).

His idea is that, when the question of whether S knows p becomes a topic of conversation, we expand the range of worlds in which S’s belief must be sensitive. Imagine I assert “I know that I have hands”. In order for this assertion to be true, it must be the case that, if I didn’t have hands, I wouldn’t believe that I did.

But now imagine I assert “I know that I’m not a handless brain in a vat”. In order for this new assertion to be true, it must be the case that, if I were a handless brain in a vat, I wouldn’t believe that I wasn’t. Plausibly, this will not be the case, so I can’t truly assert “I know that I’m not a handless brain in a vat”. But no abominable conjunction results, because I can no longer truly assert “I know that I have hands” either.
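
Semi-formally (my reconstruction, not DeRose’s own notation): let R_C be the sphere of epistemically relevant worlds fixed by conversational context C, and suppose “S knows p” is true in C only if S’s belief as to whether p matches the truth in every world in R_C. The Rule of Sensitivity then says that asserting “S knows (or doesn’t know) p” shifts C to a context C′ such that

$$R_{C'} = R_C \cup \{\text{the closest } \neg p\text{-worlds}\}$$

Asserting “I know that I’m not a handless brain in a vat” thus pulls vat-worlds into the sphere; in those worlds I still believe I have hands, so in C′ neither “I know that I have hands” nor “I know that I’m not a handless brain in a vat” can be truly asserted, and no abominable conjunction arises.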

My suggestion is that, if McBride were to adopt DeRose’s contextualist machinery, he would not only have a way of responding to the problem of abominable conjunctions, but also an interesting modification to DeRose’s “rule of sensitivity”.

For note that DeRose’s rule seems subject to the same problem McBride sees with SENSITIVITY: when I assert “I know that I have a headache and I have all my limbs”, we only need to expand the range of worlds to include worlds in which I don’t have a headache, and so my assertion will remain true in the updated context created by my assertion. Further, adopting this suggestion would furnish another link between Part 1 and Part 2: solving the problem of basic knowledge and formulating a satisfactory sensitivity condition both require adopting a contextualist theory of knowledge attributions.

Contact details: r.j.mckenna@liverpool.ac.uk

References

Davies, Martin. 2004. ‘Epistemic Entitlement, Warrant Transmission and Easy Knowledge’. Aristotelian Society Supplementary Volume 78 (1): 213–245.

DeRose, Keith. 1995. ‘Solving the Skeptical Problem’. Philosophical Review 104 (1): 1–52.

Hawthorne, John. 2004. Knowledge and Lotteries. Oxford: Oxford University Press.

Lewis, David. 1996. ‘Elusive Knowledge’. Australasian Journal of Philosophy 74 (4): 549–67.

Nozick, Robert. 1981. Philosophical Explanations. Cambridge, MA: Harvard University Press.

Pryor, James. 2000. ‘The Skeptic and the Dogmatist’. Noûs 34 (4): 517–549.

Wright, Crispin. 2004. ‘Warrant for Nothing (and Foundations for Free)?’ Aristotelian Society Supplementary Volume 78 (1): 167–212.

Author Information: Amanda Bryant, Trent University, amandabryant@trentu.ca

Bryant, Amanda. “Each Kuhn Mutually Incommensurable.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 1-7.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3XM

This volume is divided into four parts, in which its contributors variously Question, Defend, Revise, or Abandon the Kuhnian image of science. One immediately wonders: what is this thing, the Kuhnian Image of Science? It isn’t a question that can be decisively or quickly settled, of course. Perhaps one of the reasons why so much has been written on Kuhn’s philosophy of science is that it gives rise to such rich interpretive challenges.

Informed general philosophy of science readers will of course know the tagline version of Kuhn’s view — namely, that the development of science unfolds in wholesale revolutions of scientific paradigms that are in some sense incommensurable with one another. However, one might think that whatever the image of science at issue in this volume is, it should be a sharper image than that.

Many Thomases Kuhn

But of course there isn’t really a single, substantive, cohesive, uncontroversial image at issue. Alexandra Argamakova rightly points out in her contribution, “there exist various images of science belonging to different Thomas Kuhns at different stages of his work life and from different perspectives of interpretation, so the target for current analysis turns out to be less detectable” (46). Rather, the contributors touch on various aspects of Kuhn’s philosophy, variously interpreted — and as such, multiple Kuhnian images emerge as the volume unfolds. That’s just as it should be. In fact, if the volume had propped up some caricature of Kuhn’s views as the Kuhnian image of science, it would have done a disservice both to Kuhn and to his many interpreters.

One wonders, too, whether the so-called Kuhnian image of science is really so broadly endorsed as to be the potential subject of (echoing Kuhn’s own phrase) a ‘decisive transformation’. In his introduction, Moti Mizrahi emphasizes Kuhn’s undeniable influence. Kuhn has, Mizrahi points out, literally tens of thousands of citations; numerous books, articles, and journal issues devoted to his work; and a lasting legacy in the language of academic and public discourse. While all of this signals influence, it’s clearly no indication of agreement.

To be fair, Mizrahi acknowledges the “fair share” of Kuhn critics (2). Nevertheless, if the decisive transformation of the Kuhnian image of science were to be a serious prospect, then the image would have to be widely accepted and enjoy a lasting relevance. However, Argamakova again rightly emphasizes that Kuhn’s philosophy of science “never fully captured the intellectual market” (45) and “could not be less attractive for so many minds!” (47). Moreover, in a remarkable passage in his contribution, Howard Sankey describes a central component of the so-called Kuhnian image of science as an old battlefield and a dead issue:

Returning to the topic from the perspective of the contemporary scene in the philosophy of science is like visiting a battlefield from a forgotten war. The positions of the warring sides may still be made out. But the battlefield is grown over with grass. One may find evidence of the fighting that once took place, perhaps bullet marks or shell holes. But the fighting ceased long ago. The battle is a thing of the past.

The problem of incommensurability is no longer a live issue. The present chapter has taken the form of a post-mortem examination of a once hotly debated but now largely forgotten problem from an earlier period in the philosophy of science. (87)

If the same holds true for the rest of the Kuhnian image (or images), then the volume isn’t exactly timely.

But dead philosophical issues don’t always stay dead. Or rather, we’re not always right to pronounce them dead. In 1984, Arthur Fine famously proclaimed scientific realism “well and truly dead” (in The Natural Ontological Attitude), and clearly he was quite wrong. At any rate, we may find interest in an issue, dead or not, and there is certainly much of it to be found in this volume. I have been asked to focus my comments on the second half of the book. As such, I will discuss the Introduction, as well as Parts I and II in brief, then I will discuss Parts III and IV at greater length.

On the Incommensurable

In his Introduction, Mizrahi argues that, far from initiating a historical turn in the philosophy of science, Kuhn was ‘patient zero’ for anecdotiasis — “the tendency to use cherry-picked anecdotes or case studies… to support general claims (about the nature of science as a whole)” (3). Mizrahi argues that anecdotiasis is pervasive, since significant proportions of articles in the PhilSci-Archive and in leading philosophy of science journals contain the phrase ‘case study’.

But neither using the phrase ‘case study’ nor doing case studies is inherently or self-evidently problematic. Case studies can be interesting, informative, and evidential. Of course the challenges are not to ignore relevant problem cases, not to generalize hastily, and not to assign undue evidential weight to them. But if we are to suppose that all or most philosophers of science who use case studies fail to meet those challenges, we will need a substantial body of evidence.

Part I begins with Mizrahi’s contribution, which the successive contributions all engage. In it, he defines taxonomic incommensurability as conceptual incompatibility between new and old theories. Against those who claim that Kuhn ‘discovered’ incommensurability, Mizrahi argues that there are no good deductive or inductive arguments for taxonomic incommensurability. He cites just two authors, Eric Oberheim and Paul Hoyningen-Huene, who use the language of discovery to characterize incommensurability. As such, it isn’t clear that the assumption Mizrahi takes pains to reject is particularly widespread.

Nevertheless, even if everyone universally agreed that there are no legitimate cases of incommensurability, it would still be useful to know why they’d be justified in so thinking. So the work that Mizrahi does to establish his conclusion is valuable. He shows the dubious sorts of assumptions that arguments for the taxonomic incommensurability thesis would hang on.

Argamakova’s helpful and clear contribution lays out three general types of critique with respect to Kuhn’s view of scientific development — ambiguity, inaccuracy, and limitation — and raises, if tentatively, concerns about Kuhn’s universalist ambitions. She might have been more explicit with respect to the force and scope of her comments on universalism — in particular, whether she sees the flaws in Kuhn’s theory as ultimately stemming from his attempts at universal generalizations, and to what extent her concerns extend beyond Kuhn to general philosophy of science.

Seungbae Park advances several arguments in response to Kuhn’s incommensurability thesis. One such argument takes up Kuhn’s analogy in The Structure of Scientific Revolutions (henceforth Structure) between the development of science and the evolution of organisms. Park suggests that in drawing the analogy, Kuhn illicitly assumes the truth of evolutionary theory. He doesn’t consider that Kuhn could adopt the language of a paradigm (for the purposes of drawing an analogy, no less!) without committing to the literal truth of that paradigm.

Park also claims that “it is self-defeating for Kuhn to invoke a scientific theory to give an account of science that discredits scientific claims” (66), when it’s not clear that the analogy is at all integral to Kuhn’s account. Kuhn could, for instance, have ascribed the same characteristics to theory change without referring to evolutionary theory at all.

Sankey’s illuminating contribution fills in the interpretive background on incommensurability — the semantic version of Kuhn’s incommensurability thesis, in particular. He objects, with Mizrahi, to the language of discovery used by Oberheim and Hoyningen-Huene with respect to incommensurability. He argues, convincingly, that the purported paradigm shift that allowed Kuhn to finally comprehend Aristotle’s physics isn’t a case of incommensurability, but rather of comprehension after an initial failure to understand. While this doesn’t establish his conclusion that no cases of incommensurability have been established (76), it does show that a historically significant purported case is not genuine.

Vasso Kindi fills in some historical detail regarding the positivist image of science that Kuhn sought to replace and the “stereotypical” image attributed to him (96). She argues that Kuhn’s critics (including by implication several of her co-contributors) frequently attack a strawman — that, notwithstanding Kuhn’s avowed deference to history, the Kuhnian image of science is not meant to be a historical representation, and so doesn’t need to be supported by historical evidence. It is, rather, a “philosophical model that was used to challenge an ideal image of science” (95).

Finally, Lydia Patton emphasizes the practical dimension of Kuhn’s conception of paradigms in Structure. It ought to be uncontroversial that on Kuhn’s early characterization a paradigm is not merely a theory, but a series of epistemic, evaluative, and methodological practices, too. But Patton argues that there has been too strong a semantic tendency in the treatment of Kuhnian paradigms (including by the later Kuhn himself). She argues for the greater interest and value of a practical lens on Kuhn’s project for the purposes of understanding and explaining science.

Vectors of Glory

Andrew Aberdein’s contribution deals with the longstanding and intriguing question of whether there are revolutions in mathematics. He imports to that discussion distinctions he drew in previous work among so-called glorious, inglorious, and paraglorious revolutions, in which, respectively, key components of the theory are preserved, lost, or preserved with new additions. Key components are, he says, “at least all components without which the theory could not be articulated” (136).

He discusses several examples of key shifts in mathematical theory and practice that putatively exemplify certain of these classes of revolution. The strength of the paper is its fascinating examples, particularly the example of Inter-Universal Teichmüller theory, which, Aberdein explains, introduces such novel techniques and concepts that some leading mathematicians say its proofs read as if they were “from the future, or from outer space” (145).

Aberdein doesn’t falsely advertise his thesis. He acknowledges that “it is not easy to determine whether a given episode is revolutionary” (140), and claims only that certain shifts “may be understood” as revolutionary (149) — that the cases he offers are putative mathematical revolutions. As to how we should go about identifying putative mathematical revolutions, Aberdein suggests we look directly for conceptual shifts (or ‘sorites-like’ sequences of shifts) in which key components have been lost or gained.

A fuller discussion of these diagnostics is needed, since the judgment of whether there are revolutions (genuine or putative) in mathematics will hang largely on diagnostics such as these. Is any key conceptual shift sufficient? If so, have we really captured the spirit of Kuhn’s view, given that Kuhn seems to ascribe a certain momentousness to revolutions? If the conceptual shift has to be substantial, how substantial, and how should we gauge its substantiality? Without some principled, non-arbitrary, and non-question-begging standards for what counts as a revolution, we cannot hope to give a serious answer to the question of whether there are, even putatively, revolutions in mathematics.

The paper would also have benefited from a more explicit discussion of what a mathematical paradigm is in the first place, especially as compared to a scientific one. We can infer from Aberdein’s examples that conceptions of number, ratio, proportion, as well as systems of conjecture and mathematical techniques belong to mathematical paradigms — but explicit comment on this would have been beneficial.

Moreover, Aberdein sees an affinity between mathematics and science, commenting toward the end of the paper that the methodology of mathematics is not so different from that of science, and that “the story we tell about revolutions [should] hold for both science and mathematics” (149). These are loaded comments needing further elaboration.

The Evolution of Thomas Kuhn

In his contribution, James Marcum argues that Kuhn’s later evolutionary view is more relevant to current philosophy of science (being ‘pluralistic and perspectival’) than his earlier revolutionary one. On Kuhn’s later evolutionary view, Marcum explains, scientific change proceeds via “smaller evolutionary specialization or speciation” (155), with a “gradual emergence of a specialty’s practice and knowledge” (159). On this view, scientific development consists in “small incremental changes of belief” rather than “the upheaval of world-shattering revolutions” (159).

Marcum uses the emergence of bacteriology, virology, and retrovirology to illustrate the strengths and weaknesses of Kuhn’s evolutionary view. Its main strength, he says, is that it illuminates the development of and relationships among these sorts of scientific specialties; its weakness is that it ascribes a single tempo — Darwinian gradualism — and a single mode — speciation — to the evolution of science. Marcum adopts George Gaylord Simpson’s “richer and more textured approach” (165), which distinguishes several tempos and modes. Since these refinements better enable Kuhn’s view to handle a range of cases, they are certainly valuable.

According to Marcum, current philosophy of science is ‘pluralistic and perspectival’ in its recognition that different sciences face different philosophical issues and in its inclusion of perspectives from outside the logico-analytic tradition, such as continental, pragmatist, and feminist perspectives (166). Marcum seems right to characterize current philosophy of science as pluralistic, given the move away from general philosophy of science to more specialized branches.

If this pluralism is to be embraced, one might wonder what role (if any) remains for general philosophy of science. Marcum makes the interesting suggestion that a general image of science, like Kuhn’s evolutionary image, while respecting our contemporary pluralistic stance, can at the same time offer “a type of unity among the sciences, not in terms of reducing them to one science, but rather with respect to mapping the conceptual relationships among them” (169).

One of Marcum’s central aims is to show that incommensurability plays a key explanatory role in a refined version of Kuhn’s evolutionary image of science. The role of incommensurability on this view is to account for scientific speciation. However, Marcum shows only that we can characterize scientific speciation in terms of incommensurability, without clearly establishing the explanatory payoff of so doing. He does not succeed in showing that incommensurability has a particularly enriching explanatory role, much less that incommensurability is “critical for conceptual evolution within the sciences” or “an essential component of… the growth of science” (168).

All a Metaphor?

Barbara Gabriella Renzi and Giulio Napolitano frame their contribution with a discussion of competing accounts of the nature and role of metaphor. They avow the commonly accepted view that metaphors are not merely linguistic, but cognitive, and that they are ubiquitous. They claim, I would think uncontroversially, that metaphors shape how individuals approach and reason about complex issues. They also discuss historical empiricist attitudes toward metaphor, competing views on the role of models and metaphor in science, and later, the potential role of metaphor in social domination.

Renzi and Napolitano also address Kuhn’s use of the metaphor of Darwinian evolution to characterize scientific change. They suggest that an apter metaphor for scientific change can be made of the obsolete orthogenetic hypothesis, according to which “variations are not random but directed by forces regulated and ultimately directed by the internal constitution of the organism, which responds to environmental stimuli” (184).

The orthogenetic metaphor is a better fit for scientific change, they argue, because the emergence of new ideas in science is not random, but driven by “arguments and debates… specific needs of a scientist or group of scientists who have been seeking a solution to a problem” (184).

The orthogenetic metaphor effectively highlights a drawback of the Darwinian metaphor that might otherwise be overlooked, and deserves further attention. The space devoted to discussing metaphor in the abstract contributes little to the paper, beyond prescriptions to take metaphor seriously and approach it with caution. Much of that space would have been better devoted to using historical examples to compare Kuhn’s Darwinian metaphor to the proposed orthogenetic alternative, to make concrete the fruitfulness of the latter, and to flesh out the specific kinds of internal and external pressures that Renzi and Napolitano see as important drivers of scientific change.

Methodological Contextualism

Darrell Rowbottom offers a summary and several criticisms of what he sees as Kuhn’s early-middle period image of science. By way of criticism, he points out that it isn’t clear how to individuate disciplinary matrices in a way that preserves a clear distinction between normal and extraordinary science, or ensures that what Kuhn calls ‘normal science’ is really the norm. Moreover, in linking the descriptive and normative components of his view, Kuhn implausibly assumes that mature science is optimal.

Rowbottom suggests a replacement image of science he calls methodological contextualism (developed more fully in previous work). Methodological contextualism identifies several roles — puzzle-solving, critical, and imaginative — which scientific practitioners fulfill to varying degrees and in varying combinations. The ideal balance of these roles depends on contextual factors, including the scientists available and the state of science (200).

The novel question Rowbottom considers in this paper is: how could piecemeal change in science be rational from the perspective of methodological contextualism? I have difficulty seeing why this is even a prima facie problem for Rowbottom’s view, since puzzle-solving, critical and imaginative activities are clearly consonant with piecemeal change. I suppose it is because the view retains some of Kuhn’s machinery, including his notion of a disciplinary matrix.

At any rate, Rowbottom suggests that scientists may work within a partial disciplinary matrix, or a set of partially overlapping ones. He also makes the intriguing claim that “scientists might allow inconsistency at the global level, and even welcome it as a better alternative than a consistent system with less puzzle-solving power” (202). One might object that Kuhn’s incommensurability thesis seems to block the overlapping matrix move, but Rowbottom proclaims that the falsity of Kuhn’s incommensurability thesis follows “as a consequence of the way that piecemeal change can occur” (201). One person’s modus ponens is another’s modus tollens, as they say.

A Digestible Kuhn

The brevity of the contributions makes them eminently digestible and good potential additions to course syllabi at a range of levels; on the other hand, it means that some of the most provocative and topical themes of the book — such as the epistemic and methodological status of generalizations about science and the role of general philosophy of science in contemporary philosophy — don’t get the full development they deserve. The volume raises more questions than it satisfactorily addresses, but several of them bring renewed relevance and freshness to Kuhnian philosophy of science and ought to direct its future course.

Contact details: amandabryant@trentu.ca

References

Mizrahi, Moti (Ed.) The Kuhnian Image of Science: Time for a Decisive Transformation? Lanham, MD: Rowman & Littlefield, 2018.

Author Information: Robert Piercey, Campion College at the University of Regina, robert.piercey@uregina.ca

Piercey, Robert. “Faraway, So Close: Further Thoughts on Kanonbildung.” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 33-38.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Xg

I’d like to thank Maxim Demin and Alexei Kouprianov for their probing study of Kanonbildung in 19th century Germany. As I understand it, the study has two goals. The first is substantive: to gather and present facts about how a particular philosophical canon emerged in 19th century Germany. The other is methodological: “to develop formalised methods of studying Kanonbildung as a process,” methods which “may turn out to be useful beyond the original scope of our project, in a wide range of possible studies in intellectual history and mechanics of cultural memory formation” (113).

It’s this second goal that I find particularly interesting. So in what follows, I won’t quarrel with the substantive conclusions Demin and Kouprianov draw about the formation of the 19th century German philosophical canon—in part because their conclusions strike me as plausible, and in part because I lack the expertise to challenge their findings. Instead, I’d like to reflect broadly on the methods they use to study Kanonbildung, especially the notion of distant reading which they borrow from Franco Moretti (113). More specifically, I’d like to raise some questions about whether, how, and to what extent their strategy of distant reading must be supplemented by a form of close reading: namely, a form that treats histories of philosophy as literary artifacts whose contents are to be studied by many of the same techniques brought to bear on fictional narratives.

I raise these questions as a philosopher interested in the philosophy of history and in the intersections between philosophy and literature. To be clear, I don’t reject the methods developed by Demin and Kouprianov. On the contrary, I suspect that distant reading has an important role to play in the history of philosophy in general, and in the study of canon formation in particular. But I’d like to suggest that this method becomes more useful when it is supplemented by others—as well as to raise some questions about what this supplementing might look like.

Canon: An Institution of Thought

Let me start by highlighting what I take to be the key points of Demin and Kouprianov’s analysis. They describe themselves as contributing to an institutional history of philosophy: that is, a history that downplays the “conceptual reconstruction” of past views in favour of a “study of practices” (113). The practices that interest them most are the “implicit rules and patterns” (113, emphasis added) that shape philosophers’ understandings of what their activity is and how it should proceed—practices typically not noticed by philosophers themselves. And the epoch that interests them is the 19th century, since it was during this period “that the history of philosophy began its transformation from a generalised body of knowledge into an academic discipline” (112).

A crucial part of this transformation is the development of philosophical canons. Demin and Kouprianov say relatively little about what they think canons are. Very roughly, I take them to be groups of thinkers who are seen as representing the highest and most important achievements of philosophy as a practice, thinkers with whom one should be familiar if one wishes to understand or contribute to philosophy at all.

Furthermore, a canon consists of not just a list of thinkers, but some sort of ranking, some sense—perhaps not fully explicit—of each thinker’s relative importance. In the canon Demin and Kouprianov study, for instance, philosophers are variously described as “primary,” “secondary,” or “tertiary” (116). Understood in this way, canons perform several important functions. They perform sociological functions of “indoctrination and identity formation” (113). By the end of the 19th century in Germany, a familiarity with Kant, Hegel, and others had come to shape philosophers’ understandings of their enterprise to such an extent that it was probably a necessary condition of being considered a philosopher at all.

Canons presumably perform other functions as well—for instance, inspiring philosophers by providing “mountain peaks to look up towards,” in Richard Rorty’s phrase.[1] Canons can change dramatically over time. So if one wants to understand a particular period in the history of philosophy well, it is important to know not just which figures it considered canonical, but how and when its particular canon was formed. That is what Demin and Kouprianov set out to discover about 19th century Germany.

What Is Distant Reading?

As mentioned above, the methods they use to do so go by the name of distant reading. This term was coined by Franco Moretti to designate a particular way of studying literary texts. It is to be opposed to close reading, which privileges the contents of particular texts and engages in “the analysis of ideas and the reconstruction of conceptual schemata” (113). Distant reading focuses instead on the practices “standing behind” these texts, using “formal analytic methods” to uncover “objective characteristics of large amounts of digitised texts” (113).

I take it that the authors see distant reading not as intrinsically superior to all other approaches, but as a way of correcting an imbalance. Their suggestion seems to be that the study of the history of philosophy heretofore has been so dominated by close reading that it has overlooked “implicit rules and patterns” (113). Distant reading nudges the pendulum in the other direction by encouraging historians to pay “closer attention” (113, emphasis added) to previously overlooked practices.

With this goal in mind, Demin and Kouprianov examine a large number of 19th century German works in the history of philosophy, constructing a data set that reveals how often particular philosophers were mentioned and at what length they were discussed. Examining “845 [table of contents] entries for 151 philosophers’ names,” they compile data about the “number of pages devoted to each philosopher” in these works, the “share of the 19th century section devoted to him,” and the “start and end pages of the paragraph and those of the 19th century section” (114).

The result is a very precise snapshot of how much discussion was devoted to certain philosophers at various points in the 19th century—one that allows us to trace the ways in which interest in these figures increased, peaked, and in some cases declined as the century unfolded. It lets us see precisely how and when certain figures came to be seen as more canonical than others.

This approach bears several sorts of fruit. One—in keeping with the authors’ second, methodological goal—is that it spurs the invention of new concepts helpful for making sense of the data. The undertheorized concept of a “philosophical bestseller” (115), for instance, announces itself as important, and can be defined quite precisely as a work published three times or more. Likewise, their approach allows Demin and Kouprianov to develop precise markers of the perceived greatness of philosophers, in terms of “the frequency that a particular name appears across tables of contents” (117). A primary thinker, for instance, can be defined as one “mentioned in more than 80% of treatises” (117).
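
To make these markers concrete, here is a minimal sketch of how the “primary thinker” threshold could be computed (illustrative only: both the data and the code are made up by me and are not the authors’ actual dataset or pipeline):

```python
from collections import Counter

# Hypothetical corpus: each treatise is represented by the set of
# philosophers named in its table of contents.
treatises = [
    {"Kant", "Fichte", "Schelling", "Hegel", "Herbart"},
    {"Kant", "Hegel", "Schopenhauer"},
    {"Kant", "Fichte", "Hegel", "Fries"},
    {"Kant", "Hegel", "Schelling"},
]

# Count how many tables of contents mention each philosopher.
mentions = Counter(name for toc in treatises for name in toc)
total = len(treatises)

# Demin and Kouprianov's marker: a "primary" thinker is one
# mentioned in more than 80% of treatises.
primary = {name for name, count in mentions.items() if count / total > 0.8}
print(primary)  # for this toy data: {'Kant', 'Hegel'}
```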

Other gains are substantive. We learn that the reputations of Kant, Fichte, Schelling, and Hegel were cemented between 1831 and 1855, as the rate at which they were mentioned outpaced that of other thinkers. And we learn that a common view of Schopenhauer—that he was underappreciated in his lifetime and scorned by the philosophical establishment—is false, “with his views being included in three textbooks by 1855” (118). These are important discoveries, and they demonstrate the value of the authors’ strategy of distant reading.

Shifting Fortunes of Fame

Of course, as Demin and Kouprianov acknowledge, “presence in the canonic history does not tell us much about the part a philosopher played within it” (119). In order to bring this dimension into view, they use several additional techniques. The one I find most intriguing is their examination of where certain philosophers appear in various histories of philosophy, and more specifically, their study of how often various philosophers appear at the end of a history.

The authors focus on three philosophers—Herbart, Schleiermacher, and Fries—who are often discussed in conjunction with Hegel. Then they see how often the figures in question are discussed before Hegel, and how often they are discussed after. “This relative position,” they explain, “is an indirect but a most meaningful criterion which allows to assess the degree of perceived recency and relevancy of a given philosopher. The closer a philosopher stays to the end of the list, the more ‘recent’ and ‘relevant’ to the current debate he is” (123).

This view seems plausible, and in the authors’ hands, it sheds important new light on how these four thinkers were viewed at various points in the 19th century. But we should note that it makes a crucial assumption. In order to move from the premise that a history discusses a given philosopher last to the conclusion that it sees him as most relevant to current debates, we must assume that it tells a particular kind of story: roughly speaking, a progressive story.

We must assume that the historian has organized her data in a very particular way, with the episodes of her story becoming more and more germane to contemporary readers’ concerns as they get closer and closer to them in time. No doubt many, if not most, histories of philosophy actually are stories of this kind. But is a philosopher’s position in a given history a good general clue to her perceived relevance? Is it such a reliable indicator of perceived importance that it should be built into a method intended for use “in a wide range of possible studies in intellectual history” (113)?

Philosophy as a Tradition

I linger over this matter because it raises an important issue in the history of philosophy: the issue of genre. Histories of philosophy, I take it, are narratives, and every narrative belongs to some genre or other.[2] Narratives in different genres may describe the same events in the same order, but assign them different meanings by shaping these events into different sorts of plots. The philosopher who has contributed most to our understanding of this process is Hayden White. In his seminal essay “The Historical Text as Literary Artifact,” White asks us to consider several different ways in which a single series of events might be emplotted. We can imagine a pure chronicle in which the series is “simply recorded in the order in which the events originally occurred” (93); it might be represented in the following way:

  (1) a, b, c, d, e, …, n[3]

But this series “can be emplotted in a number of different ways and thereby endowed with different meanings without violating the imperatives of the chronological arrangement at all” (92). The following series are all equally possible:

  (2) A, b, c, d, e, …, n
  (3) a, B, c, d, e, …, n
  (4) a, b, C, d, e, …, n
  (5) a, b, c, D, e, …, n[4]

In each of these series, one event is symbolized with a capital letter to indicate that it is being assigned “explanatory force,”[5] or some other special significance, with respect to the others. Privileging one event rather than another yields stories in different genres. Series (2) would be a “deterministic” history which endows a “putatively original event (a) with the status of a decisive factor (A) in the structuration of the whole series of events following after it.”[6] Were we to privilege the last event in the series, we would have a story in the genre of “eschatological or apocalyptical histories” such as “St. Augustine’s City of God” and “Hegel’s Philosophy of History.”[7]

Many other permutations, and thus many other genres, are possible. In some genres, it is plausible to suppose that the last figure discussed is seen by the author as most relevant to current concerns. But in other genres, this assumption cannot be made. In a history of decline or forgetting, the last figure discussed might well be seen by the author as the least relevant to these concerns. Consider a Heideggerian history of philosophy, in which the last figure discussed is Nietzsche, but the figure most relevant to the contemporary situation is one or another pre-Socratic thinker.

The point is that knowing that a philosopher appears last in a given history—even in a large number of histories—does not tell us much about how the author understood his significance for current concerns. To draw conclusions about significance, we must know the genre (or genres) of the history (or histories) in question. And that is something we can discover only through careful attention to a history’s “literary” features—precisely the features identified through traditional close readings. So while the data Demin and Kouprianov uncover, and the methods they use to do so, are indispensable, I suspect they do not give a full picture of Kanonbildung on their own. They will be most useful when pursued in tandem with certain types of close reading.

Merging Historical Paths

I have no reason to think that Demin and Kouprianov would deny any of this. But I would like to know more about whether, and how, they think it complicates their project. What is the relation between distant reading and close reading? Do these types of analysis simply complement each other, or are they also in tension? I’ve already speculated that the authors see distant reading as a way of correcting an imbalance—that “formal analytic methods” directed at the “objective characteristics… of digitised texts” (113) are called for today because a longstanding bias toward close reading has left historians oblivious to implicit rules and patterns.

If that is the case, is there a danger that performing close reading in conjunction with distant reading will overshadow the distinctive value of the latter? I don’t know the answers to these questions, but I suspect that it will be important to answer them if the methods of this study are to be extended to other areas.

I hasten to add that I am not “for” close reading or “against” distant reading. Distant reading, as the authors describe it, is clearly an important tool. But I would like to know more about how it relates to the other tools at the disposal of historians of philosophy. Whatever their view of this matter, I’d like to thank Demin and Kouprianov again for making a promising new contribution to our conceptual toolbox.

Contact details: robert.piercey@uregina.ca

References

Demin, Maxim, and Alexei Kouprianov. “Studying Kanonbildung: An Exercise in a Distant Reading of Contemporary Self-descriptions of the 19th Century German Philosophy.” Social Epistemology 32, no. 2 (2018): 112-127.

Kuukkanen, Jouni-Matti. Postnarrativist Philosophy of Historiography. Houndmills: Palgrave Macmillan, 2015.

Rorty, Richard. “The Historiography of Philosophy: Four Genres,” in Philosophy in History, ed. Richard Rorty, Jerome Schneewind, and Quentin Skinner. Cambridge: Cambridge University Press, 1984.

White, Hayden. “The Historical Text as Literary Artifact,” in Tropics of Discourse: Essays in Cultural Criticism. Baltimore: The Johns Hopkins University Press, 1978.

[1] Richard Rorty, “The Historiography of Philosophy: Four Genres,” in Philosophy in History, ed. Richard Rorty, Jerome Schneewind, and Quentin Skinner (Cambridge: Cambridge University Press, 1984), 23.

[2] Not everyone agrees that all histories are narratives, but space does not permit me to broach this issue here. For an important recent discussion of it, see Jouni-Matti Kuukkanen, Postnarrativist Philosophy of Historiography (Houndmills: Palgrave Macmillan, 2015), especially Chapter 5.

[3] Hayden White, “The Historical Text as Literary Artifact,” in Tropics of Discourse: Essays in Cultural Criticism (Baltimore: The Johns Hopkins University Press, 1978), 92.

[4] White, 92.

[5] White, 92.

[6] White, 93.

[7] White, 93.

Author Information: Manuel Padilla Cruz, University of Seville, mpadillacruz@us.es

Cruz, Manuel Padilla. “Conceptual Competence Injustice and Relevance Theory, A Reply to Derek Anderson.” Social Epistemology Review and Reply Collective 6, no. 12 (2017): 39-50.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3RS

Derek Anderson (2017a) has recently differentiated conceptual competence injustice and characterised it as the wrong done when, on the grounds of the vocabulary used in interaction, a person is believed not to have a sophisticated or rich conceptual repertoire. His most interesting, insightful and illuminating work induced me to propose incorporating this notion into the field of linguistic pragmatics as a way of conceptualising an undesired and unexpected perlocutionary effect: the attribution of a lower level of communicative or linguistic competence. Such attributions may be drawn from a perception of seemingly poor performance, stemming from a lack of the words necessary to refer to specific elements of reality or from misuse of the adequate ones (Padilla Cruz 2017a).

Relying on the cognitive pragmatic framework of relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004), I also argued that such a perlocutionary effect would be an unfortunate by-product of the constant tendency to search for the optimal relevance of intentional stimuli like single utterances or longer stretches of discourse. More specifically, while aiming for maximum cognitive gain in exchange for a reasonable amount of cognitive effort, the human mind may activate or access assumptions about a language user’s linguistic or communicative performance, and feed them as implicated premises into inferential computations.

Although those assumptions might not really have been intended by the language user, they are made manifest by her[1] behaviour and may be exploited in inference, even if at the hearer’s sole responsibility and risk. Those assumptions are weak implicated premises and their interaction with other mentally stored information yields weakly implicated conclusions (Sperber and Wilson 1986/1995; Wilson and Sperber 2004). Since their content pertains to the speaker’s behaviour, they are behavioural implicatures (Jary 2013); since they negatively impact on an individual’s reputation as a language user, they turn out to be detrimental implicatures (Jary 1998).

My proposal about the benefits of the notion of conceptual competence injustice to linguistic pragmatics drew an immediate reply from Anderson (2017b). He considers that the intention underlying my comment on his work was “[…] to model conceptual competence injustice within relevance theory” and points out that my proposal “[…] must be tempered with the proper understanding of that phenomenon as a structural injustice” (Anderson 2017b: 36; emphasis in the original). Furthermore, he also claims that relevance theory “[…] does not intrinsically have the resources to identify instances of conceptual competence injustice” (Anderson 2017b: 36).

In what follows, I intend to clarify two issues. Firstly, my suggestion to incorporate conceptual competence injustice into linguistic pragmatics necessarily relies on a much broader, more general and loosened understanding of this notion. Even if such an understanding deprives it of some of its essential, defining conditions –namely, the existence of different social identities and of matrices of domination– it may still capture the ontology of the unexpected effects in which communicative performance may result: an unfair appraisal of capacities.

Secondly, my intention when commenting on Anderson’s (2017a) work was not actually to model conceptual competence injustice within relevance theory, but to show that this pragmatic framework is well equipped to account for the cognitive processes and the reasons underlying the unfortunate negative effects that may be alluded to with the notion I am advocating for. Therefore, I will argue that relevance theory does in fact have the resources to explain why some injustices stemming from communicative performance may originate. To conclude, I will elaborate on the factors that may lead to wrong ascriptions of conceptual and lexical competence.

What Is Conceptual Competence Injustice?

As a sub-type of epistemic injustice (Fricker 2007), conceptual competence injustice arises in scenarios where there are privileged epistemic agents who (i) are prejudiced against members of specific social groups, identities or minorities, and (ii) exert power as a way of oppression. Such agents make “[…] false judgments of incompetence [which] function as part of a broader, reliable pattern of marginalization that systematically undermines the epistemic agency of members of an oppressed social identity” (Anderson 2017b: 36). Therefore, conceptual competence injustice is a way of denigrating individuals as knowers of specific domains of reality and ultimately disempowering, discriminating and excluding them, so it “[…] is a form of epistemic oppression […]” (Anderson 2017b: 36).

Lack or misuse of vocabulary may result in wronging if hearers conclude that certain concepts denoting specific elements of reality –objects, animals, actions, events, etc.– are not available to particular speakers, or that they have erroneously mapped those concepts onto lexical items. When this happens, speakers’ conceptualising and lexical capacities could be deemed to be below alleged or actual standards. Since lexical competence is one of the pillars of communicative competence (Hymes 1972; Canale 1983; Bachman 1990; Celce-Murcia et al. 1995), that judgement could contribute to downgrading speakers on an alleged scale of communicative competence and, consequently, to regarding them as partially or fully incompetent.

According to Medina (2011), competence is a comparative and contrastive property. On the one hand, skilfulness in some domain may be compared to that in (an)other domain(s), so a person may be very skilled in areas like languages, drawing or football, but not in others like mathematics, oil painting or basketball. On the other hand, knowledge of and abilities in some matters may be greater or lesser than those of other individuals. Competence, moreover, may be characterised as gradual and context-dependent. Degree of competence –i.e. its depth and width, so to say– normally increases with age, maturity, personal circumstances and experience, or with factors such as instruction and subsequent learning, needs, interests, motivation, etc. In turn, the way in which competence surfaces may be affected by a variety of intertwined factors (Mustajoki 2012; Padilla Cruz 2017b), which include the following:

Factors Affecting Competence in Communication

Internal –i.e. person-related– factors, among which feature:

Relatively stable factors, such as (i) other knowledge and abilities, regardless of their actual relatedness to a particular competence, and (ii) cognitive styles –i.e. patterns of accessing and using knowledge items, among which are concepts and words used to name them.

Relatively unstable factors, such as (i) psychological states like nervousness, concentration, absent-mindedness, emotional override, or simply experiencing feelings like happiness, sadness, depression, etc.; (ii) physiological conditions like tiredness, drowsiness, drunkenness, etc., or (iii) performance of actions necessary for physiological functions like swallowing, sipping, sneezing, etc. These may facilitate or hinder access to and usage of knowledge items including concepts and words.

External –i.e. situation-related– factors, which encompass (i) the spatio-temporal circumstances where encounters take place, and (ii) the social relations with other participants in an encounter. For instance, haste, urgency or (un)familiarity with a setting may ease or impede access to and usage of knowledge items, as may experiencing social distance and/or more or less power with respect to another individual (Brown and Levinson 1987).

While ‘social distance’ refers to (un)acquaintance with other people and (dis)similarity with them as a result of perceptions of membership of a social group, ‘power’ does not simply allude to the possibility of imposing upon others and conditioning their behaviour as a consequence of differing positions in a particular hierarchy within a specific social institution. ‘Power’ also refers to the possibility of imposing upon other people owing to perceived or supposed expertise in a field –i.e. expert power, like that exerted by, for instance, a professor over students– or to admiration of diverse personal attributes –i.e. referent power, like that exerted by, for example, a pop idol over fans (Spencer-Oatey 1996).

There Must Be Some Misunderstanding

Conceptualising capacities, conceptual inventories and lexical competence also partake of the four features listed above: gradualness, comparativeness, contrastiveness and context-dependence. All three of them increase as a consequence of growth and of exposure to or participation in a plethora of situations and events, among which education and training are fundamental. Conceptualising capacities and lexical competence may be more or less developed or accurate than other abilities, among which are the other sub-competences upon which communicative competence depends –i.e. phonetics, morphology, syntax and pragmatics (Hymes 1972; Canale 1983; Bachman 1990; Celce-Murcia et al. 1995).

Additionally, conceptual inventories enabling lexical performance may be rather complex in some domains but not in others –e.g. a person may store many concepts and possess a rich vocabulary pertaining to, for instance, linguistics, but lack or have rudimentary ones about sports. Finally, lexical competence may appear to be higher or lower than that of other individuals under specific spatio-temporal and social circumstances, or because of the influence of the aforesaid psychological and physiological factors, or actions performed while speaking.

Apparent knowledge and usage of general or domain-specific vocabulary may be assessed and compared to those of other people, but performance may be hindered, or fail to meet expectations, because of the aforementioned factors. If it were considered deficient, inferior or lower than that of other individuals, such a verdict should only concern knowledge and usage of vocabulary in a specific domain, and only be relative to a particular moment, perhaps under specific circumstances.

Unfortunately, people often extrapolate and (over)generalise, so they may take (seeming) lexical gaps at a particular time in a speaker’s life, or one-off, occasional or momentary lexical infelicities, to suggest or unveil more global and overarching conceptualising handicaps or lexical deficits. This leads people not only to doubt the richness and broadness of that speaker’s conceptual inventory and lexical repertoire, but also to question her conceptualising abilities and what may be labelled her conceptual accuracy –i.e. the capacity to create concepts that adequately capture nuances in elements of reality and facilitate correct reference to those elements– as well as her lexical efficiency or lexical reliability –i.e. the ability to use vocabulary appropriately.

Once doubts are cast on the number and accuracy of the concepts available to a speaker and on her ability to verbalise them, an unwarranted and unfair wronging arises which would count as an injustice concerning that speaker’s conceptualising skills, conceptual inventory and expressive abilities. The loosened notion of conceptual competence injustice whose incorporation into the field of linguistic pragmatics I advocated does not necessarily presuppose a previous discrimination or prejudice negatively biasing hegemonic, privileged or empowered individuals against minorities or identities.

Wrong is done, and an epistemic injustice is therefore inflicted, when another person’s conceptual inventory, lexical repertoire and expressive skills are underestimated or negatively evaluated because of (i) perception of a communicative behaviour that is felt not to meet expectations or to be below alleged standards, (ii) tenacious adherence to those expectations or standards, and (iii) unawareness of the likely influence of various factors on performance. This wronging may nonetheless lead to subsequently downgrading that person as regards her communicative competence, discrediting her conceptual accuracy and lexical efficiency/reliability, and denigrating her as a speaker of a language and, therefore, as an epistemic agent. On this basis, further discrimination on other grounds may ensue, or an already existing one may be strengthened and perpetuated.

Relevance Theory and Conceptual Competence Injustice

Initially put forth in 1986, and slightly refined almost ten years later, relevance theory is a pragmatic framework that aims to explain (i) why hearers select particular interpretations out of the various possible ones that utterances may have –all of which are compatible with the linguistically encoded and communicated information– (ii) how hearers process utterances, and (iii) how and why utterances and discourse give rise to a plethora of effects (Sperber and Wilson 1986/1995). Accordingly, it concentrates on the cognitive side of communication: comprehension and the mental processes intervening in it.

Relevance theory (Sperber and Wilson 1986/1995) reacted against the so-called code model of communication, which was deeply entrenched in western linguistics. According to this model, communication merely consists of encoding thoughts or messages into utterances, and decoding these in order to arrive at speaker meaning. Since speakers cannot encode everything they intend to communicate and absolute explicitness is practically unattainable, relevance theory portrays communication as an ostensive-inferential process where speakers draw the audience’s attention by means of intentional stimuli. On some occasions these amount to direct evidence –i.e. showing– of what speakers mean, so their processing requires inference; on other occasions, intentional stimuli amount to indirect –i.e. encoded– evidence of speaker meaning, so their processing relies on decoding.

However, in most cases the stimuli produced in communication combine direct with indirect evidence, so their processing depends on both inference and decoding (Sperber and Wilson 2015). Intentional stimuli make manifest speakers’ informative intention –i.e. the intention that the audience create a mental representation of the intended message, or, in other words, a plausible interpretative hypothesis– and their communicative intention –i.e. the intention that the audience recognise that speakers do have a particular informative intention. The role of hearers, then, is to arrive at speaker meaning by means of both decoding and inference (but see below).

Relevance theory also reacted against philosopher Herbert P. Grice’s (1975) view of communication as a joint endeavour where interlocutors identify a common purpose and may abide by, disobey or flout a series of maxims pertaining to communicative behaviour –those of quantity, quality, relation and manner– which articulate the so-called cooperative principle. Although Sperber and Wilson (1986/1995) seriously question the existence of such a principle, they nevertheless rest squarely on a notion already present in Grice’s work, but which he unfortunately left undefined: relevance. This becomes the cornerstone of their framework. Relevance is claimed to be a property of intentional stimuli and is characterised on the basis of two factors:

Cognitive effects, or the gains resulting from the processing of utterances: (i) strengthening of old information, (ii) contradiction and rejection of old information, and (iii) derivation of new information.

Cognitive or processing effort, which is the effort of memory to select or construct a suitable mental context for processing utterances and to carry out a series of simultaneous tasks that involve the operation of a number of mental mechanisms or modules: (i) the language module, which decodes and parses utterances; (ii) the inferential module, which relates information encoded and made manifest by utterances to already stored information; (iii) the emotion-reading module, which identifies emotional states; (iv) the mindreading module, which attributes mental states, and (v) vigilance mechanisms, which assess the reliability of informers and the believability of information (Sperber and Wilson 1986/1995; Wilson and Sperber 2004; Sperber et al. 2010).

Relevance is a scalar property that is directly proportional to the amount of cognitive effects that an interpretation gives rise to, but inversely proportional to the expenditure of cognitive effort required. Interpretations are relevant if they yield cognitive effects in return for the cognitive effort invested. Optimal relevance emerges when the effect-effort balance is satisfactory. If an interpretation is found to be optimally relevant, it is chosen by the hearer and taken to be the intended interpretation. Hence, optimal relevance is the property determining the selection of interpretations.
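Although relevance theory deliberately refrains from assigning numbers to effects and effort, the trade-off just described may be illustrated with a toy computational sketch. In the following Python snippet the scores, the scoring function and the candidate readings are all invented for the purposes of illustration; they form no part of Sperber and Wilson’s framework.

```python
# A toy, purely illustrative model of the relevance-theoretic effect/effort
# trade-off. The numbers below are invented assumptions, not theory.
from dataclasses import dataclass

@dataclass
class Interpretation:
    gloss: str              # a candidate interpretative hypothesis
    cognitive_effects: int  # e.g. assumptions strengthened, revised or added
    processing_effort: int  # cost of building a context and deriving effects

def relevance(i: Interpretation) -> float:
    # Relevance rises with cognitive effects and falls with processing effort.
    return i.cognitive_effects / i.processing_effort

def select_interpretation(candidates):
    # Stands in for the hearer's choice of the optimally relevant reading.
    return max(candidates, key=relevance)

candidates = [
    Interpretation("literal reading", cognitive_effects=2, processing_effort=1),
    Interpretation("ironic reading", cognitive_effects=6, processing_effort=2),
    Interpretation("far-fetched reading", cognitive_effects=7, processing_effort=9),
]
print(select_interpretation(candidates).gloss)  # -> ironic reading
```

On this toy picture, the far-fetched reading is rejected not because it yields fewer effects, but because those effects do not pay for the effort they demand, which is the shape of the comparison the theory itself describes.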

The Power of Relevance Theory

Sperber and Wilson’s (1986/1995) ideas and claims gave rise to a whole branch of cognitive pragmatics that is now known as relevance-theoretic pragmatics. After years of intense, illuminating and fruitful work, relevance theorists have offered a plausible model of comprehension. In it, interpretative hypotheses –i.e. likely interpretations– are said to be formulated during a process of mutual parallel adjustment of the explicit and implicit content of utterances, where the said modules and mechanisms perform a series of simultaneous, incredibly fast tasks at a subconscious level (Carston 2002; Wilson and Sperber 2004).

Decoding only yields a minimally parsed chunk of concepts that is not yet fully propositional, so it cannot be truth-evaluable: the logical form. This form needs pragmatic or contextual enrichment by means of additional tasks wherein the inferential module relies on contextual information and is sometimes constrained by the procedural meaning –i.e. processing instructions– encoded by some linguistic elements.

Those tasks include (i) disambiguation of syntactic constituents; (ii) assignment of reference to words like personal pronouns, proper names, deictics, etc.; (iii) adjustment of the conceptual content encoded by words like nouns, verbs, adjectives or adverbs, and (iv) recovery of unarticulated constituents. Completion of these tasks results in the lower-level explicature of an utterance, which is a truth-evaluable propositional form amounting to the explicit content of an utterance. Construction of lower-level explicatures depends on decoding and inference, so that the more decoding involved, the more explicit or strong these explicatures are and, conversely, the more inference needed, the less explicit and weaker these explicatures are (Wilson and Sperber 2004).

A lower-level explicature may further be embedded into a conceptual schema that captures the speaker’s attitude(s) towards the proposition expressed, her emotion(s) or feeling(s) when saying what she says, or the action that she intends or expects the hearer to perform by saying what she says. This schema is the higher-level explicature and is also part of the explicit content of an utterance.

It is sometimes built through decoding some of the elements in an utterance –e.g. attitudinal adverbs like ‘happily’ or ‘unfortunately’ (Ifantidou 1992) or performative verbs like ‘order’, ‘apologise’ or ‘thank’ (Austin 1962)– and other times through inference, emotion-reading and mindreading –as in the case of, for instance, interjections, intonation or paralanguage (Wilson and Wharton 2006; Wharton 2009, 2016) or indirect speech acts (Searle 1969; Grice 1975). As in the case of lower-level explicatures, higher-level ones may also be strong or weak depending on the amount of decoding, emotion-reading and mindreading involved in their construction.

The explicit content of utterances may additionally be related to information stored in the mind or perceptible from the environment. Those information items act as implicated premises in inferential processes. If the hearer has enough evidence that the speaker intended or expected him to resort to and use those premises in inference, they are strong, but, if he does so at his own risk and responsibility, they are weak. Interaction of the explicit content with implicated premises yields implicated conclusions. Altogether, implicated premises and implicated conclusions make up the implicit content of an utterance. Arriving at the implicit content completes mutual parallel adjustment, which is a process constantly driven by expectations of relevance, in which the more plausible, less effort-demanding and more effect-yielding possibilities are normally chosen.
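To make the division of labour among these tasks easier to follow, here is a deliberately crude Python sketch of the route from logical form to implicated conclusions. The hard-wired example, the premise format and the strictly sequential functions are my own simplifications: in the theory itself these tasks proceed simultaneously under mutual parallel adjustment.

```python
# A schematic illustration of the comprehension route described above. All
# representations and function names are invented; relevance theory treats
# these tasks as simultaneous, not sequential.

def decode(utterance):
    # Language module: a real parser would analyse the input; here the
    # schematic, not-yet-propositional logical form is hard-wired.
    return ["SHE", "BE", "READY"]

def enrich(logical_form, context):
    # Inferential module: reference assignment plus recovery of an
    # unarticulated constituent yield a lower-level explicature.
    filled = [context.get(item, item) for item in logical_form]
    filled.append(context["unarticulated"])
    return " ".join(filled)

def embed(explicature, attitude):
    # Higher-level explicature: the proposition under an attitude description.
    return f"The speaker {attitude} that {explicature}"

def implicate(explicature, premises):
    # Implicated premises interact with the explicit content; here a premise
    # is a (condition, conclusion) pair exploited by a crude modus ponens.
    return [conclusion for condition, conclusion in premises
            if condition == explicature]

context = {"SHE": "Anna", "unarticulated": "to leave for the airport"}
lower = enrich(decode("She is ready"), context)
print(lower)                     # lower-level explicature
print(embed(lower, "believes"))  # higher-level explicature
print(implicate(lower, [(lower, "We should call a taxi")]))
```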

The Limits of Relevance Theory

As a model centred on the comprehension and interpretation of ostensive stimuli, relevance theory (Sperber and Wilson 1986/1995) does not need to be able to identify instances of conceptual competence injustice –which, as Anderson (2017b) remarks, it cannot do– nor even instances of the negative consequences of communicative behaviour that may be alluded to by means of the broader, loosened notion of conceptual competence injustice I argued for. Rather, as a cognitive framework, its role is to explain why and how these originate. And, certainly, its notional apparatus and the cognitive machinery intervening in comprehension that it describes can satisfactorily account for (i) the ontology of unwarranted judgements of lexical and conceptual (in)competence, (ii) their origin, and (iii) some of the reasons why they are made.

Accordingly, those judgements (i) are implicated conclusions which (ii) are derived during mutual parallel adjustment as a result of (iii) accessing certain manifest assumptions and using them as implicated premises in inference. Obviously, the implicated premises that yield the negative conclusions about (in)competence might not have been intended by the speaker, who would have no interest in the hearer accessing and using them. However, her communicative performance makes manifest assumptions alluding to her lexical lacunae and mistakes, and these lead the hearer to draw undesired conclusions.

Relevance theory (Sperber and Wilson 1986/1995) is powerful enough to offer a cognitive explanation of the said three issues. This alone was what I aimed to show in my comment on Anderson’s (2017a) work. Two different issues, nevertheless, are (i) the reasons why certain prejudicial assumptions become manifest to an audience and (ii) why those assumptions end up being distributed across the members of certain wide social groups.

As Anderson (2017b) underlines, conceptual competence injustices must necessarily be contextualised in situations where privileged and empowered social groups are negatively biased or prejudiced against other identities and create patterns of marginalisation. Prejudice may be argued to bring to the fore a variety of negative assumptions about the members of the identities against whom it is held. Using Giora’s (1997) terminology, prejudice makes certain detrimental assumptions very salient, or increases the salience of those assumptions.

Consequently, they are amenable to being promptly accessed and effortlessly used as implicated premises in deductions, from which negative conclusions are straightforwardly derived. Those premises and conclusions spread among the members of the prejudiced and hegemonic group because, according to Sperber’s (1996) epidemiological model of culture, they are repeatedly transmitted or made public. This is possible thanks to two types of factors (Sperber 1996: 84):

Psychological factors, such as the relative ease with which they are stored; the existence of other knowledge with which they can interact in order to generate cognitive effects –e.g. additional negative conclusions pertaining to the members of the marginalised identity– or the existence of compelling reasons that make the individuals in the group willing to transmit them –e.g. the desire to disempower and/or marginalise the members of an unprivileged group, to exclude them from certain domains of human activity, to secure a privileged position, etc.

Ecological factors, such as the repetition of the circumstances under which those premises and conclusions result in certain actions –e.g. denigration, disempowerment, marginalisation, exclusion, etc.– the availability of storage mechanisms other than the mind –e.g. written documents– or the existence of institutions that transmit and perpetuate those premises and conclusions, thus ensuring their continuity and availability.

Since the members of the dominating biased group find those premises and conclusions useful to their purposes and interests, they constantly reproduce them and, so to say, pass them on to the other members of the group or even on to individuals who do not belong to it. Using Sperber’s (1996) metaphor, repeated production and internalisation of those representations resembles the contagion of illnesses. As a result, those representations end up being part of the pool of cultural representations shared by the members of the group in question or other individuals.
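Sperber’s epidemiological metaphor lends itself to a simple simulation. The toy Python model below, whose parameters are entirely invented, merely illustrates how repeated public production of a representation can spread it through a group; it makes no empirical claim and is not drawn from Sperber’s (1996) own text.

```python
# A toy simulation in the spirit of Sperber's (1996) epidemiological metaphor:
# representations that are easy to store and useful to their hosts get
# re-transmitted and spread. All parameters here are illustrative assumptions.
import random

random.seed(0)

POPULATION = 100
ROUNDS = 20
TRANSMISSION_PROB = 0.15   # psychological + ecological factors favouring spread

carriers = {0}  # one individual initially holds the prejudicial representation

for _ in range(ROUNDS):
    newly_infected = set()
    for carrier in carriers:
        contact = random.randrange(POPULATION)
        if contact not in carriers and random.random() < TRANSMISSION_PROB:
            newly_infected.add(contact)
    carriers |= newly_infected

print(f"{len(carriers)} of {POPULATION} individuals now share the representation")
```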

The Imperative to Get Competence Correct

In social groups with an interest in denigrating and marginalising an identity, certain assumptions regarding the lexical inventories and conceptualising abilities of the epistemic agents with that identity may be very salient, or purposefully made very salient, with a view to ensuring that they are inferentially exploited as implicated premises that easily yield negative conclusions. In the case of average speakers’ lexical gaps and mistakes, assumptions concerning their performance and infelicities may also become very salient, be fed into inferential processes and result in prejudicial conclusions about their lexical and conceptual (in)competence.

Although utterance comprehension and information processing end upon completion of mutual parallel adjustment, for the informational load of utterances and the conclusions derivable from them to be added to an individual’s universe of beliefs, information must pass the filters of a series of mental mechanisms that target both informers and information itself, and check their believability and reliability. These mechanisms scrutinise various sources determining trust allocation: signs indicating certainty and trustworthiness –e.g. gestures, hesitation, nervousness, rephrasing, stuttering, eye contact, gaze direction, etc.; the appropriateness, coherence and relevance of the dispensed information; (previous) assumptions about speakers’ expertise or authoritativeness in some domain; the socially distributed reputation of informers; and emotions, prejudices and biases (Origgi 2012: 227-233).

As a result, these mechanisms trigger a cautious and sceptical attitude known as epistemic vigilance, which in some cases enables individuals to avoid blind gullibility and deception (Sperber et al. 2010). In addition, these mechanisms monitor the correctness and adequateness of the interpretative steps taken and the inferential routes followed while processing utterances and information, and check for possible flaws at any of the tasks in mutual parallel adjustment –e.g. wrong assignment of reference, supply of erroneous implicated premises, etc.– which would prevent individuals from arriving at the actually intended interpretations. Consequently, another cautious and sceptical attitude is triggered towards interpretations, which may be labelled hermeneutical vigilance (Padilla Cruz 2016).

If individuals do not perceive risks of malevolence or deception, or do not sense that they might have made interpretative mistakes, vigilance mechanisms are only weakly or moderately activated (Michaelian 2013: 46; Sperber 2013: 64). However, their level of activation may be raised so that individuals exercise external and/or internal vigilance. While the former facilitates higher awareness of external factors determining trust allocation –e.g. cultural norms, contextual information, biases, prejudices, etc.– the latter facilitates distancing oneself from conclusions drawn at a particular moment and backtracking with a view to tracing their origin –i.e. the interpretative steps taken and the assumptions fed into inference– and assessing their potential consequences (Origgi 2012: 224-227).

Exercising only weak or moderate vigilance over the conclusions drawn upon perceiving lexical lacunae or mistakes may account for their unfairness and for the subsequent wronging of individuals as regards their actual conceptual and lexical competence. Unawareness of the internal and external factors that may momentarily have hindered competence and the ensuing performance may cause perceivers of lexical gaps and errors to unquestioningly trust the assumptions that their interlocutors’ allegedly poor performance makes manifest, rely on them, supply them as implicated premises, derive conclusions that do not do any justice to their interlocutors’ actual level of conceptual and lexical competence, and eventually trust the appropriateness, adequacy or accuracy of those conclusions.

A higher alertness to the potential influence of those factors on performance would block access to the detrimental assumptions made manifest by their interlocutors’ performance, or would make perceivers of lexical infelicities reconsider the appropriateness of using those assumptions in deductions. If this were the case, perceivers would be deploying the processing strategy labelled cautious optimism, which enables them to question the suitability of certain deductions and to make alternative ones (Sperber 1994).
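The contrast between weak or moderate vigilance and the heightened alertness characteristic of cautious optimism can likewise be given a toy computational rendering. In the sketch below, the salience scores, the performance-dependence flag and the thresholds are all my own illustrative assumptions; none of the authors cited proposes a numerical model of vigilance.

```python
# A toy model of vigilance as a filter on manifest assumptions. Scores,
# flags and thresholds are illustrative assumptions only.

def admitted_premises(manifest, vigilance):
    """Return the manifest assumptions a hearer exploits as implicated
    premises. `manifest` maps an assumption to (salience,
    performance_dependent), where the flag marks assumptions whose warrant
    rests on ignoring situational factors (tiredness, haste, nervousness)."""
    admitted = []
    for assumption, (salience, performance_dependent) in manifest.items():
        if performance_dependent and vigilance >= 0.5:
            continue  # cautious optimism: reconsider rather than exploit
        if salience >= 0.3:
            admitted.append(assumption)
    return admitted

manifest = {
    "the speaker lacks the concept behind the missing word": (0.8, True),
    "the speaker is addressing an unfamiliar audience": (0.4, False),
}

print(admitted_premises(manifest, vigilance=0.2))  # weak vigilance: both pass
print(admitted_premises(manifest, vigilance=0.7))  # the unfair premise is blocked
```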

Conclusion

Relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004) does not need to be able to identify cases of conceptual competence injustice, but its notional apparatus and the machinery that it describes can satisfactorily account for the cognitive processes whereby conceptual competence injustices originate. In essence, prejudice and interests in denigrating members of specific identities or minorities favour the salience of certain assumptions about their incompetence, which, for a variety of psychological and ecological reasons, may already be part of the cultural knowledge of the members of prejudiced, empowered groups. Those assumptions are subsequently supplied as implicated premises to deductions, which yield conclusions that undermine the reputation of the members of the identities or minorities in question. Ultimately, such conclusions may in turn be added to the cultural knowledge of the members of the biased hegemonic group.

The same process would apply to those cases wherein hearers unfairly wrong their interlocutors on the grounds of performance below alleged or expected standards, without being vigilant enough of the factors that could have hindered it. That wronging may be alluded to by means of a somewhat loosened, broadened notion of ‘conceptual competence injustice’ which deprives it of one of its quintessential conditions: the existence of prejudice and of interests in marginalising other individuals. Inasmuch as apparently poor performance may give rise to unfortunate, unfair judgements of speakers’ overall level of competence, those judgements could count as injustices. In a nutshell, this is why I advocated for the incorporation of a ‘decaffeinated’ version of Anderson’s (2017a) notion into the field of linguistic pragmatics.

Contact details: mpadillacruz@us.es

References

Anderson, Derek. “Conceptual Competence Injustice.” Social Epistemology: A Journal of Knowledge, Culture and Policy 31, no. 2 (2017a): 210-223.

Anderson, Derek. “Relevance Theory and Conceptual Competence Injustice.” Social Epistemology Review and Reply Collective 6, no. 7 (2017b): 34-39.

Austin, John L. How to Do Things with Words. Oxford: Clarendon Press, 1962.

Bachman, Lyle F. Fundamental Considerations in Language Testing. Oxford: Oxford University Press, 1990.

Brown, Penelope, and Stephen C. Levinson. Politeness: Some Universals in Language Usage. Cambridge: Cambridge University Press, 1987.

Canale, Michael. “From Communicative Competence to Communicative Language Pedagogy.” In Language and Communication, edited by Jack C. Richards and Richard W. Schmidt, 2-28. London: Longman, 1983.

Carston, Robyn. Thoughts and Utterances. The Pragmatics of Explicit Communication. Oxford: Blackwell, 2002.

Celce-Murcia, Marianne, Zoltán Dörnyei, and Sarah Thurrell. “Communicative Competence: A Pedagogically Motivated Model with Content Modifications.” Issues in Applied Linguistics 5 (1995): 5-35.

Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press, 2007.

Giora, Rachel. “Understanding Figurative and Literal Language: The Graded Salience Hypothesis.” Cognitive Linguistics 8 (1997): 183-206.

Grice, Herbert P. “Logic and Conversation.” In Syntax and Semantics. Vol. 3: Speech Acts, edited by Peter Cole and Jerry L. Morgan, 41-58. New York: Academic Press, 1975.

Hymes, Dell H. “On Communicative Competence.” In Sociolinguistics. Selected Readings, edited by John B. Pride and Janet Holmes, 269-293. Baltimore: Penguin Books, 1972.

Ifantidou, Elly. “Sentential Adverbs and Relevance.” UCL Working Papers in Linguistics 4 (1992): 193-214.

Jary, Mark. “Relevance Theory and the Communication of Politeness.” Journal of Pragmatics 30 (1998): 1-19.

Jary, Mark. “Two Types of Implicature: Material and Behavioural.” Mind & Language 28, no. 5 (2013): 638-660.

Medina, José. “The Relevance of Credibility Excess in a Proportional View of Epistemic Injustice: Differential Epistemic Authority and the Social Imaginary.” Social Epistemology: A Journal of Knowledge, Culture and Policy 25, no. 1 (2011): 15-35.

Michaelian, Kourken. “The Evolution of Testimony: Receiver Vigilance, Speaker Honesty and the Reliability of Communication.” Episteme 10, no. 1 (2013): 37-59.

Mustajoki, Arto. “A Speaker-oriented Multidimensional Approach to Risks and Causes of Miscommunication.” Language and Dialogue 2, no. 2 (2012): 216-243.

Origgi, Gloria. “Epistemic Injustice and Epistemic Trust.” Social Epistemology: A Journal of Knowledge, Culture and Policy 26, no. 2 (2012): 221-235.

Padilla Cruz, Manuel. “Vigilance Mechanisms in Interpretation: Hermeneutical Vigilance.” Studia Linguistica Universitatis Iagellonicae Cracoviensis 133, no. 1 (2016): 21-29.

Padilla Cruz, Manuel. “On the Usefulness of the Notion of ‘Conceptual Competence Injustice’ to Linguistic Pragmatics.” Social Epistemology Review and Reply Collective 6, no. 4 (2017a): 12-19.

Padilla Cruz, Manuel. “Interlocutors-related and Hearer-specific Causes of Misunderstanding: Processing Strategy, Confirmation Bias and Weak Vigilance.” Research in Language 15, no. 1 (2017b): 11-36.

Searle, John. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press, 1969.

Spencer-Oatey, Helen D. “Reconsidering Power and Distance.” Journal of Pragmatics 26 (1996): 1-24.

Sperber, Dan. “Understanding Verbal Understanding.” In What Is Intelligence? edited by Jean Khalfa, 179-198. Cambridge: Cambridge University Press, 1994.

Sperber, Dan. Explaining Culture. A Naturalistic Approach. Oxford: Blackwell, 1996.

Sperber, Dan. “Speakers Are Honest because Hearers Are Vigilant. Reply to Kourken Michaelian.” Episteme 10, no. 1 (2013): 61-71.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. Oxford: Blackwell, 1986.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. 2nd edition. Oxford: Blackwell, 1995.

Sperber, Dan, and Deirdre Wilson. “Beyond Speaker’s Meaning.” Croatian Journal of Philosophy 15, no. 44 (2015): 117-149.

Sperber, Dan, Fabrice Clément, Christophe Heintz, Olivier Mascaro, Hugo Mercier, Gloria Origgi, and Deirdre Wilson. “Epistemic Vigilance.” Mind & Language 25, no. 4 (2010): 359-393.

Wharton, Tim. Pragmatics and Non-verbal Communication. Cambridge: Cambridge University Press, 2009.

Wharton, Tim. “That Bloody so-and-so Has Retired: Expressives Revisited.” Lingua 175-176 (2016): 20-35.

Wilson, Deirdre, and Dan Sperber. “Relevance Theory.” In The Handbook of Pragmatics, edited by Laurence R. Horn and Gregory Ward, 607-632. Oxford: Blackwell, 2004.

Wilson, Deirdre, and Tim Wharton. “Relevance and Prosody.” Journal of Pragmatics 38 (2006): 1559-1579.

[1] Following a relevance-theoretic convention, reference to the speaker will be made through the feminine third person singular personal pronoun, while reference to the hearer will be made through its masculine counterpart.

Author Information: Samuel Rickless, University of California, San Diego, srickless@ucsd.edu

Rickless, Samuel. “Critical Appreciation of Jonathan Schaffer’s ‘The Contrast-Sensitivity of Knowledge Ascriptions’.” Social Epistemology Review and Reply Collective 4, no. 4 (2015): 1-6.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1Xu

Editor’s Note: With Samuel Rickless’ post, we initiate our “critical appreciation” series. In this series, we ask scholars to examine Social Epistemology’s most cited articles over the last 3 years (according to statistics sourced from CrossRef). We seek both a re-appraisal and re-imagining of the articles since their publication, and a sense of where the arguments and ideas might go in the future.

Image credit: Michael J. Moeller, via flickr

Jonathan Schaffer’s 2008 article is part of a burgeoning trend, one that attempts to uncover previously unrecognized contrastive elements in a wide variety of different relations and properties (including knowledge, causation, freedom, belief, and confirmation of theory by evidence). My aim here is to provide a critical appraisal of the article, with a view to determining what it can teach us about how best to understand knowledge ascriptions, and how best to conduct research in epistemology and the philosophy of language more generally.