
Author Information: Rik Peels, Vrije Universiteit Amsterdam, mail@rikpeels.nl.

Peels, Rik. “Exploring the Boundaries of Ignorance: Its Nature and Accidental Features.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 10-18.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-456

From the Metropolitan United Church in downtown Toronto.
Image by Loozrboy via Flickr / Creative Commons

 

This article responds to El Kassar, Nadja (2018). “What Ignorance Really Is: Examining the Foundations of Epistemology of Ignorance.” Social Epistemology. DOI: 10.1080/02691728.2018.1518498.

As does Bondy, Patrick. “Knowledge and Ignorance, Theoretical and Practical.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 9-14.

Nadja El Kassar is right that different fields in philosophy use rather different conceptions of ignorance. I also agree with her that there seem to be three major conceptions of ignorance: (i) ignorance as propositional ignorance, which she calls the ‘propositional conception of ignorance’, (ii) ignorance as actively upheld false outlooks, which she names the ‘agential conception of ignorance’, and (iii) ignorance as an epistemic practice, which she dubs the ‘structural conception of ignorance’.

It is remarkable that nobody else has previously addressed the question of how these three conceptions relate to each other. I consider it a great virtue of her lucid essay that she not only considers this question in detail, but also provides an account that is meant to do justice to all these different conceptions of ignorance. Let us call her account the El Kassar Synthesis. It reads as follows:

Ignorance is a disposition of an epistemic agent that manifests itself in her beliefs – either she has no belief about p or a false belief – and her epistemic attitudes (doxastic attitudes, epistemic virtues, epistemic vices).[1]

My reply to her insightful paper is structured as follows. First, I argue that her synthesis needs revision on various important points (§2). After that, I show that, despite her ambition to capture the main varieties of ignorance in her account, there are important kinds of ignorance that the El Kassar Synthesis leaves out (§4).

I then consider the agential and structural conceptions of ignorance and suggest that we should distinguish between the nature of ignorance and its accidental features. I also argue that these two other conceptions of ignorance are best understood as accounts of important accidental features of ignorance (§5). I sketch and reply to four objections that one might level against my account of the nature and accidental features of ignorance (§6).

I conclude that ignorance should be understood as the absence of propositional knowledge or the absence of true belief, the absence of objectual knowledge, or the absence of procedural knowledge. I also conclude that epistemic vices, hermeneutical frameworks, intentional avoidance of evidence, and other important phenomena that the agential and structural conceptions of ignorance draw our attention to, are best understood as important accidental features of ignorance, not as properties that are essential to ignorance.

Preliminaries

Before I explore the tenability of the El Kassar Synthesis in more detail, I would like to make a few preliminary points about it that call for some fine-tuning on her part. Remember that on the El Kassar Synthesis, ignorance should be understood as follows:

El Kassar Synthesis version 1: Ignorance is a disposition of an epistemic agent that manifests itself in her beliefs – either she has no belief about p or a false belief – and her epistemic attitudes (doxastic attitudes, epistemic virtues, epistemic vices).[2]

It seems to me that this synthesis needs revision on at least three points.

First, a false belief is an epistemic attitude and even a doxastic attitude. Moreover, if – as is widely thought among philosophers – there are exactly three doxastic attitudes, namely belief, disbelief, and suspension of judgment, then any case of ignorance that manifests itself in a doxastic attitude is one in which one lacks a belief about p or one has a false belief about p.

After all, if one holds a false belief and that is manifest in one’s doxastic attitude, it is because one holds a false belief (that is the manifestation). If one holds no belief and that is manifest in one’s doxastic attitudes, it is because one suspends judgment (that is the manifestation). Of course, it is also possible that one is deeply ignorant (e.g., one cannot even consider the proposition), but then it is simply not even manifest in one’s doxastic attitudes.

The reference to doxastic attitudes in the second conjunct is, therefore, redundant. The revised El Kassar Synthesis reads as follows:

El Kassar Synthesis version 2: Ignorance is a disposition of an epistemic agent that manifests itself in her beliefs – either she has no belief about p or a false belief – and her epistemic attitudes (epistemic virtues, epistemic vices).

What is left in the second conjunct after the first revision is epistemic virtues and vices. There is a problem with this, though. Ignorance need not be manifested in any epistemic virtues or vices. True, it happens often enough. But it is not necessary; it does not belong to the essence of being ignorant.

If one is ignorant of the fact that Antarctica is the largest desert on earth (which is actually a fact), then that may simply be a fairly cognitively isolated, single fact of which one is ignorant. Nothing follows about such substantial cognitive phenomena as intellectual virtues and vices (which are, after all, dispositions) like open-mindedness or dogmatism. A version that takes this point into account reads as follows:

El Kassar Synthesis version 3: Ignorance is a disposition of an epistemic agent that manifests itself in her beliefs: either she has no belief about p or a false belief.

A third and final worry I would like to raise here is that on the El Kassar Synthesis, ignorance is a disposition of an epistemic agent that manifests itself in her beliefs—and, as we saw, on versions 1 and 2, in her intellectual character traits (epistemic virtues, epistemic vices). I find this worrisome, because it is widely accepted that virtues and vices are dispositions themselves, and many philosophers have argued this also holds for beliefs.[3]

If so, on the El Kassar Synthesis, ignorance is a disposition that manifests itself in a number of dispositions (beliefs, lack of beliefs, virtues, vices). What sort of thing is ignorance if it is a disposition to manifest certain dispositions? It seems that if one is disposed to manifest certain dispositions, one simply has those dispositions and will, therefore, manifest them in the relevant circumstances.

Moreover, virtue or the manifestation of virtue does not seem to be an instance or exemplification of ignorance; at most, this seems to be the case for vices. Open-mindedness, thoroughness, and intellectual perseverance are clearly not manifestations of ignorance.[4] If anything, they are the opposite: manifestations of knowledge, insight, and understanding. An account that takes these points also into account would therefore look as follows:

El Kassar Synthesis version 4: Ignorance is an epistemic agent’s having no belief or a false belief about p.

It seems to me that version 4 is significantly more plausible than version 1. I realize, though, that it is also a significant revision of the original El Kassar Synthesis. My criticisms in what follows will, therefore, also be directed against version 1 of El Kassar’s synthesis.

Propositional, Objectual, and Procedural Ignorance

On the first conception of ignorance that El Kassar explores, the propositional one, ignorance is ignorance of the truth of a proposition. On the Standard View of ignorance, defended by Pierre Le Morvan and others,[5] ignorance is lack of propositional knowledge, whereas on the New View, championed by me and others,[6] ignorance is lack of true belief.

I would like to add that it may be more suitable to call these ‘conceptions of propositional ignorance’ rather than ‘propositional conceptions of ignorance’. After all, they are explicitly concerned with and limit themselves to situations in which one is ignorant of the truth of one or more propositions; they do not say that all ignorance is ignorance of a proposition.

More importantly, though, we should note that ever since Bertrand Russell, it has been quite common in epistemology to recognize not only propositional knowledge (or knowledge-that), but also knowledge by acquaintance or objectual knowledge (knowledge-of) and procedural or technical knowledge (knowledge-how).[7]

Examples of knowledge by acquaintance are my knowledge of my fiancée’s lovely personality, my knowledge of the taste of the Scotch whisky Talisker Storm, my knowledge of Southern France, and my knowledge of the smell of fresh raspberries. Examples of technical or procedural knowledge are my knowledge of how to navigate through Amsterdam by bike, my knowledge of how to catch a North Sea cod, and my knowledge of how to get the attention of a group of 150 students (the latter, incidentally, suggests that know-how comes in degrees…).

Since ignorance is often taken to be lack of knowledge, it is only natural to consider whether there can also be objectual and technical ignorance. Nikolaj Nottelmann, in a recent piece, has convincingly argued that there are such varieties of ignorance.[8]

The rub is that the El Kassar Synthesis, on all of its four versions, does not capture these two other varieties of ignorance. If one is ignorant of how to ride a bike, it is not so much that one lacks beliefs about p or that one has false beliefs about p (even if it is clear exactly which proposition p is). Also, not knowing how to ride a bike does not seem to come with certain intellectual virtues or vices.

The same is true for objectual ignorance: if I am not familiar with the smell of fresh raspberries, that does not imply any false beliefs or absence of beliefs, nor does it come with intellectual virtues or vices. Objectual and procedural ignorance seem to be sui generis kinds of ignorance.

The following definition does capture these three varieties of ignorance—one that, for obvious reasons, I will call the ‘threefold synthesis’:

Threefold Synthesis: Ignorance is an epistemic agent’s lack of propositional knowledge or lack of true belief, lack of objectual knowledge, or lack of procedural knowledge.[9]

Of course, each of the four versions of the El Kassar Synthesis could be revised so as to accommodate this. As we shall see below, though, we have good reason to formulate the Threefold Synthesis independently from the El Kassar Synthesis.

The Agential and Structural Conceptions of Ignorance

According to El Kassar, there is a second conception of ignorance, not captured in the conception of propositional ignorance but captured in the conception of agential ignorance, namely ignorance as an actively upheld false outlook. This conception has, understandably, been particularly influential in the epistemology of race. Charles Mills, whose contributions to this field have been seminal, defines such ignorance as the absence of beliefs, false belief, or a set of false beliefs, brought about by various factors, such as people’s whiteness in the case of white people, that leads to a variety of behaviors, such as avoiding evidence.[10] El Kassar suggests that José Medina, who has also contributed much to this field, defends a conception along these lines as well.[11]

The way Charles Mills phrases things suggests a natural interpretation of such ignorance, though. It is this: ignorance is the lack of belief, false beliefs, or various false beliefs (all captured by the conception of propositional ignorance), brought about or caused by a variety of factors. What these factors are will differ from case to case: people’s whiteness, people’s social power and status, people’s being Western, people’s being male, and people’s being heterosexual.

But this means that the agential conception is not a conception of the nature of ignorance. It grants the nature of ignorance as conceived of by the conception of propositional ignorance spelled out above and then, for obvious reasons, goes on to focus on those cases in which such ignorance has particular causes, namely the kinds of factors I just mentioned.[12]

Remarkably, much of what El Kassar herself says supports this interpretation. For example, she says: “Medina picks out a kind of ignorance, active ignorance, that is fed by epistemic vices – in particular, arrogance, laziness and closed-mindedness.” (p. 3; italics are mine) This seems entirely right to me: the epistemology of race focuses on ignorance with specific, contingent features that are crucially relevant for the debate in that field: (i) it is actively upheld, (ii) it is often, but not always, disbelieving ignorance, (iii) it is fed by epistemic vices, etc.

This is of course all perfectly compatible with the Standard or New Views on Ignorance. Most people’s ignorance of the fact that Antarctica is the largest desert on earth is a clear case of ignorance, but one that is not at all relevant to the epistemology of race.

Unsurprisingly then, even though it clearly is a case of ignorance, it does not meet any of the other, contingent criteria that are so pivotal in critical race theory: (i) it is not actively upheld, (ii) it is deep ignorance rather than disbelieving ignorance (most people have never considered this statement about Antarctica), (iii) it is normally not in any way fed by epistemic vices, such as closed-mindedness, laziness, intellectual arrogance, or dogmatism.

That this is a more plausible way of understanding the nature of ignorance and its accidental features can be seen by considering what is widely regarded as the opposite of ignorance: knowledge. According to most philosophers, to know a particular proposition p is to believe a true proposition p on the basis of some kind of justification in a non-lucky (in some sense of the word) way. That is what it is to know something, that is the nature of knowledge.

But in various cases, knowledge can have all sorts of accidental properties: it can be sought and found or one can stumble upon it, it may be the result of the exercise of intellectual virtue or it may be pretty much automatic (such as in the case of my knowledge that I exist), it may be morally good to know that thing or it may be morally bad (as in the case of a privacy violation), it may be based primarily on the exercise of one’s own cognitive capacities or primarily on those of other people (in some cases of testimony), and so on. If this is the case, then it is only natural to think that the same applies to the opposite of knowledge, namely ignorance, and that we should, therefore, clearly distinguish between its nature and its accidental (sometimes crucially important) features:

The nature of ignorance

Ignorance is the lack of propositional knowledge / the lack of true belief, or the lack of objectual knowledge, or the lack of procedural knowledge.[13]

Accidental, context-dependent features of ignorance

Willful or unintentional;

Individual or collective;

Small-scale (individual propositions) or large-scale (whole themes, topics, areas of life);

Brought about by external factors, such as the government, institutions, or socially accepted frameworks, or internal factors, such as one’s own intellectual vices, background assumptions, or hermeneutic paradigms;

And so on.

According to El Kassar, an advantage of her position is that it tells us how one is ignorant (p. 7). However, an account of, say, knowledge, also need not tell us how a particular person in specific circumstances knows something.[14] Perceptual knowledge is crucially important in our lives, and so is knowledge based on memory, moral knowledge (if there is such a thing), and so on.

It is surely no defect in all the many accounts of knowledge, such as externalism, internalism, reliabilism, internalist externalism, proper functionalism, deontologism, or even knowledge-first epistemology, that they do not tell us how a particular person in specific circumstances knows something. They were never meant to do that.

Clearly, mutatis mutandis, the same point applies to the structural conception of ignorance that plays an important role in agnotology. Agnotology is the field that studies how various institutional structures and mechanisms can intentionally keep people ignorant, make them ignorant, or create different kinds of doubt. The ignorance about the effects of smoking brought about and intentionally maintained by the tobacco industry is a well-known example.

Again, the natural interpretation is to say that people are ignorant because they lack propositional knowledge or true belief, they lack objectual knowledge, or they lack procedural knowledge. And they do so because – and this is what agnotology focuses on – it is intentionally brought about or maintained by various institutions, agencies, governments, mechanisms, and so on. Understandably, the field is more interested in studying those accidental features of ignorance than in studying its nature.

Objections and Replies

Before we draw a conclusion, let us consider El Kassar’s objections to a position along the lines I have suggested.[15] First, she suggests that we lose a lot if we reject the agential and structural conceptions of ignorance. We lose such things as: ignorance as a bad practice, the role of epistemic agency, the fact that much ignorance is strategic, and so on. I reply that, fortunately, we do not: those are highly important, but contingent features of ignorance: some cases of ignorance have them, others do not. This leaves plenty of room to study such contingent features of ignorance in critical race theory and agnotology.[16]

Second, she suggests that this account would exclude highly important kinds of ignorance, such as ignorance deliberately constructed by companies. I reply that it does not: it just says that its being deliberately constructed by, say, pharmaceutical companies, is an accidental or contingent feature and that it is not part of the nature of ignorance.

Third, Roget’s Thesaurus, for example, lists knowledge as only one of the antonyms of ignorance. Other options are cognizance, understanding, competence, cultivation, education, experience, intelligence, literacy, talent, and wisdom. I reply that we can make sense of this on my alternative, threefold synthesis: competence, cultivation, education, intelligence, and so on, all come with knowledge and true belief and remove certain kinds of ignorance. Thus, it makes perfect sense that these are mentioned as antonyms of ignorance.

Finally, one may wonder whether my alternative conception enables us to distinguish between Hannah and Kate, as described by El Kassar. Hannah is deeply and willingly ignorant about the high carbon dioxide and sulfur dioxide emissions of cruise ships (I recently found out that a single cruise trip produces roughly the same emissions as seven million cars combined do in an average year). Kate is much more open-minded, but has simply never considered the issue in any detail.

She is in a state of suspending ignorance regarding the emissions of cruise ships. I reply that they are both ignorant, at least propositionally ignorant, but that their ignorance has different, contingent features: Hannah’s ignorance is deep ignorance, Kate’s is suspending ignorance; Hannah’s ignorance is willing or intentional, Kate’s is not. These are among the contingent features of ignorance; both are ignorant and, therefore, meet the criteria that I laid out for the nature of ignorance.

The Nature and Accidental Features of Ignorance

I conclude that ignorance is the lack of propositional knowledge or true belief, the lack of objectual knowledge, or the lack of procedural knowledge. That is the nature of ignorance: each case meets this threefold disjunctive criterion. I also conclude that ignorance has a wide variety of accidental or contingent features. Various fields have drawn attention to these accidental or contingent features because they matter crucially in certain debates in those fields. It is not surprising then that the focus in mainstream epistemology is on the nature of ignorance, whereas the focus in agnotology, epistemology of race, feminist epistemology, and various other debates is on those context-dependent features of ignorance.

This is not at all to say that the nature of ignorance is more important than its accidental features. Contingent, context-dependent features of something may be significantly more important. For example, it may well be the case that we have the parents that we have essentially; that we would be someone else if we had different biological parents. If so, that is part of our nature or essence.

And yet, certain contingent and accidental features may matter more to us, such as whether or not our partner loves us. Let us not confuse the nature of something with the accidental features of it that we value or disvalue. If we get this distinction straight, there is no principled reason not to accept the threefold synthesis that I have suggested in this paper as a plausible alternative to El Kassar’s synthesis.[17]

Contact details: mail@rikpeels.nl

References

Driver, Julia. (1989). “The Virtues of Ignorance,” The Journal of Philosophy 86.7, 373-384.

El Kassar, Nadja. (2018). “What Ignorance Really Is: Examining the Foundations of Epistemology of Ignorance”, Social Epistemology, DOI: 10.1080/02691728.2018.1518498.

Le Morvan, Pierre. (2011). “On Ignorance: A Reply to Peels”, Philosophia 39.2, 335-344.

Medina, José. (2013). The Epistemology of Resistance (Oxford: Oxford University Press).

Mills, Charles. (2015). “Global White Ignorance”, in M. Gross and L. McGoey (eds.), Routledge International Handbook of Ignorance Studies (London: Routledge), 217-227.

Nottelmann, Nikolaj. (2015). “Ignorance”, in Robert Audi (ed.), Cambridge Dictionary of Philosophy, 3rd ed. (Cambridge: Cambridge University Press).

Peels, Rik. (2010). “What Is Ignorance?”, Philosophia 38, 57-67.

Peels, Rik. (2014). “What Kind of Ignorance Excuses? Two Neglected Issues”, The Philosophical Quarterly 64 (256), 478–496.

Peels, Rik, ed. (2017). Perspectives on Ignorance from Moral and Social Philosophy (New York: Routledge).

Peels, Rik. (2019). “Asserting Ignorance”, in Sanford C. Goldberg (ed.), Oxford Handbook of Assertion (Oxford: Oxford University Press), forthcoming.

Peels, Rik, and Martijn Blaauw, eds. (2016). The Epistemic Dimensions of Ignorance (Cambridge: Cambridge University Press).

Russell, Bertrand. (1980). The Problems of Philosophy (Oxford: Oxford University Press).

Schwitzgebel, Eric. (2002). “A Phenomenal, Dispositional Account of Belief”, Noûs 36.2, 249-275.

[1] El Kassar 2018, 7.

[2] El Kassar 2018, 7.

[3] E.g. Schwitzgebel 2002.

[4] Julia Driver (1989) has argued that certain moral virtues, such as modesty, imply some kind of ignorance. However, moral virtues are different from epistemic virtues, and the suggestion that something implies ignorance is different from the idea that something manifests ignorance.

[5] See Le Morvan 2011. See also various essays in Peels and Blaauw 2016; Peels 2017.

[6] See Peels 2010; 2014; 2019. See also various essays in Peels and Blaauw 2016; Peels 2017.

[7] See Russell 1980, 3.

[8] See Nottelmann 2015.

[9] If the Standard View on Ignorance is correct, then one could simply replace this with: Ignorance is a disposition of an epistemic agent that manifests itself in lack of (propositional, objectual, or procedural) knowledge.

[10] See Mills 2015, 217.

[11] See Medina 2013.

[12] El Kassar in her paper mentions Anne Meylan’s suggestion on this point. Anne Meylan has suggested – and confirmed to me in personal correspondence – that we ought to distinguish between the state of being ignorant (which is nicely captured by the Standard View or the New View) and the action or failure to act that induced that state of ignorance (that the agential and structural conceptions of ignorance refer to), such as absence of inquiry or a sloppy way of dealing with evidence. I fully agree with Anne Meylan’s distinction on this point and, as I argue in more detail below, taking this distinction into account can lead to a significantly improved account of ignorance.

[13] The disjunction is meant to be inclusive.

[15] See pp. 4-5 of her paper.

[16] As Anne Meylan has pointed out to me in correspondence, it is generally true that doxastic states are not as such morally bad; whether or not they are depends on their contingent, extrinsic features.

[17] For their helpful comments on earlier versions of this paper, I would like to thank Thirza Lagewaard, Anne Meylan, and Nadja El Kassar.

Author Information: Valerie Joly Chock & Jonathan Matheson, University of North Florida, n01051115@ospreys.unf.edu & j.matheson@unf.edu.

Matheson, Jonathan, and Valerie Joly Chock. “Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 1-9.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44H

Image by sekihan via Flickr / Creative Commons

 

Epistemic injustice occurs when someone is wronged in their capacity as a knower.[1] More and more attention is being paid to the epistemic injustices that exist in our scientific practices. In a recent paper, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. In what follows we briefly explain his argument before raising several challenges to it.

Overview

In “Fairness in Knowing: Science Communication and Epistemic Injustice”, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. First, let’s get clear on the target. According to Medvecky, science communication is in the business of distributing knowledge – scientific knowledge.

As Medvecky uses the term, ‘science communication’ is an “umbrella term for the research into and the practice of increasing public understanding of and public engagement with science.” (1394) Science communication is thus both a field and a practice, and consists of:

institutionalized science communication; institutionalized in government policies on the public understanding of and public engagement with the sciences; in the growing numbers of academic journals and departments committed to further the enterprise through research and teaching; in requirements set by funding bodies; and in the growing numbers of associations clustering under the umbrella of science communication across the globe. (1395)

Science communication involves the distribution of scientific knowledge from experts to non-experts, so science communication is in the distribution game. As such, Medvecky claims that issues of fair and just distribution arise. According to Medvecky, these issues concern both what knowledge is dispersed, as well as who it is dispersed to.

In examining the fairness of science communication, Medvecky connects his discussion to the literature on epistemic injustice (Anderson, Fricker, Medina). While exploring epistemic injustices in science is not novel, Medvecky’s focus on science communication is. To argue that science communication is epistemically unjust, Medvecky relies on Medina’s (2011) claim that credibility excesses can result in epistemic injustice. Here is José Medina,

[b]y assigning a level of credibility that is not proportionate to the epistemic credentials shown by the speaker, the excessive attribution does a disservice to everybody involved: to the speaker by letting him get away with things; and to everybody else by leaving out of the interaction a crucial aspect of the process of knowledge acquisition: namely, opposing critical resistance and not giving credibility or epistemic authority that has not been earned. (18-19)

Since credibility is comparative, credibility excesses given to members of some group can create epistemic injustice, testimonial injustice in particular, toward members of other groups. Medvecky makes the connection to science communication as follows:

While there are many well-argued reasons for communicating, popularizing, and engaging with science, these are not necessarily reasons for communicating, popularizing, and engaging only with science. Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialized treatment.

This uniqueness creates a credibility excess for science as a field. And since science communication creates credibility excess by implying that concerted efforts to communicate non-science disciplines as fields of reliable knowledge is not needed, then science communication, as a practice and as a discipline, is epistemically unjust. (1400)

While the principal target here is the field of science communication, any credibility excesses enjoyed by the field will trickle down to the practitioners within it. If science is being given a credibility excess, then those engaged in scientific practice and communication are also receiving such a comparative advantage over non-scientists.

So, according to Medvecky, science communication is epistemically unjust to knowers – knowers in non-scientific fields. Since these non-scientific knowers are given a comparative credibility deficit (in contrast to scientific knowers), they are wronged in their capacity as knowers.

The Argument

Medvecky’s argument can be formally put as follows:

  1. Science is not a unique and privileged field.
  2. If (1), then science communication creates a credibility excess for science.
  3. Science communication creates a credibility excess for science.
  4. If (3), then science communication is epistemically unjust.
  5. Science communication is epistemically unjust.

Premise (1) is motivated by claiming that there are fields other than science that are equally important to communicate, popularize, and have non-specialists engage with. Medvecky claims that not only does non-scientific knowledge exist, but such knowledge can be just as reliable as scientific knowledge, just as important to our lives, and just as in need of translation into layman’s terms. So, while scientific knowledge is surely important, it is not alone in this claim.

Premise (2) is motivated by claiming that science communication falsely represents science as a unique and privileged field since the concerns of science communication lie solely within the domain of science. By only communicating scientific knowledge, and failing to note that there are other worthy domains of knowledge, science communication falsely presents itself as a privileged field.

As Medvecky puts it, “Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialised treatment.” (1400) So, science communication falsely represents science as special. Falsely representing a field as special in contrast to other fields creates a comparative credibility excess for that field and the members of it.

So, science communication implies that other fields are not as worthy of such engagement by falsely treating science as a unique and privileged field. This gives science and scientists a comparative credibility excess to these other disciplines and their practitioners.

(3) follows validly from (1) and (2). If (1) and (2) are true, science communication creates a credibility excess for science.

Premise (4) is motivated by Medina’s (2011) work on epistemic injustice. Epistemic injustice occurs when someone is harmed in their capacity as a knower. While Fricker limited epistemic injustice (and testimonial injustice in particular) to cases where someone was given a credibility deficit, Medina has forcefully argued that credibility excesses are equally problematic since credibility assessments are often comparative.

Given the comparative nature of credibility assessments, parties can be epistemically harmed even if they are not given a credibility deficit. If other parties are given credibility excesses, a similar epistemic harm can be brought about due to comparative assessments of credibility. So, if science communication gives science a credibility excess, science communication will be epistemically unjust.

(5) follows validly from (3) and (4). If (3) and (4) are true, science communication is epistemically unjust.

The Problems

While Medvecky’s argument is provocative, we believe that it is also problematic. In what follows we motivate a series of objections to his argument. Our focus here will be on the premises that most directly relate to epistemic injustice. So, for our purposes, we are willing to grant premise (1). Even granting (1), there are significant problems with both (2) and (4). Highlighting these issues will be our focus.

We begin with our principal concerns regarding (2). These concerns are best seen by first granting that (1) is true – granting that science is not a unique and privileged field. Even granting that (1) is true, science communication would not create a credibility excess. First, it is important to try to locate the source of the alleged credibility excess. Science communicators do deserve a higher degree of credibility in distributing scientific knowledge than non-scientists. When it comes to scientific matters, we should trust the scientists more. So, the claim cannot be that non-scientists should be afforded the same amount of credibility on scientific matters as scientists.

The problem might be thought to be that scientists enjoy a credibility excess in virtue of their scientific credibility somehow carrying over to non-scientific fields where they are less credible. While Medvecky does briefly consider such an issue, this too is not his primary concern in this paper.[2] Medvecky’s fundamental concern is that science communication represents scientific questions and knowledge as more valuable than questions and knowledge in other domains. According to Medvecky, science communication does this by distributing only scientific knowledge even though science is not unique and privileged (premise (1)).

But do you represent a domain as more important or valuable just because you don’t talk about other domains? Perhaps an individual who only discussed science in every context would imply that scientific information is the only information worth communicating, but such a situation is quite different than the one we are considering.

For one thing, science communication occurs within a given context, not across all contexts. Further, since that context is expressly about communicating science, it is hard to see how one could reasonably infer that knowledge in other domains is less valuable. Let’s consider an analogy.

Philosophy professors tend to only talk about philosophy during class (or at least let’s suppose). Should students in a philosophy class conclude that other domains of knowledge are less valuable since the philosophy professor hasn’t talked about developments in economics, history, biology, and so forth during class? Given that the professor is only talking about philosophy in one given context, and this context is expressly about communicating philosophy, such inferences would be unreasonable.

A Problem of Overreach

We can further see that there is an issue with (2) because it both overgeneralizes and is overly demanding. Let’s consider these in turn. If (2) is true, then the problem of creating credibility excesses is not unique to science communication. When it comes to knowledge distribution, science communication is far from the only practice/field to have a narrow and limited focus regarding which knowledge it distributes.

So, if there are multiple fields worthy of such engagement (granting (1)), any practice/field that is not concerned with distributing all such knowledge will be guilty of generating a similar credibility excess (or at least trying to). For instance, the American Philosophical Association (APA) is concerned with distributing philosophical knowledge and knowledge related to the discipline of philosophy. They exclusively fund endeavors related to philosophy and public initiatives with a philosophical focus. If doing so is sufficient for creating a credibility excess, given that other fields are equally worthy of such attention, then the APA is creating a credibility excess for the discipline of philosophy. This doesn’t seem right.

Alternatively, consider a local newspaper. This paper is focused on distributing knowledge about local issues. Suppose that it is also involved in the community, sponsoring both local events and initiatives that make the local news more engaging. Supposing that there is nothing unique or privileged about this town, Medvecky’s argument for (2) would have us believe that the paper is creating a credibility excess for the issues of this town. This too is the wrong result.

This overgeneralization problem can also be seen by considering a practical analogy. Suppose that a bakery only sells and distributes baked goods. If there is nothing unique and privileged about baked goods – if there are other equally important goods out there (the parallel of premise (1)) – then Medvecky’s reasoning would have it that the bakery is guilty of a kind of injustice by virtue of not being in the business of distributing those other (equally valuable) goods.

The problem is that omissions in distribution don’t have the implications that Medvecky supposes. The fact that an individual or group is not in the business of distributing some kind of good does not imply that those goods are less valuable.

There are numerous legitimate reasons why one may employ limitations regarding which goods one chooses to distribute, and these limitations do not imply that the other goods are somehow less valuable. Returning to the good of knowledge, focusing on distributing some knowledge (while not distributing other knowledge), does not imply that the other knowledge is less valuable.

This overgeneralization problem leads to an overdemanding problem with (2). The overdemanding problem concerns what would be required of distributors (whether of knowledge or more tangible goods) in order to avoid committing injustice. If omissions in distribution had the implications that Medvecky supposes, then distributors, in order to avoid injustice, would have to refrain from limiting the goods they distribute.

If (2) is true, then science communication must fairly and equally distribute all knowledge in order to avoid injustice. And, as the problem of creating credibility excesses is not unique to science communication, this would apply to all other fields that involve knowledge distribution as well. The problem here is that avoiding injustice requires far too much of distributors.

An Analogy to Understand Avoiding Injustice

Let’s consider the practical analogy again to see how avoiding injustice is overdemanding. To avoid injustice, the bakery must sell and distribute much more than just baked goods. It must sell and distribute all the other goods that are as equally important as the baked ones it offers. The bakery would, then, have to become a supermarket or perhaps even a superstore in order to avoid injustice.

Requiring the bakery to offer a lot more than baked goods is not only overly demanding but also unfair. The bakery does not have the other goods it would be required to offer in order to avoid injustice. It may not even have the means needed to get these goods, which may itself be part of its reason for limiting the goods it offers.

As it is overdemanding and unfair to require the bakery to sell and distribute all goods in order to avoid injustice, it is overdemanding and unfair to require knowledge distributors to distribute all knowledge. Just as the bakery does not have non-baked goods to offer, those involved in science communication likely do not have the relevant knowledge in the other fields.

Thus, if they are required to distribute that knowledge also, they are required to do a lot of homework. They would have to learn about everything in order to justly distribute all knowledge. This is an unreasonable expectation. Even if they were able to do so, they would not be able to distribute all knowledge in a timely manner. Requiring this much of distributors would slow down the distribution of knowledge.

Furthermore, just as the bakery may not have the means needed to distribute all the other goods, distributors may not have the time or other means to distribute all the knowledge that they are required to distribute in order to avoid injustice. It is reasonable to utilize an epistemic division of labor (including in knowledge distribution), much like there are divisions of labor more generally.

Credibility Excess

A final issue with Medvecky’s argument concerns premise (4). Premise (4) claims that the credibility excess in question results in epistemic injustice. While it is true that a credibility excess can result in epistemic injustice, it need not. So, we need reasons to believe that this particular kind of credibility excess results in epistemic injustice. One reason to think that it does not has to do with the meaning of the term ‘epistemic injustice’ itself.

As it was introduced to the literature by Fricker, and as it has been used since, ‘epistemic injustice’ does not simply refer to any harms to a knower but rather to a particular kind of harm that involves identity prejudice—i.e. prejudice related to one’s social identity. Fricker claims that, “the speaker sustains a testimonial injustice if and only if she receives a credibility deficit owing to identity prejudice in the hearer.” (28)

At the core of both Fricker’s and Medina’s accounts of epistemic injustice is the relation between unfair credibility assessments and prejudices that distort the hearer’s perception of the speaker’s credibility. Prejudices about particular groups are what unfairly affect (positively or negatively) the epistemic authority and credibility that hearers grant to the members of such groups.

Mere epistemic errors in credibility assessments, however, do not create epistemic injustice. While a credibility excess may result in an epistemic harm, whether this is a case of epistemic injustice depends upon the reason why that credibility excess is given. Fricker and Medina both argue that in order for an epistemic harm to be an instance of epistemic injustice, it must be systematic. That is, the epistemic harm must be connected to an identity prejudice that renders the subject at the receiving end of the harm susceptible to other types of injustices besides testimonial.

Fricker argues that epistemic injustice is a product of prejudices that “track” the subject through different dimensions of social activity (e.g. economic, professional, political, religious, etc.). She calls these “tracker prejudices” (27). When tracker prejudices lead to epistemic injustice, this injustice is systematic because it is systematically connected to other kinds of injustice.

Thus, a prejudice is systematic when it persistently affects the subject’s credibility in various social directions. Medina accepts this and argues that credibility excess results in epistemic injustice when it is caused by a pattern of wrongful differential treatment that stems in part from mismatches between reality and the social imaginary, which he defines as the collectively shared pool of information that provides the social perceptions against which people assess each other’s credibility (Medina 2011).

He claims that a prejudiced social imaginary is what establishes and sustains epistemic injustices. As such, prejudices are crucial in determining whether credibility excesses result in epistemic injustice. If the credibility excess stems from a systematically prejudiced social imaginary, then this is the case. If systematic prejudices are absent, then, even if there is credibility excess, there is no epistemic injustice.

Systemic Prejudice

For there to be epistemic injustice, then, the credibility excess must carry over across contexts and must be produced and sustained by systematic identity prejudices. This does not happen in Medvecky’s account given that the kind of credibility excess that he is concerned with is limited to the context in which science communication occurs.

Thus, even if there were credibility excess, and this credibility excess lead to epistemic harms, such harms would not amount to epistemic injustice given that the credibility excess does not extend across contexts. Further, the kind of credibility excess that Medvecky is concerned with is not linked to systematic identity prejudices.

In his argument, Medvecky does not consider prejudices. Rather than credibility excesses being granted due to a prejudiced social imaginary, Medvecky argues that the credibility excess attributed to science communicators stems from omission. According to him, science communication as a practice and as a discipline is epistemically unjust because it creates credibility excess by implying (through omission) that science is the only reliable field worthy of engagement.

On Medvecky’s account, the reason for the attribution of credibility excess is not prejudice but rather the limited focus of science communication. Thus, he argues that merely by not distributing knowledge from fields other than science, science communication creates a credibility excess for science that is worthy of the label of ‘epistemic injustice’. Medvecky acknowledges that Fricker would not agree that this credibility assessment results in injustice given that it is based on credibility excess rather than credibility deficits, which is itself why he bases his argument on Medina’s account of epistemic injustice.

However, given that Medvecky ignores the kind of systematic prejudice that is necessary for epistemic injustice under Medina’s account, it seems like Medina would not agree, either, that these cases are of the kind that result in epistemic injustice.[3] Even if omissions in the distribution of knowledge had the implications that Medvecky supposes, and it were the case that science communication indeed created a credibility excess for science in this way, this kind of credibility excess would still not be sufficient for epistemic injustice as it is understood in the literature.

Thus, it is not the case that science communication is, as Medvecky argues, fundamentally epistemically unjust because the reasons why the credibility excess is attributed have nothing to do with prejudice and do not occur across contexts. While it is true that there may be epistemic harms that have nothing to do with prejudice, such harms would not amount to epistemic injustice, at least as it is traditionally understood.

Conclusion

In “Fairness in Knowing: Science Communication and Epistemic Injustice”, Fabien Medvecky argues that epistemic injustice lies at the very foundation of science communication. While we agree that there are numerous ways that scientific practices are epistemically unjust, the fact that science communication involves only communicating science does not have the consequences that Medvecky maintains.

We have seen several reasons to deny that failing to distribute other kinds of knowledge implies that they are less valuable than the knowledge one does distribute, as well as reasons to believe that the term ‘epistemic injustice’ wouldn’t apply to such harms even if they did occur. So, while thought provoking and bold, Medvecky’s argument should be resisted.

Contact details: j.matheson@unf.edu, n01051115@ospreys.unf.edu

References

Dotson, K. (2011) Tracking epistemic violence, tracking patterns of silencing. Hypatia 26(2): 236–257.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Medina, J. (2011). The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology, 25(1), 15–35.

Medvecky, F. (2018). Fairness in Knowing: Science Communication and Epistemic Justice. Science and Engineering Ethics 24: 1393-1408.

[1] This is Fricker’s description, See Fricker (2007, p. 1).

[2] Medvecky considers Richard Dawkins being given more credibility than he deserves on matters of religion due to his credibility as a scientist.

[3] A potential response to this point could be to consider scientism as a kind of prejudice akin to sexism or racism. Perhaps an argument can be made where an individual has the identity of ‘science communicator’ and receives credibility excess in virtue of an identity prejudice that favors science communicators. Even still, to count as epistemic injustice this excess must track the individual across contexts, as the identities related to sexism and racism do. For that to be the case, a successful argument must be given for there being a ‘pro science communicator’ prejudice that is similar in effect to ‘pro male’ and ‘pro white’ prejudices. If this is what Medvecky has in mind, then we need to hear much more about why we should buy the analogy here.

Author Information: Kamili Posey, Kingsborough College, Kamili.Posey@kbcc.cuny.edu.

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-16.

Kamili Posey’s article will be posted over two instalments. The pdf of the article gives specific page references, and includes the entire essay. Shortlink: https://wp.me/p1Bfg0-41m

Image by Walt Stoneburner via Flickr / Creative Commons

 

If you consider the recent philosophical literature on implicit bias research, then you would be forgiven for thinking that the problem of successful interventions into implicit bias falls into the category of things that are resolved. If you consider the recent social psychological literature on interventions into implicit bias, then you would come away with a similar impression. The claim is that implicit bias is epistemically harmful because we profess to believe one thing while our implicit attitudes tell a different story.

Strategy Models and Discrepancy Models

Implicit bias is socially harmful because it maps onto our real-world discriminatory practices, e.g., workplace discrimination, health disparities, racist police shootings, and identity-prejudicial public policies. Consider the results of Greenwald et al.’s (1998) Implicit Association Test. Consider also the results of Correll et al.’s (2002) “Shooter Bias.” If cognitive interventions are possible, and specifically implicit cognitive interventions, then they can help knowers implicitly manage automatic stereotype activation. Do these interventions lead to real-world reductions of bias?

Linda Alcoff (2010) notes that it is difficult to see how implicit, nonvolitional biases (e.g., those at the root of social and epistemic ills like race-based police shootings) can be remedied by explicit epistemic practices.[1] I would follow this by noting that it is equally difficult to see how nonvolitional biases can be remedied by implicit epistemic practices as well.

Jennifer Saul (2017) responds to Alcoff’s (2010) query by pointing to social psychological experiments conducted by Margo Monteith (1993), Jack Glaser and Eric D. Knowles (2007), Gordon B. Moskowitz and Peizhong Li (2011), Saaid A. Mendoza et al. (2010), Irene V. Blair et al. (2001), and Kerry Kawakami et al. (2005).[2] These studies suggest that implicit self-regulation of implicit bias is possible. Saul notes that philosophers with objections like Alcoff’s, and presumably like mine, ought “not just to reflect upon the problem from the armchair – at the very least, one should use one’s laptop to explore the internet for effective interventions.”[3]

But I think this recrimination rings rather hollow. How entitled are we to extrapolate from social psychological studies in the manner that Saul advocates? How entitled are we to assume the epistemic superiority of scientific research on racism, sexism, etc. over the phenomenological reporting of marginalized knowers? Lastly, how entitled are we to claims about the real-world applicability of these study results?[4] My guess is that the devil is in the details. My guess is also that social psychologists have not found the silver bullet for remedying implicit bias. But let’s follow Saul’s suggestion and not just reflect from the armchair.

A caveat: the following analysis is not intended to be an exhaustive or thorough refutation of what is ultimately a large body of social psychological literature. Instead, it is intended to cast a bit of doubt on how these models are used by philosophers as successful remedies for implicit bias. It is intended to cast a bit of doubt on the idea that remedies for racist, sexist, homophobic, and transphobic discrimination are merely a training session or reflective exercise away.

This type of thinking devalues the very real experiences of those who live through racism, sexism, homophobia, and transphobia. It devalues how pervasive these experiences are in American society and the myriad ways in which the effects of discrimination seep into the marrow of marginalized bodies and marginalized communities. Worse still, it implies that marginalized knowers who claim, “You don’t understand my experiences!” are compelled to contend with the hegemonic role of “Science” that continues to speak over their own voices and about their own lives.[5] But again, back to the studies.

Four Methods of Remedy

I break up the above studies into four intuitive model types: (1) strategy models, (2) discrepancy models, (3) IAT models, and (4) egalitarian goal models. (I am not a social scientist, so the operative word here is “intuitive.”) Let’s first consider Kawakami et al. (2005) and Mendoza et al. (2010) as examples of strategy models. Kawakami et al. used Devine and Monteith’s (1993) notion of a negative stereotype as a “bad habit” that a knower needs to “kick” to model strategies that aid in the inhibition of automatic stereotype activation, or the inhibition of “increased cognitive accessibility of characteristics associated with a particular group.”[6]

In a previous study, Kawakami et al. (2000) presented research participants with photographs of black individuals and white individuals, each with stereotypical and non-stereotypical traits listed underneath, and asked them to respond “No” to stereotypical traits and “Yes” to non-stereotypical traits.[7] The study found that “participants who were extensively trained to negate racial stereotypes initially also demonstrated stereotype activation, this effect was eliminated by the extensive training.

Furthermore, Kawakami et al. found that practice effects of this type lasted up to 24 h following the training.”[8] Kawakami et al. (2005) used this training model to ground an experiment aimed at strategies for reducing stereotype activation in the preference of men over women for leadership roles in managerial positions. Despite the training, they found that there was “no difference between Nonstereotypic Association Training and No Training conditions…participants were indeed attempting to choose the best candidate overall, in these conditions there was an overall pattern of discrimination against women relative to men in recommended hiring for a managerial position (Glick, 1991; Rudman & Glick, 1999)” [emphasis mine].[9]

Substantive conclusions are difficult to draw from a single study, but one critical point is that learning occurred in the training while improved stereotype inhibition did not. What, exactly, are we to make of this result? Kawakami et al. (2005) claimed that “similar levels of bias in both the Training and No Training conditions implicates the influence of correction processes that limit the effectiveness of training.”[10] That is, they attributed the lack of influence of corrective processes to a variety of contributing factors that limited the effectiveness of the strategy itself.

Notice, however, that this does not implicate the strategy as a failed one. Most notably Kawakami et al. found that “when people have the time and opportunity to control their responses [they] may be strongly shaped by personal values and temporary motivations, strategies aimed at changing the automatic activation of stereotypes will not [necessarily] result in reduced discrimination.”[11]

This suggests that although the strategies failed to reduce stereotype activation, they may still be helpful in limited circumstances “when impressions are more deliberative.”[12] One wonders under what conditions such impressions can be more deliberative. More than that, how useful are such limited-condition strategies for dealing with everyday life and everyday automatic stereotype activation?

Mendoza et al. (2010) tested the effectiveness of “implementation intentions” as a strategy to reduce the activation or expression of implicit stereotypes using the Shooter Task.[13] They tested both “distraction-inhibiting” implementation intentions and “response-facilitating” implementation intentions. Distraction-inhibiting intentions are strategies “designed to engage inhibitory control,” such as inhibiting the perception of distracting or biasing information, while “response-facilitating” intentions are strategies designed to enhance goal attainment by focusing on specific goal-directed actions.[14]

In the first study, Mendoza et al. asked participants to repeat the on-screen phrase, “If I see a person, then I will ignore his race!” in their heads and then type the phrase into the computer. This resulted in study participants making fewer errors in the Shooter Task. But let’s come back to whether and how we might extrapolate from these results. The second study compared a simple-goal strategy with an implementation intention strategy.

Study participants in the simple-goal strategy group were asked to follow the strategy, “I will always shoot a person I see with a gun!” and “I will never shoot a person I see with an object!” Study participants in the implementation intention strategy group were asked to use a conditional, if-then, strategy instead: “If I see a person with an object, then I will not shoot!” Mendoza et al. found that a response-facilitating implementation intention “enhanced controlled processing but did not affect automatic stereotyping processing,” while a distraction-inhibiting implementation intention “was associated with an increase in controlled processing and a decrease in automatic stereotyping processes.”[15]

How to Change Both Action and Thought

Notice that if the goal is to reduce automatic stereotype activation through reflexive control, only a distraction-inhibiting strategy achieved the desired effect. Notice also how the successful use of a distraction-inhibiting strategy may require a type of “non-messy” social environment unachievable outside of a laboratory experiment.[16] Or, as Mendoza et al. (2010) rightly note: “The current findings suggest that the quick interventions typically used in psychological experiments may be more effective in modulating behavioral responses or the temporary accessibility of stereotypes than in undoing highly edified knowledge structures.”[17]

The hope, of course, is that distraction-inhibiting strategies can help dominant knowers reduce automatic stereotype activation and response-facilitating strategies can help dominant knowers internalize controlled processing such that negative bias and stereotyping can be (one day) reflexively controlled as well. But these are only hopes. The only thing that we can rightly conclude from these results is that if we ask a dominant knower to focus on an internal command, they will do so. The result is that the activation of negative bias fails to occur.

This does not mean that the knower has reduced their internalized negative biases and prejudices or that they can continue to act on the internal commands in the future (in fact, subsequent studies reveal the effects are short-lived[18]). As Mendoza et al. also note: “In psychometric terms, these strategies are designed to enhance accuracy without necessarily affecting bias. That is, a person may still have a tendency to associate Black people with violence and thus be more likely to shoot unarmed Blacks than to shoot unarmed Whites.”[19] Despite hope for these strategies, there is very little to support their real-world applicability.

Hunting for Intuitive Hypocrisies

I would extend a similar critique to Margo Monteith’s (1993) discrepancy model. Monteith’s (1993) often-cited study uses two experiments to investigate prejudice-related discrepancies in the behaviors of low-prejudice (LP) and high-prejudice (HP) individuals and their ability to engage in self-regulated prejudice reduction. In the first experiment, (LP) and (HP) heterosexual study participants were asked to evaluate two law school applications, one from an implied gay applicant and one from an implied heterosexual applicant. Study participants “were led to believe that they had evaluated a gay law school applicant negatively because of his sexual orientation;” they were tricked into a “discrepancy-activated condition,” a condition at odds with their beliefs about their own prejudice.[20]

All of the study participants were then told that the applications were identical and that those who had rejected the gay applicant had done so because of the applicant’s sexual orientation. It is important to note that the applicants’ qualifications were not, in fact, identical. The gay applicant’s application materials were made to look worse than the heterosexual applicant’s materials. This was done to compel the rejection of the gay applicant.

Study participants were then provided a follow-up questionnaire and essay allegedly written by a professor who wanted to know (a) “why people often have difficulty avoiding negative responses toward gay men,” and (b) “how people can eliminate their negative responses toward gay men.”[21] Researchers asked study participants to record their reactions to the faculty essay and write down as much as they could remember about what they read. They were then told about the deception in the experiment and why such deception was incorporated into the study.

Monteith (1993) found that “low and high prejudiced subjects alike experienced discomfort after violating their personal standards for responding to a gay man, but only low prejudiced subjects experienced negative self-directed affect.”[22] Low-prejudiced (LP) “discrepancy-activated subjects” also spent more time reading the faculty essay and “showed superior recall for the portion of the essay concerning why prejudice-related discrepancies arise.”[23]

The “discrepancy experience” generated negative self-directed affect, or guilt, for (LP) study participants with the hope that the guilt would (a) “motivate discrepancy reduction (e.g., Rokeach, 1973)” and (b) “serve to establish strong cues for punishment (cf. Gray, 1982).”[24] The idea here is that the experiment results point to the existence of a self-regulatory mechanism that can replace automatic stereotype activation with “belief-based responses;” however, “it is important to note that the initiation of self-regulatory mechanisms is dependent on recognizing and interpreting one’s responses as discrepant from one’s personal beliefs.”[25]

The discrepancy between what one is shown to believe and what one professes to believe (whether real or manufactured, as in the experiment) is aimed at getting knowers to engage in heightened self-focus due to negative self-directed affect. The hope in Monteith’s (1993) study is that this self-directed affect will lead to a kind of corrective belief-making process that is both less prejudicial and future-directed.

But if it’s guilt that’s doing the psychological work in these cases, then it’s not clear that knowers wouldn’t find other means of assuaging such feelings. Why wouldn’t it be the case that generating negative self-directed affect would point a knower toward anything they deem necessary to restore a more positive sense of self? To this, Monteith made the following concession:

Steele (1988; Steele & Liu, 1983) contended that restoration of one’s self-image after a discrepancy experience may not entail discrepancy reduction if other opportunities for self-affirmation are available. For example, Steele (1988) suggested that a smoker who wants to quit might spend more time with his or her children to resolve the threat to the self-concept engendered by the psychological inconsistency created by smoking. Similarly, Tesser and Cornell (1991) found that different behaviors appeared to feed into a general “self-evaluation reservoir.” It follows that prejudice-related discrepancy experiences may not facilitate the self-regulation of prejudiced responses if other means to restoring one’s self-regard are available [emphasis mine].[26]

Additionally, she noted that even if individuals are committed to reducing or “unlearning” automatic stereotyping, they “may become frustrated and disengage from the self-regulatory cycle, abandoning their goal to eliminate prejudice-like responses.”[27] Cognitive exhaustion, or cognitive depletion, can occur after intergroup exchanges as well. This may make it even less likely that a knower will continue to feel guilty, and to use that guilt to inhibit the activation of negative stereotypes, when they find themselves struggling cognitively. There is also the issue of a kind of lab-based, or experiment-based, cognitive priming. I pick up this idea, along with the final two models of implicit interventions, in the next part.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

[2] Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

[3] Saul, Jennifer (2017), p. 466.

[4] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

[5] I owe this critical point in its entirety to the work of Lacey Davidson and her presentation, “When Testimony Isn’t Enough: Implicit Bias Research as Epistemic Injustice” at the Feminist Epistemologies, Methodologies, Metaphysics, and Science Studies (FEMMSS) conference in Corvallis, Oregon in 2018. Davidson notes that the work of philosophers of race and critical race theorists often takes a backseat to the projects of philosophers of social science who engage with the science of racialized attitudes as opposed to the narratives and/or testimonies of those with lived experiences of racism. Davidson describes this as a type of epistemic injustice against philosophers of race and critical race theorists. She also notes that philosophers of race and critical race theorists are often people of color while the philosophers of social science are often white. This dimension of analysis is important but unexplored. Davidson’s work highlights how epistemic injustice operates within the academy to perpetuate systems of racism and oppression under the guise of “good science.” Her argument was inspired by the work of Jeanine Weekes Schroer on the problematic nature of current research on stereotype threat and implicit bias in “Giving Them Something They Can Feel: On the Strategy of Scientizing the Phenomenology of Race and Racism,” Knowledge Cultures 3(1), 2015.

[6] Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69. See also: Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

[7] Kawakami et al. (2005), p. 69. See also: Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

[8] Kawakami et al. (2005), p. 69.

[9] Kawakami et al. (2005), p. 73.

[10] Kawakami et al. (2005), p. 73.

[11] Kawakami et al. (2005), p. 74.

[12] Kawakami et al. (2005), p. 74.

[13] The Shooter Task refers to a computer simulation experiment where images of black and white males appear on a screen holding a gun or a non-gun object. Study participants are given a short response time and tasked with pressing a button, or “shooting” armed images versus unarmed images. Psychological studies have revealed a “shooter bias” in the tendency to shoot black, unarmed males more often than unarmed white males. See: Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

[14] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

[15] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[16] A “messy environment” presents additional challenges to studies like the one discussed here. As Kees Keizer, Siegwart Lindenberg, and Linda Steg (2008) claim in “The Spreading of Disorder,” people are more likely to violate social rules when they see that others are violating them. I can only imagine that this is applicable to epistemic rules as well. I mention this here to suggest that the “cleanliness” of the social environment of social psychological studies such as the one by Mendoza, Gollwitzer, and Amodio (2010) presents an additional obstacle in extrapolating the resulting behaviors of research participants to the public-at-large. Short of mass hypnosis, how could the strategies used in these experiments, strategies that are predicated on the noninterference of other destabilizing factors, be meaningfully applied to everyday life? There is a tendency in the philosophical literature on implicit bias and stereotype threat to outright ignore the limited applicability of much of this research in order to make critical claims about interventions into racist, sexist, homophobic, and transphobic behaviors. Philosophers would do well to recognize the complexity of these issues and to be more cautious about the enthusiastic endorsement of experimental results.

[17] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[18] Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[19] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[20] Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

[21] Monteith (1993), p. 474.

[22] Monteith (1993), p. 475.

[23] Monteith (1993), p. 477.

[24] Monteith (1993), p. 477.

[25] Monteith (1993), p. 477.

[26] Monteith (1993), p. 482.

[27] Monteith (1993), p. 483.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

Please refer to:

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

 

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2017, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2017, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005: 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds), involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than those monitored only by their results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather “denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency” (Epstein 1996: 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

 

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort, with the skills and resources to do this sort of monitoring, developing competence but with interests more closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to more deeply engage with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: the recordings seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review, 111, no. 2 (2017) pp. 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2017) 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005), 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author Information: Manuel Padilla Cruz, Universidad de Sevilla, mpadillacruz@us.es

Padilla Cruz, Manuel. “One Thing is Testimonial Injustice and Another Is Conceptual Competence Injustice.” Social Epistemology Review and Reply Collective 7, no. 3 (2018): 9-19.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Vi

Please refer to:

Image by Jon Southcoasting via Flickr / Creative Commons

 

Derek E. Anderson’s (2017) identification and characterisation of conceptual competence injustice has recently met some resistance from Podosky and Tuckwell (2017). They have denied the existence of this new type of epistemic injustice on the grounds that the wronging it denotes may be subsumed by testimonial injustice: “instances of conceptual competence injustice can be accurately characterised as instances of testimonial injustices” (Podosky and Tuckwell 2017: 26). Additionally, they have questioned the reasons that led Anderson (2017) to distinguish this epistemic injustice from testimonial, hermeneutical and contributory injustices (Podosky and Tuckwell 2017: 26-30).

Criticising the methodology followed by Podosky and Tuckwell (2017) in their attempt to prove that conceptual competence injustice falls within testimonial injustice, Anderson (2018) has underlined that conceptual competence injustice is a structural injustice and a form of competence injustice –i.e. an unfair misappraisal of skills– which should be retained as a distinct type of epistemic injustice because of its theoretical significance and usefulness. Causal etiology is not a necessary condition on conceptual competence injustice, he explains, and conceptual competence injustice, as opposed to testimonial injustice, need not be perpetrated by social groups that are negatively biased against a particular identity.

The unjust judgements giving rise to it do not necessarily have to be connected with testimony, even though some of them may originate in lexical problems and mistakes in the linguistic expressions a speaker resorts to when dispensing it. Accordingly, testimonial injustice and conceptual competence injustice may be said to be different kinds of injustice and have diverse effects: “It is not necessary that a person’s testimony be disbelieved, ignored, or pre-empted in an episode of CC [conceptual competence] injustice. CC injustice involves only an unjust judgment about a person’s ability to think well using certain concepts” (Anderson 2018: 31).

Welcoming the notion of conceptual competence injustice, I suggested in a previous contribution (Padilla Cruz 2017a) that it could be borrowed by the field of linguistic pragmatics in order to conceptualise an undesired perlocutionary effect of verbal interaction: misappraisals of a speaker’s actual conceptual and lexical abilities as a result of lack or misuse of vocabulary. Relying on Sperber and Wilson’s (1986/1995) description of intentional-input processing as a relevance-driven activity and of comprehension as a process of mutual parallel adjustment, where the mind carries out a series of incredibly fast simultaneous tasks that depend on decoding, inference, mindreading and emotion-reading, I also showed that those misappraisals result from deductions. A speaker’s alleged unsatisfactory performance makes manifest assumptions regarding her[1] problems with words, which are fed as weakly implicated premises to inferential processes and related to other detrimental assumptions that are made salient by prejudice.

In so doing, I did not purport to show, as Podosky and Tuckwell wrongly think, “how epistemic injustice manifests in the field of relevance theory” (2017: 23) or that “conceptual competence injustice is particularly useful in a relevance theoretical model of linguistic pragmatics” (2017: 30). Rather, my intention was to propose introducing the notion of conceptual competence injustice into general linguistic pragmatics as a mere way of labelling a type of prejudicial implicature, as they themselves rightly put it (Podosky and Tuckwell 2017: 30). The derivation of that sort of implicature, however, can be accounted for –and this is where relevance theory comes into the picture– on the basis of the cognitive processes that Sperber and Wilson’s (1986/1995) framework describes and of its conceptual apparatus.

In another contribution (Padilla Cruz 2017b), I clarified that, as a cognitive pragmatic framework, relevance theory (Sperber and Wilson 1986/1995) is concerned with the processing and comprehension of the verbal and non-verbal intentional stimuli produced in human communication. It very satisfactorily explains how hearers forge interpretative hypotheses and why they select only one of them as the plausibly intended interpretation. Relevance theorists are also interested in the generation of a variety of effects –e.g. poetic (Pilkington 2000), humorous (Yus Ramos 2016), etc.– and successfully account for them.

Therefore, the notion of conceptual competence injustice can only be useful to relevance-theoretic pragmatics as a label to refer to one of the (pernicious) effects that may originate as a consequence of the constant search for optimal relevance of intentional stimuli. I will not return to these issues here, as I consider them duly addressed in my previous contribution (Padilla Cruz 2017b).

My aim in this reply is to lend support to Anderson’s (2017) differentiation of conceptual competence injustice as a distinct type of epistemic injustice. I seek to argue that, ontologically and phenomenologically, conceptual competence injustice must be retained in the field of social epistemology as a helpful category of injustice because it refers to a wronging whose origin and scope, so to say, differ from those of testimonial injustice. Testimonial injustice stems from (mis)judgements pertaining to the output of an action or epistemic practice wherein epistemic agents may participate or be engaged. The action in question is giving testimony and its output is the very testimony given. The scope of testimonial injustice, therefore, is the product, the result of that action or epistemic practice.

In other words, testimonial injustice targets the ability to generate an acceptable product as a consequence of finding it not to satisfy certain expectations or requirements, or to be defective in some dimensions. In contrast, conceptual competence injustice denotes an unfairness that is committed not because of the output of what is done with words –i.e. informing and the dispensed information– but because of the very linguistic tools wherewith an individual performs that action –i.e. the very words that she makes use of– and supposed underlying knowledge. To put it differently, the scope of conceptual competence injustice is the lexical items wherewith testimony is dispensed, which lead prejudiced individuals to doubt the conceptual and lexical capacities of unprivileged individuals.

In order to show that the scopes of testimonial and conceptual competence injustices vary, I will be drawing from the seminal and most influential work on communication by philosopher Herbert P. Grice (1957, 1975).[2] This will also encourage me to suggest that the notion of testimonial injustice (Fricker 2003, 2007) could even be refined and elaborated on. I will argue that this injustice may also be perpetrated when a disadvantaged individual is perceived not to meet requirements pertaining to testimony other than truthfulness.

Content Characteristics or Requirements of (Good) Testimony

As an epistemic practice, dispensing testimony, or information, could be characterised, along Grice’s (1957, 1975) lines, as a cooperative activity. Testimony is given because an individual may be interested in imparting it for a variety of reasons –e.g. influencing others, appearing knowledgeable, contradicting previous ideas, etc.– and/or because it may benefit (an)other individual(s), who might (have) solicit(ed) it for another variety of reasons –e.g. learning about something, strengthening ideas, changing his worldview, etc. As an activity that brings together various individuals in joint action, providing testimony is subject to certain constraints or requirements for testimony to be properly or adequately dispensed. Let us call those constraints or requirements, using philosopher John L. Austin’s (1962) terminology, felicity conditions.

Some of those felicity conditions pertain to the individuals or interlocutors engaged in the epistemic practice. The dispenser of testimony –i.e. the speaker or informer– must obviously possess certain (true) information to dispense, have the ability to impart it and pursue some goal when giving it. In turn, the receiver of testimony should, but need not, be interested in it and make this manifest by explicit mention or elicitation of the testimony.

Other felicity conditions concern the testimony to be provided. For instance, it must be well supported, reliable and trustworthy. This is the sort of testimony that benevolent and competent informers dispense (Wilson 1999; Sperber et al. 2010), and the one on which the notion of testimonial injustice focuses (Fricker 2003, 2007). Making use again of Grice’s (1957, 1975) ideas, let us say that, for testimony to be appropriately imparted, it must satisfy a requirement of truthfulness or quality. Indeed, the maxim of quality of his Cooperative Principle prompts individuals to give information that is true and to refrain from saying falsehoods or things for which they lack adequate evidence.

But not only must testimony be truthful; for it to be properly dispensed, the information must also be both sufficient and relevant. Imagine, for instance, that someone was requested to tell the story of Little Red Riding Hood. For the narration to be complete, it should not only include details about who such a character was, where she lived, the fact that she had a grandmother who lived at some distance in the countryside, her grandmother’s conditions or their relationship, but also about what had happened to Little Red Riding Hood’s grandmother one day before receiving her visit and what happened to Little Red Riding Hood upon finding the wolf lying on the bed, disguised as the grandmother.

If the narrator mentioned the former details but omitted the latter, her narration, regardless of the fact that what she said about the characters’ identity and residence was undeniably true, would not be fully satisfactory, as it would not contain enough, necessary or expected information. Her testimony about Little Red Riding Hood would not be considered sufficient; something –maybe a key fragment– was missing for the whole story to be known, correctly understood and appraised.

Imagine now that all the details about the characters, their residence and relationship were present in the narration, but, upon introducing the wolf, the narrator started to ramble and talked about the animal species wolves belong to, their most remarkable features, the fact that these animals are in danger of extinction in certain regions of Europe or that they were considered to have magical powers in a particular mythology. Although what the narrator said about the three characters is unquestionably true and the story itself is told in its entirety, it would not have been told in the best way possible, as it includes excessive, unnecessary and unrelated information.

Again, along Gricean (1957, 1975) lines, it may be said that testimony must meet certain requirements or satisfy certain expectations about its quantity and relation. Actually, while his maxim of quantity incites individuals to give the expected amount of information depending on the purpose of a communicative exchange and prevents them from retaining or omitting expected or indispensable information, his maxim of relation causes them to supply information that is relevant or connected with the purpose of the exchange. Even if the provided information is true, failure to satisfy those requirements would render it inadequately given.

To the best of my knowledge, the notion of testimonial injustice as originally formulated by Fricker (2003, 2007) overlooks these requirements of quantity and relation, which solely pertain to the content of what is said. Accordingly, this injustice could also be argued to be liable to be inflicted whenever an informer imparts unreliable or not well-evidenced information, and also when she fails to add necessary information or mentions irrelevant details or issues. If she did so, her ability to appropriately dispense information could be questioned and she could subsequently be downgraded as an informer.

Testimony from the 2009 trial of Cambodian war criminal Duch. Image by Khmer Rouge Tribunal (ECCC) via Flickr / Creative Commons

 

Manner Characteristics or Requirements of (Good) Testimony

Testimony may be claimed to be adequately given when it is true, sufficient and relevant, but there are additional requirements that testimony should meet for it to be adequately imparted. Namely, the information must be presented in an orderly, clear and unambiguous way. How would you react if, when being told the story of Little Red Riding Hood, your interlocutor gave you all the necessary, relevant and true details –and nothing more– but she changed the order of the events, did not make it clear whom the wolf attacked first or what Little Red Riding Hood put in her basket, or resorted to unusual, difficult or imprecise lexical terms? Probably, you would say that the story was told, but many issues would not be crystal clear to you, so you would have difficulty forming a clear picture of how, when and why the events in the story happened.

Testimony may also be considered to be well dispensed when it is given in a good manner by correctly ordering events and avoiding both obscurity and ambiguity of expression. Order, clarity and ambiguity are parameters that do not have to do with what is said –i.e. the content– but with how what is said is said –i.e. its linguistic form. Accordingly, testimony may be asserted to be correctly imparted when it meets certain standards or expectations that only concern the manner in which it is given.[3] Some of those standards or expectations are connected with the properties of the linguistic choices that the speaker makes when wording or phrasing testimony, and others are determined by cultural factors.

For example, for a narration to count as a fairy tale, it would have to begin with the traditional and recurrent formula “Once upon a time” and then proceed by setting a background that enables identification of characters and situates the events. Similarly, for an essay to be regarded as a good, publishable research paper, it must contain, in terms of structure, an abstract, an introductory section where the state of the art of the issue to be discussed is summarised, the goals of the paper are stated, the thesis is alluded to and, maybe, the structure of the paper is explained.

Then, the essay must unfold in a clear and logically connected way, through division of the contents in various sections, each of which must deal with what is referred to in its heading, etc. In terms of expression, the paper must contain technical or specialised terminology and be sufficiently understandable. Many of these expectations are motivated by specific conventions about discourse or text genres.

Inability or failure to present information in the appropriate manner or to comply with operative conventions may also incite individuals to challenge an informer’s capacity to dispense it. Although the informer may be credited with being knowledgeable about a series of issues, she may be assessed as a bad informer because her performance is not satisfactory in terms of the linguistic means she resorts to in order to address them or her abidance by governing conventions. However, since such an assessment is motivated not by the quality, quantity or relation of the content of testimony, but by the tools with and the way in which the informer produces her product, its scope or target is obviously different.

Different Scopes, Distinct Types of Epistemic Injustice

The current notion of testimonial injustice only takes into account one of the three features of (well dispensed) testimony alluded to above: namely, quality or truthfulness. A more fine-grained conceptualisation of it should also consider two other properties, quantity and relation, insofar as informers’ capacity to provide testimony may be doubted if they fail to give expected information and/or say irrelevant things or add unnecessary details. Indeed, quality, quantity and relation are dimensions that are connected with the content of the very information dispensed –i.e. what is said– or the product of the epistemic practice of informing. Testimonial injustice, therefore, should be characterised as the epistemic injustice liable to be inflicted whenever testimony is found deficient or unsatisfactory on the grounds of these three dimensions pertaining to its content.

What happens, then, with the other requirement of good testimony, namely, manner? Again, to the best of my knowledge, Fricker’s (2003, 2007) description of testimonial injustice does not refer to its likely perpetration when an individual is judged not to impart testimony in an allegedly right manner. And, certainly, this characteristic of good testimony may affect considerations about how suitably it is given.

Dispensing information in a messy, obscure and/or ambiguous way could be enough to degrade an individual as an informer. She could say enough true and relevant things, yes, but she could say them in an inappropriate way, thus hindering or impeding understanding. Should, then, the manner in which testimony is provided be used as grounds to wrong an informer or to question a person’s capacities as such? Although the manner in which testimony is imparted may certainly influence assessments thereof, there is a substantial difference.

Failure to meet requirements of quality, quantity and relation, and failure to meet requirements of manner are certainly not the same phenomenon. The former has to do with the content of what is said, with the product or result of an activity; the latter, in contrast, as the name indicates, has to do with the way in which what is said is actually said, with the tools deployed to accomplish the activity. Testimony may be incorrectly dispensed because of its falsity, insufficiency or irrelevance, but it may also be inappropriately imparted because of how it is given –this is undeniable, I would say.

The difference between quality, quantity and relation, on the one hand, and manner, on the other, is a difference between the product and its content, and the tools used to create that product. Accordingly, testimonial injustice and conceptual competence injustice should be kept apart as two distinct types of epistemic injustice, because the respective scopes of the judgements in which each injustice originates differ. While in the former the issue is the content of testimony, in the latter what is at stake is the means to dispense it, which unveil or suggest conceptual deficits or lack of mastery of certain concepts.

Testimony is dispensed by means of linguistic elements that somehow capture –or metarepresent, in the specialised cognitive-pragmatic terminology (Wilson 1999; Sperber 2000)– the thoughts that a speaker entertains, or the information that she possesses, and is interested in making known to an audience. Such elements are words, which are meaningful units made of strings of recognisable sounds –i.e. allophones, or contextual realisations of phonemes, in the terminology of phonetics and phonology– which make up stems and various types of morphemes –prefixes, infixes and suffixes– conveying lexical and grammatical information. More importantly, words are arranged in more complex meaningful units –namely, phrases– and these, in turn, give rise to larger, and still more meaningful, units –namely, clauses and sentences. Manner is connected with the lexical units chosen and their syntactic arrangements when communicating and, for the sake of this paper, when providing testimony.

Speakers need to constantly monitor their production and their interlocutors’ reactions, which often cause them to revise what they have just said, reformulate what they are saying or are about to say, expand or elaborate on it, etc. As complex an activity as speaking is, it is not exempt from problems. At a lexical level, the speaker may fail to use the adequate words because she lacks them or has trouble finding them at a particular time owing to a variety of factors –e.g. tiredness, absentmindedness, etc. (Mustajoki 2012). The chosen words may also diverge from those normally used by other language users to refer to particular concepts. This happens when speakers have mapped those concepts onto different lexical items, or when they have mapped them not onto single words, but onto more complex units like phrases or even whole sentences (Sperber and Wilson 1997).

The selected terms may alternatively be too general, so the audience somehow has to inferentially adjust or fine-tune their denotation because of its broadness. Consider, for example, placeholders like “that thing”, “the stuff”, etc. used to refer to something for which there is a more specific term, or hypernyms like ‘animal’ instead of the more precise term ‘duck-billed platypus’. Or, the other way round, the selected terms may be too specific, so the audience somehow has to inferentially loosen their denotation because of its restrictiveness (Carston 2002; Wilson and Carston 2007).

Above – Doggie. Image by lscott2dog via Flickr / Creative Commons

Think, for instance, of hyponyms like ‘doggie’ when used to refer not only to dogs, but also to other four-legged animals because of perceptual similarity –they have four legs– and conceptual contiguity –they are all animals– or ‘kitten’ when used to refer to other felines for the same reasons;[4] or imagine that terms like ‘wheel’ or ‘cookie’ were metaphorically applied to entities belonging to different, unrelated conceptual domains –e.g. the Moon– because of perceptual similarity –i.e. roundness.[5]

At a syntactic level, the linguistic structures that the speaker generates may turn out ambiguous and misleading, even though they may be perfectly clear and understandable to her. Consider, for instance, sentences like “I saw your brother with glasses”, where the ambiguity resides in the polysemy of the word ‘glasses’ (“pair of lenses” or “drinking containers”?) and the distinct readings of the fragment “your brother with glasses” (who wears/holds/carries the glasses, the hearer’s brother or the speaker?), or “Flying planes may be dangerous”, where the ambiguity stems from the competing values of the –ing form (what is dangerous, the action of piloting planes or the planes that are flying?).

At a discourse or pragmatic level, finally, speakers may be unaware of the conventions governing the usage and meaning of specific structures –i.e. pragmalinguistic structures (Leech 1983)– such as “Can/Could you + verb”, whose pragmatic import is requestive and not a question about the hearer’s physical abilities, or unfamiliar with sociocultural norms and rules –i.e. sociopragmatic norms (Leech 1983)– which establish what is expectable or permitted, so to speak, in certain contexts, or when, where, how and with whom certain actions may or should be accomplished or avoided.

Would we, then, say that testimony is to be doubted or discredited because of mistakes or infelicities at a lexical, syntactic or pragmatic level? Not necessarily. The information per se may be true, reliable, accurate, relevant and sufficient; the problem resides precisely in how it is presented. Testimony would have been given, no doubt, but it would not have been imparted in the most efficient way, as the most appropriate tools were not used.

When lexical selection appears poor or inadequate; when words are incorrectly and ambiguously arranged into phrases, clauses or sentences; when (expected) conventionalised formulae are not conveniently deployed; or when norms constraining how, when, where or to whom to say things are not respected or are ignored, what is at stake is not an informer’s knowledge of the issues testimony may be about, but her knowledge of the very rudiments and conventions needed to articulate testimony satisfactorily and to dispense it successfully. The objects of this knowledge are the elements making up the linguistic system used to communicate –i.e. vocabulary– their possible combinations –i.e. syntax– and their usage in order to achieve specific goals –i.e. pragmatics– so such knowledge is evidently different from knowledge of the substance of testimony –i.e. its ‘aboutness’.

Real or seeming lexical problems may evidence conceptual gaps, concept-word mismatches or (highly) idiosyncratic concept-word mappings, and they may lead privileged individuals to question disadvantaged individuals’ richness of vocabulary and, ultimately, the concepts connected with and denoted by their words. If this happens, what those individuals attack is one of the sets of tools used to generate an acceptable product, but not the content or essence of that product.

Conceptual competence injustice, therefore, must be seen as targeting the tools with which testimony is created, not its content, so its scope differs from that of testimonial injustice. The scope of testimonial injustice is the truthfulness of the events reported in a narration, as well as the amount of detail given about those events and the relevance of those details. The scope of conceptual competence injustice, in contrast, is knowledge and correct usage of vocabulary, and possession of the corresponding concepts.

Conceptual competence injustice focuses on a specific type of knowledge that forms part of the broader knowledge of a language and facilitates performance in various practices, including informing others or dispensing testimony. Such specific knowledge is a sub-competence on which the more general, overarching competence enabling communicative performance is contingent. For this reason, conceptual competence injustice is a competence injustice, or an unfairness concerning a type of knowledge and specific abilities –conceptual and lexical abilities, in this case. And just as unprivileged individuals may be wronged because of their lack or misuse of words and may be attributed conceptual lacunae, so occasional or constant syntactic problems and pragmatic infelicities may induce powerful individuals to misjudge those individuals as regards the respective types of knowledge enabling their performance in these areas of language.

Conclusion

Phenomenologically, testimonial injustice and conceptual competence injustice are perpetrated as a consequence of perceptions and appraisals whose respective scopes differ. In testimonial injustice, it is the information that is deemed unsatisfactory because of doubts about its veracity, quantity or relevance, so the informer is not considered a good knower of the issues that testimony pertains to. In conceptual competence injustice, in contrast, it is the tools by means of which information is dispensed that are regarded as inappropriate, and such inappropriateness induces individuals to doubt the informer’s possession of the adequate lexical items and of the corresponding, supporting concepts.

While testimonial injustice is inflicted as a result of what is said, conceptual competence injustice is perpetrated as a consequence of the manner in which what is said is actually said. Consequently, at a theoretical level, testimonial injustice and conceptual competence injustice should definitely be kept apart in the field of social epistemology. The latter, moreover, should be retained as a valid and useful notion, since it denotes an unfairness that may be inflicted on the grounds of the linguistic tools employed to dispense testimony, not on the grounds of the characteristics of the product generated.

Contact details: mpadillacruz@us.es

References

Anderson, Derek E. “Conceptual Competence Injustice.” Social Epistemology. A Journal of Knowledge, Culture and Policy 31, no. 2 (2017): 210-223.

Anderson, Derek E. “Yes, There Is Such a Thing as Conceptual Competence Injustice.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 26-35.

Austin, John L. How to Do Things with Words. Oxford: Oxford University Press, 1962.

Carston, Robyn. Thoughts and Utterances. The Pragmatics of Explicit Communication. Oxford: Blackwell, 2002.

Clark, Eve V. “What’s in a Word? On the Child’s Acquisition of Semantics in His First Language.” In Cognitive Development and the Acquisition of Meaning, edited by Timothy E. Moore, 65-110. New York: Academic Press, 1973.

Clark, Eve V. The Lexicon in Acquisition. Cambridge: Cambridge University Press, 1993.

Escandell Vidal, M. Victoria. “Norms and Principles. Putting Social and Cognitive Pragmatics Together.” In Current Trends in the Pragmatics of Spanish, edited by Rosina Márquez-Reiter and M. Elena Placencia, 347-371. Amsterdam: John Benjamins, 2004.

Fricker, Miranda. “Epistemic Injustice and a Role for Virtue in the Politics of Knowing.” Metaphilosophy 34, no. 1-2 (2003): 154-173.

Fricker, Miranda. Epistemic Injustice. Power & the Ethics of Knowing. Oxford: Oxford University Press, 2007.

Grice, Herbert P. “Meaning.” Philosophical Review 66 (1957): 377-388.

Grice, Herbert P. “Logic and Conversation.” In Syntax and Semantics vol. 3: Speech Acts, edited by Peter Cole and Jerry Morgan, 41-59. New York: Academic Press, 1975.

Leech, Geoffrey. Principles of Pragmatics. London: Longman, 1983.

Mustajoki, Arto. “A Speaker-oriented Multidimensional Approach to Risks and Causes of Miscommunication.” Language and Dialogue 2, no. 2 (2012): 216-243.

Padilla Cruz, Manuel. “On the Usefulness of the Notion of ‘Conceptual Competence Injustice’ to Linguistic Pragmatics.” Social Epistemology Review and Reply Collective 6, no. 4 (2017a): 12-19.

Padilla Cruz, Manuel. “Conceptual Competence Injustice and Relevance Theory, A Reply to Derek Anderson.” Social Epistemology Review and Reply Collective 6, no. 12 (2017b): 39-50.

Pilkington, Adrian. Poetic Effects. A Relevance Theory Perspective. Amsterdam: John Benjamins, 2000.

Podosky, Paul-Mikhail, and William Tuckwell. “There’s No Such Thing as Conceptual Competence Injustice: A Response to Anderson and Cruz.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 23-32.

Rescorla, Leslie. “Overextension in Early Language Development.” Journal of Child Language 7 (1980): 321-335.

Sperber, Dan, ed. Metarepresentations: A Multidisciplinary Perspective. Oxford: Oxford University Press, 2000.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. Oxford: Blackwell, 1986.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. 2nd edition. Oxford: Blackwell, 1995.

Sperber, Dan, and Deirdre Wilson. “The Mapping between the Mental and the Public Lexicon.” UCL Working Papers in Linguistics 9 (1997): 107-125.

Sperber, Dan, Fabrice Clément, Christophe Heintz, Olivier Mascaro, Hugo Mercier, Gloria Origgi, and Deirdre Wilson. “Epistemic Vigilance.” Mind & Language 25, no. 4 (2010): 359-393.

Wałaszewska, Ewa. “Broadening and Narrowing in Lexical Development: How Relevance Theory Can Account for Children’s Overextensions and Underextensions.” Journal of Pragmatics 43 (2011): 314-326.

Wilson, Deirdre. “Metarepresentation in Linguistic Communication.” UCL Working Papers in Linguistics 11 (1999): 127-161.

Wilson, Deirdre, and Robyn Carston. “A Unitary Approach to Lexical Pragmatics: Relevance, Inference and Ad Hoc Concepts.” In Pragmatics, edited by Noel Burton-Roberts, 230-259. Basingstoke: Palgrave, 2007.

Yus Ramos, Francisco. Humour and Relevance. Amsterdam: John Benjamins, 2016.

[1] Reference to the speaker will be made by means of the feminine third person singular personal pronoun.

[2] The fact that the following discussion relies heavily on Grice’s (1957, 1975) Cooperative Principle and its maxims should not imply that such a ‘principle’ is an adequate formalisation of how human cognitive systems work while processing information. It should rather be seen as a sort of overarching (cultural) norm or rule subsuming more specific norms or rules, which members of some social groups internalise and unconsciously obey without noticing that they comply with them (Escandell Vidal 2004: 349). For extensive criticism of Grice’s (1957, 1975) ideas, see Sperber and Wilson (1986/1995).

[3] Grice’s (1957, 1975) maxim of manner is articulated into four sub-maxims, which cause individuals to be (i) orderly, (ii) brief or concise, and to avoid (iii) ambiguity of expression and (iv) obscurity of expression. In my discussion, however, I have omitted considerations about brevity or conciseness because I think that these are the byproduct of the maxim of quality, with whose effects those of the manner sub-maxim of briefness overlap.

[4] This would be a type of overextension labelled over-inclusion, categorical overextension or classic overextension (Clark 1973, 1993; Rescorla 1980), where a word “[…] is applied to instances of other categories within the same or adjacent conceptual domain” (Wałaszewska 2011: 321).

[5] This would be a case of analogical extension or analogical overextension (Rescorla 1980; Clark 1993).