
Author Information: Kamili Posey, Kingsborough College, Kamili.Posey@kbcc.cuny.edu.

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-15.

Kamili Posey’s article was posted over two instalments. You can read the first here, but the pdf of the article includes the entire piece, and gives specific page references. Shortlink: https://wp.me/p1Bfg0-41k


In the previous piece, I outlined some concerns with philosophers, and particularly philosophers of social science, assuming the success of implicit interventions into implicit bias. Motivated by a pointed note by Jennifer Saul (2017), I aimed to briefly go through some of the models lauded as offering successful interventions and, in essence, “get out of the armchair.”

(IAT) Models and Egalitarian Goal Models

In this final piece, I go through the last two models, Glaser and Knowles’ (2007) and Blair et al.’s (2001) (IAT) models and Moskowitz and Li’s (2011) egalitarian goal model. I reiterate that this is not an exhaustive analysis of such models nor is it intended as a criticism of experiments pertaining to implicit bias. Mostly, I am concerned that the science is interesting but that the scientism – the application of tentative results to philosophical projects – is less so. It is from this point that I proceed.

Like Mendoza et al.’s (2010) implementation intentions, Glaser and Knowles’ (2007) implicit motivation to control prejudice (IMCP) aims to capture implicit motivations that are capable of inhibiting automatic stereotype activation. Glaser and Knowles measure (IMCP) in terms of an implicit negative attitude toward prejudice, or (NAP), and an implicit belief that oneself is prejudiced, or (BOP). This is done by retooling the (IAT) to fit both (NAP) and (BOP): “To measure NAP we constructed an IAT that pairs the categories ‘prejudice’ and ‘tolerance’ with the categories ‘bad’ and ‘good.’ BOP was assessed with an IAT pairing ‘prejudiced’ and ‘tolerant’ with ‘me’ and ‘not me.’”[1]
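To make the scoring concrete, IAT effects are standardly summarized as a D score: the difference in mean response latency between “incompatible” and “compatible” sorting blocks, scaled by a pooled standard deviation (the algorithm of Greenwald, Nosek, and Banaji, 2003). The sketch below is a simplified illustration with made-up latencies; it is not Glaser and Knowles’ stimuli, scoring code, or exact procedure.

```python
# A simplified IAT D-score computation (after Greenwald, Nosek, & Banaji, 2003).
# Hypothetical data for one participant; not Glaser and Knowles' materials.
from statistics import mean, stdev

def d_score(compatible_rts, incompatible_rts):
    """Return a D score from two blocks of response latencies (in ms).

    Positive values mean slower responding in the incompatible block,
    e.g., sorting 'prejudice' with 'good' on the NAP measure.
    """
    # Standard preprocessing step: drop implausibly slow trials (> 10 s).
    comp = [rt for rt in compatible_rts if rt < 10_000]
    incomp = [rt for rt in incompatible_rts if rt < 10_000]
    pooled_sd = stdev(comp + incomp)  # SD over all retained trials
    return (mean(incomp) - mean(comp)) / pooled_sd

compatible = [612, 583, 701, 655, 690, 634]    # 'prejudice' + 'bad' block
incompatible = [745, 802, 768, 731, 825, 779]  # 'prejudice' + 'good' block
print(round(d_score(compatible, incompatible), 2))  # larger D = stronger NAP
```

The detail worth keeping in view is that a D score is a relative latency contrast computed within a single testing session, which is part of why the questions raised below about what sequential tasks prime are so pressing.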

Study participants were then administered the Shooter Task, the (IMCP) measures, and the Race Prejudice (IAT) and Race-Weapons Stereotype (RWS) tests in a fixed order. Glaser and Knowles predicted that, for those high in (IMCP), the implicit goal “should be able to short-circuit the effect of implicit anti-Black stereotypes on automatic anti-Black behavior.”[2] The results seemed to suggest that this was the case. Glaser and Knowles found that study participants who viewed prejudice as particularly bad “[showed] no relationship between implicit stereotypes and spontaneous behavior.”[3]

There are a few considerations missing from the evaluation of the study results. First, with regard to the Shooter Task, Glaser and Knowles (2007) found that “the interaction of target race by object type, reflecting the Shooter Bias, was not statistically significant.”[4] That is, the strength of the relationship that Correll et al. (2002) found between study participants and the (high) likelihood that they would “shoot” at black targets was not found in the present study. Additionally, they note that they “eliminated time pressure” from the task itself. Although it was not suggested that this impacted the usefulness of the measure of Shooter Bias, it is difficult to imagine that it did not do so. To this, they footnote the following caveat:

Variance in the degree and direction of the stereotype endorsement points to one reason for our failure to replicate Correll et. al’s (2002) typically robust Shooter Bias effect. That is, our sample appears to have held stereotypes linking Blacks and weapons/aggression/danger to a lesser extent than did Correll and colleagues’ participants. In Correll et al. (2002, 2003), participants one SD below the mean on the stereotype measure reported an anti-Black stereotype, whereas similarly low scorers on our RWS IAT evidenced a stronger association between Whites and weapons. Further, the adaptation of the Shooter Task reported here may have been less sensitive than the procedure developed by Correll and colleagues. In the service of shortening and simplifying the task, we used fewer trials, eliminated time pressure and rewards for speed and accuracy, and presented only one background per trial.[5]

Glaser and Knowles claimed that the interaction of the (RWS) with the Shooter Task results proved “significant.” However, if the Shooter Bias failed to materialize (in the standard Correll et al. way) with study participants, it is difficult to see how the (RWS) was measuring anything except itself, generally speaking. This is further complicated by the fact that the interaction between the Shooter Bias and the (RWS) revealed “a mild reverse stereotype associating Whites with weapons (d = -0.15) and a strong stereotype associating Blacks with weapons (d = 0.83), respectively.”[6]
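For readers tracking the statistics, the reported d values are Cohen’s d, the standardized mean difference: the gap between two group means scaled by their pooled standard deviation. (This is a textbook definition, not anything special to Glaser and Knowles.)

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

By Cohen’s conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), d = 0.83 is a large effect, while d = -0.15 is a small effect running in the opposite direction; the two subgroups differ in sign, not just magnitude, which underscores how heterogeneous the sample was on the stereotype measure.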

Recall that Glaser and Knowles (2007) aimed to show that participants high in (IMCP) would be able to inhibit implicit anti-black stereotypes and thus inhibit automatic anti-black behaviors. Using (NAP) and (BOP) as proxies for implicit control, participants high in (NAP) and moderate in (BOP) – as those with moderate (BOP) will be motivated to avoid bias – should show the weakest association between (RWS) and Shooter Bias. Instead, the lowest levels of Shooter Bias were seen in “low NAP, high BOP, and low RWS” study participants: those who did not disapprove of prejudice, would describe themselves as prejudiced, and also showed the lowest levels of (RWS).[7]

They noted that neither “NAP nor BOP alone was significantly related to the Shooter Bias,” but “the influence of RWS on Shooter Bias remained significant.”[8] In fact, greater bias was actually found at higher (NAP) and (BOP) levels.[9] This bias seemed to map onto the initial Shooter Task results. It is most likely that (RWS) was the most important measure in this study, and that it was assessing implicit bias, not, as the study claimed, implicit motivation to control prejudice.

What Kind of Bias?

It is also not clear whether the (RWS) was capturing implicit bias rather than explicit bias in this study. By the time study participants were tasked with the (RWS), automatic stereotype activation may have been inhibited simply in virtue of study participants’ involvement in the Shooter Task and (IAT) assessments regarding race-related prejudice. That is, race-sensitivity was brought to consciousness by the sequencing of the test process.

Although we cannot get into the heads of the study participants, this counter-explanation seems a compelling possibility: the sequential tasks involved in the study captured study participants’ ability to increase focus and conscious attention to the race-related (IAT) test. Additionally, it is possible that some study participants could both cue and follow their own conscious internal commands, “If I see a black face, I won’t judge!” Consider that this is exactly how implementation intentions work.

Consider that this is also how Armageddon chess and other speed strategy games work. In their (2008) follow-up study on (IMCP) and cognitive depletion, Park, Glaser, and Knowles retreat somewhat from their initial claims about the implicit nature of (IMCP):

We cannot state for certain that our measure of IMCP reflects a purely nonconscious construct, nor that differential speed to “shoot” Black armed men vs. White armed men in a computer simulation reflects purely automatic processes. Most likely, the underlying stereotypes, goals, and behavioral responses represent a blend of conscious and nonconscious influences…Based on the results of the present study and those of Glaser and Knowles (2008), it would be premature to conclude that IMCP is a purely and wholly automatic construct, meeting the “four horsemen” criteria (Bargh, 1990). Specifically, it is not yet clear whether high IMCP participants initiate control of prejudice without intention; whether implicit control of prejudice can itself be inhibited, if for some reason someone wanted to; nor whether IMCP-instigated control of spontaneous bias occurs without awareness.[10]

If the (IMCP) potentially measures low-level conscious attention, this makes the question of what implicit measurements actually measure in the context of sequential tasks all the more important. In the two final examples, Blair et al.’s (2001) study on the use of counterstereotype imagery and Moskowitz and Li’s (2011) study on the use of counterstereotype egalitarian goals, we are again confronted with the issue of sequencing. In the study by Moskowitz and Li, study participants were asked to write down an example of a time when “they failed to live up to the ideal specified by an egalitarian goal, and to do so by relaying an event relating to African American men.”[11]

They were then given a series of computerized lexical decision tasks (LDTs) with primes involving photographs of black and white faces and stereotypical and non-stereotypical attributes of black people (crime, lazy, stupid, nervous, indifferent, nosy). Over a series of four experiments, Moskowitz and Li found that when egalitarian goals were “accessible,” study participants were able to successfully generate stereotype inhibition. Blair et al. asked study participants to use counterstereotypical (CS) gender imagery over a series of five experiments, e.g., “Think of a strong, capable woman,” and then administered a series of implicit measures, including the (IAT).
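For concreteness about the measure itself: in a primed lexical decision task, a face is flashed as a prime, a letter string follows, and the participant quickly judges whether the string is a word; speeded “word” responses to stereotype-related words after black face primes are read as stereotype activation, and slowed responses as inhibition. The sketch below is schematic, with hypothetical stimuli and field names rather than Moskowitz and Li’s actual program.

```python
# Schematic record of a primed lexical decision trial; the stimuli and
# data layout are illustrative assumptions, not Moskowitz and Li's (2011).
STEREOTYPIC = {"lazy", "nervous", "stupid"}  # drawn from the attribute list above
NONWORDS = {"blick", "frotz"}                # hypothetical filler strings

def record_trial(prime_race, letter_string, response, latency_ms):
    """Log one trial; inhibition appears as slower correct 'word' responses
    to stereotypic words after a black face prime than after a control prime."""
    is_word = letter_string not in NONWORDS
    return {
        "prime": prime_race,  # "black" or "white"
        "stereotypic": letter_string in STEREOTYPIC,
        "correct": response == ("word" if is_word else "nonword"),
        "latency_ms": latency_ms,
    }
```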

Similar to Moskowitz and Li (2011), Blair et al. (2001) found that (CS) gender imagery was successful in reducing implicit gender stereotypes, leaving “little doubt that the CS mental imagery per se was responsible for diminishing implicit stereotypes.”[12] In both cases, the study participants were explicitly called upon to focus their attention on experiences and imagery pertaining to negative stereotypes before the implicit measures, i.e., tasks, were administered. Again, it is not clear that the implicit measures measured the supposed target.

In the case of Moskowitz and Li’s (2011) experiment, the study participants began by relating moments in their lives where they failed to live up to their goals. However, those goals can only be understood within a particular social and political framework where holding negatively prejudicial beliefs about African-American men is often explicitly judged harshly, even if not implicitly so. Given this, we might assume that the study participants were compelled into a negative affective state. But does this matter? As suggested by the study by Monteith (1993), and the later study by Amodio et al. (2007), guilt can be a powerful tool.[13]

Questions of Guilt

If guilt was produced during the early stages of the experiment, it may also have contributed to the inhibition of stereotype activation. Moskowitz and Li (2011) noted that “during targeted questioning in the debriefing, no participants expressed any conscious intent to inhibit stereotypes on the task, nor saw any of the tasks performed during the computerized portion of the experiment as related to the egalitarian goals they had undermined earlier in the session.”[14]

But guilt does not have to be conscious for it to produce effects. The guilt produced by recalling a moment of negative bias could be part and parcel of a larger feeling of moral failure. Moskowitz and Li needed to adequately disambiguate competing implicit motivations for stereotype inhibition before arriving at a definitive conclusion. This, I think, is a limitation of the study.

However, the same case could be made for (CS) imagery. Blair et al. (2001) noted that it is, in fact, possible that they too missed competing motivations and competing explanations for stereotype inhibition. In particular, they suggested that by emphasizing counterstereotyping the researchers “may have communicated the importance of avoiding stereotypes and increased their motivation to do so.”[15] Still, the researchers dismissed the possibility that this would lead to better (faster, more accurate) performance on the (IAT), but that is merely asserting that the (IAT) must measure exactly what the (IAT) claims it does. Fast, accurate, and conscious measures are excluded from that claim. Complicated internal motivations are excluded from that claim.

But on what grounds? Consider Fiedler et al.’s (2006) argument that the (IAT) is susceptible to faking and strategic processing, or Brendl et al.’s (2001) argument that it is not possible to infer a single cause from (IAT) results, or Fazio and Olson’s (2003) claim that “the IAT has little to do with what is automatically activated in response to a given stimulus.”[16]

These studies call into question the claim that implicit measures like the (IAT) can measure implicit bias in the clear, problem-free manner that is often suggested in the literature. Implicit interventions into implicit bias that utilize the (IAT) are difficult to support for this reason. Implicit interventions that utilize sequential (IAT) tasks are also difficult to support for this reason. Of course, this is also a live debate, and the problems I have discussed here are far from the only ones that plague this type of research.[17]

That said, when it comes to this research we are too often left wondering if the measure itself is measuring the right thing. Are we capturing implicit bias or some other socially generated phenomenon? Are the measured changes we see in study results reflecting the validity of the instrument or the cognitive maneuverings of study participants? These are all critical questions that need sussing out. Until they are, the target conclusion, that implicit interventions will lead to reductions in real-world discrimination, only moves further away.[18] We find evidence of this in Forscher et al.’s (2018) meta-analysis of 492 implicit interventions:

We found little evidence that changes in implicit measures translated into changes in explicit measures and behavior, and we observed limitations in the evidence base for implicit malleability and change. These results produce a challenge for practitioners who seek to address problems that are presumed to be caused by automatically retrieved associations, as there was little evidence showing that change in implicit measures will result in changes for explicit measures or behavior…Our results suggest that current interventions that attempt to change implicit measures will not consistently change behavior in these domains. These results also produce a challenge for researchers who seek to understand the nature of human cognition because they raise new questions about the causal role of automatically retrieved associations…To better understand what the results mean, future research should innovate with more reliable and valid implicit, explicit, and behavioral tasks, intensive manipulations, longitudinal measurement of outcomes, heterogeneous samples, and diverse topics of study.[19]

Finally, what I take to be behind Alcoff’s (2010) critical question at the beginning of this piece is a kind of skepticism about how individuals can successfully tackle implicit bias through either explicit or implicit practices without the support of the social spaces, communities, and institutions that give shape to our social lives. Implicit bias is related to the culture one is in and the stereotypes it produces. So instead of insisting on changing people to reduce stereotyping, what if we insisted on changing the culture?

As Alcoff notes: “We must be willing to explore more mechanisms for redress, such as extensive educational reform, more serious projects of affirmative action, and curricular mandates that would help to correct the identity prejudices built up out of faulty narratives of history.”[20] This is an important point. It is a point that philosophers who work on implicit bias would do well to take seriously.

Science may not give us the way out of racism, sexism, and gender discrimination. At the moment, it may only give us tools for seeing ourselves a bit more clearly. Further claims about implicit interventions appear as willful scientism. They reinforce the belief that science can cure all of our social and political ills. But this is magical thinking.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

[2] Glaser, Jack and Knowles, Eric D. (2007), p. 167.

[3] Glaser, Jack and Knowles, Eric D. (2007), p. 170.

[4] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[5] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[6] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[7] Glaser, Jack and Knowles, Eric D. (2007), p. 169. Of this “rogue” group, Glaser and Knowles note: “This group had, on average, a negative RWS (i.e., rather than just a low bias toward Blacks, they tended to associate Whites more than Blacks with weapons; see footnote 4). If these reversed stereotypes are also uninhibited, they should yield reversed Shooter Bias, as observed here” (169).

[8] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[9] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[10] Sang Hee Park, Jack Glaser, and Eric D. Knowles. (2008). “Implicit Motivation to Control Prejudice Moderates the Effect of Cognitive Depletion on Unintended Discrimination,” in Social Cognition, Vol. 26, No. 4, p. 416.

[11] Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

[12] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

[13] Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

[14] Moskowitz, Gordon and Li, Peizhong (2011), p. 108.

[15] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001), p. 838.

[16] Fiedler, Klaus, Messner, Claude, and Bluemke, Matthias. (2006). “Unresolved Problems with the ‘I’, the ‘A’, and the ‘T’: A Logical and Psychometric Critique of the Implicit Association Test (IAT),” in European Review of Social Psychology, 12, pp. 74-147. Brendl, C. M., Markman, A. B., & Messner, C. (2001). “How Do Indirect Measures of Evaluation Work? Evaluating the Inference of Prejudice in the Implicit Association Test,” in Journal of Personality and Social Psychology, 81(5), pp. 760-773. Fazio, R. H., and Olson, M. A. (2003). “Implicit Measures in Social Cognition Research: Their Meaning and Uses,” in Annual Review of Psychology 54, pp. 297-327.

[17] There is significant debate over the issue of whether the implicit bias that (IAT) tests measure translates into real-world discriminatory behavior. This is a complex and compelling issue. It is also an issue that could render moot the (IAT) as an implicit measure of anything full stop. Anthony G. Greenwald, Mahzarin R. Banaji, and Brian A. Nosek (2015) write: “IAT measures have two properties that render them problematic to use to classify persons as likely to engage in discrimination. Those two properties are modest test–retest reliability (for the IAT, typically between r = .5 and r = .6; cf., Nosek et al., 2007) and small to moderate predictive validity effect sizes. Therefore, attempts to diagnostically use such measures for individuals risk undesirably high rates of erroneous classifications. These problems of limited test-retest reliability and small effect sizes are maximal when the sample consists of a single person (i.e., for individual diagnostic use), but they diminish substantially as sample size increases. Therefore, limited reliability and small to moderate effect sizes are not problematic in diagnosing system-level discrimination, for which analyses often involve large samples” (557). However, Oswald et al. (2013) argue that “IAT scores correlated strongly with measures of brain activity but relatively weakly with all other criterion measures in the race domain and weakly with all criterion measures in the ethnicity domain. IATs, whether they were designed to tap into implicit prejudice or implicit stereotypes, were typically poor predictors of the types of behavior, judgments, or decisions that have been studied as instances of discrimination, regardless of how subtle, spontaneous, controlled, or deliberate they were. Explicit measures of bias were also, on average, weak predictors of criteria in the studies covered by this meta-analysis, but explicit measures performed no worse than, and sometimes better than, the IATs for predictions of policy preferences, interpersonal behavior, person perceptions, reaction times, and microbehavior. Only for brain activity were correlations higher for IATs than for explicit measures…but few studies examined prediction of brain activity using explicit measures. Any distinction between the IATs and explicit measures is a distinction that makes little difference, because both of these means of measuring attitudes resulted in poor prediction of racial and ethnic discrimination” (182-183). For further details about this debate, see: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192, and Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.
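A standard psychometric point, the classical attenuation formula, helps explain why modest test-retest reliability matters so much here (this is textbook measurement theory, not an argument drawn from Greenwald et al. or Oswald et al. themselves): the correlation one can observe between two measures is bounded by their reliabilities,

```latex
r_{xy}^{\mathrm{obs}} = r_{xy}^{\mathrm{true}} \sqrt{r_{xx}\, r_{yy}}
```

so with IAT test-retest reliability around r_xx = .5, even a perfect true relationship with a perfectly reliable criterion (r_yy = 1) could appear as an observed correlation of at most about √.5 ≈ .71, and realistic relationships shrink proportionally. This is one reason individual-level diagnostic use is especially fraught.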

[18] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

[19] Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

[20] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Author Information: Kamili Posey, Kingsborough College, Kamili.Posey@kbcc.cuny.edu.

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-16.

Kamili Posey’s article will be posted over two instalments. The pdf of the article gives specific page references, and includes the entire essay. Shortlink: https://wp.me/p1Bfg0-41m


If you consider the recent philosophical literature on implicit bias research, then you would be forgiven for thinking that the problem of successful interventions into implicit bias falls into the category of things that are resolved. If you consider the recent social psychological literature on interventions into implicit bias, then you would come away with a similar impression. The claim is that implicit bias is epistemically harmful because we profess to believe one thing while our implicit attitudes tell a different story.

Strategy Models and Discrepancy Models

Implicit bias is socially harmful because it maps onto our real-world discriminatory practices, e.g., workplace discrimination, health disparities, racist police shootings, and identity-prejudicial public policies. Consider the results of Greenwald et al.’s (1998) Implicit Association Test. Consider also the results of Correll et al.’s (2002) “Shooter Bias.” If cognitive interventions are possible, and specifically implicit cognitive interventions, then they can help knowers implicitly manage automatic stereotype activation. Do these interventions lead to real-world reductions of bias?

Linda Alcoff (2010) notes that it is difficult to see how implicit, nonvolitional biases (e.g., those at the root of social and epistemic ills like race-based police shootings) can be remedied by explicit epistemic practices.[1] I would follow this by noting that it is equally difficult to see how nonvolitional biases can be remedied by implicit epistemic practices as well.

Jennifer Saul (2017) responds to Alcoff’s (2010) query by pointing to social psychological experiments conducted by Margo Monteith (1993), Jack Glaser and Eric D. Knowles (2007), Gordon B. Moskowitz and Peizhong Li (2011), Saaid A. Mendoza et al. (2010), Irene V. Blair et al. (2001), and Kerry Kawakami et al. (2005).[2] These studies suggest that implicit self-regulation of implicit bias is possible. Saul notes that philosophers with objections like Alcoff’s, and presumably like mine, should “not just to reflect upon the problem from the armchair – at the very least, one should use one’s laptop to explore the internet for effective interventions.”[3]

But I think this recrimination rings rather hollow. How entitled are we to extrapolate from social psychological studies in the manner that Saul advocates? How entitled are we to assume the epistemic superiority of scientific research on racism, sexism, etc. over the phenomenological reporting of marginalized knowers? Lastly, how entitled are we to claims about the real-world applicability of these study results?[4] My guess is that the devil is in the details. My guess is also that social psychologists have not found the silver bullet for remedying implicit bias. But let’s follow Saul’s suggestion and not just reflect from the armchair.

A caveat: the following analysis is not intended to be an exhaustive or thorough refutation of what is ultimately a large body of social psychological literature. Instead, it is intended to cast a bit of doubt on how these models are used by philosophers as successful remedies for implicit bias. It is intended to cast a bit of doubt on the idea that remedies for racist, sexist, homophobic, and transphobic discrimination are merely a training session or reflective exercise away.

This type of thinking devalues the very real experiences of those who live through racism, sexism, homophobia, and transphobia. It devalues how pervasive these experiences are in American society and the myriad ways in which the effects of discrimination seep into the marrow of marginalized bodies and marginalized communities. Worse still, it implies that marginalized knowers who claim, “You don’t understand my experiences!” are compelled to contend with the hegemonic role of “Science” that continues to speak over their own voices and about their own lives.[5] But again, back to the studies.

Four Methods of Remedy

I break up the above studies into four intuitive model types: (1) strategy models, (2) discrepancy models, (3) (IAT) models, and (4) egalitarian goal models. (I am not a social scientist, so the operative word here is “intuitive.”) Let’s first consider Kawakami et al. (2005) and Mendoza et al. (2010) as examples of strategy models. Kawakami et al. used Devine and Monteith’s (1993) notion of a negative stereotype as a “bad habit” that a knower needs to “kick” to model strategies that aid in the inhibition of automatic stereotype activation, or the inhibition of “increased cognitive accessibility of characteristics associated with a particular group.”[6]

In a previous study, Kawakami et al. (2000) presented research participants with photographs of black individuals and white individuals, with stereotypical and non-stereotypical traits listed under each photograph, and asked them to respond “No” to stereotypical traits and “Yes” to non-stereotypical traits.[7] The study found that “participants who were extensively trained to negate racial stereotypes initially also demonstrated stereotype activation, this effect was eliminated by the extensive training. Furthermore, Kawakami et al. found that practice effects of this type lasted up to 24 h following the training.”[8]

Kawakami et al. (2005) used this training model to ground an experiment aimed at strategies for reducing stereotype activation in the preference of men over women for leadership roles in managerial positions. Despite the training, they found that there was “no difference between Nonstereotypic Association Training and No Training conditions…participants were indeed attempting to choose the best candidate overall, in these conditions there was an overall pattern of discrimination against women relative to men in recommended hiring for a managerial position (Glick, 1991; Rudman & Glick, 1999)” [emphasis mine].[9]

Substantive conclusions are difficult to draw from a single study, but one critical point is that learning occurred in the training while improved stereotype inhibition did not. What, exactly, are we to make of this result? Kawakami et al. (2005) claimed that “similar levels of bias in both the Training and No Training conditions implicates the influence of correction processes that limit the effectiveness of training.”[10] That is, they attributed the lack of influence of corrective processes to a variety of contributing factors that limited the effectiveness of the strategy itself.

Notice, however, that this does not implicate the strategy as a failed one. Most notably, Kawakami et al. found that “when people have the time and opportunity to control their responses [they] may be strongly shaped by personal values and temporary motivations, strategies aimed at changing the automatic activation of stereotypes will not [necessarily] result in reduced discrimination.”[11]

This suggests that although the strategies failed to reduce stereotype activation, they may still be helpful in limited circumstances “when impressions are more deliberative.”[12] One wonders under what conditions such impressions can be more deliberative. More than that, how useful are such limited-condition strategies for dealing with everyday life and everyday automatic stereotype activation?

Mendoza et al. (2010) tested the effectiveness of “implementation intentions” as a strategy to reduce the activation or expression of implicit stereotypes using the Shooter Task.[13] They tested both “distraction-inhibiting” implementation intentions and “response-facilitating” implementation intentions. Distraction-inhibiting intentions are strategies “designed to engage inhibitory control,” such as inhibiting the perception of distracting or biasing information, while “response-facilitating” intentions are strategies designed to enhance goal attainment by focusing on specific goal-directed actions.[14]

In the first study, Mendoza et al. asked participants to repeat the on-screen phrase, “If I see a person, then I will ignore his race!” in their heads and then type the phrase into the computer. This resulted in study participants having a reduced number of errors in the Shooter Task. But let’s come back to whether and how we might be able to extrapolate from these results. The second study compared a simple-goal strategy with an implementation intention strategy.

Study participants in the simple-goal strategy group were asked to follow the strategy, “I will always shoot a person I see with a gun!” and “I will never shoot a person I see with an object!” Study participants in the implementation intention strategy group were asked to use a conditional, if-then, strategy instead: “If I see a person with an object, then I will not shoot!” Mendoza et al. found that a response-facilitating implementation intention “enhanced controlled processing but did not affect automatic stereotyping processing,” while a distraction-inhibiting implementation intention “was associated with an increase in controlled processing and a decrease in automatic stereotyping processes.”[15]

How to Change Both Action and Thought

Notice that if the goal is to reduce automatic stereotype activation through reflexive control, only a distraction-inhibiting strategy achieved the desired effect. Notice also how the successful use of a distraction-inhibiting strategy may require a type of “non-messy” social environment unachievable outside of a laboratory experiment.[16] Or, as Mendoza et al. (2010) rightly note: “The current findings suggest that the quick interventions typically used in psychological experiments may be more effective in modulating behavioral responses or the temporary accessibility of stereotypes than in undoing highly edified knowledge structures.”[17]

The hope, of course, is that distraction-inhibiting strategies can help dominant knowers reduce automatic stereotype activation and response-facilitated strategies can help dominant knowers internalize controlled processing such that negative bias and stereotyping can be (one day) reflexively controlled as well. But these are only hopes. The only thing that we can rightly conclude from these results is that if we ask a dominant knower to focus on an internal command, they will do so. The result is that the activation of negative bias fails to occur.

This does not mean that the knower has reduced their internalized negative biases and prejudices or that they can continue to act on the internal commands in the future (in fact, subsequent studies reveal the effects are short-lived[18]). As Mendoza et al. also note: “In psychometric terms, these strategies are designed to enhance accuracy without necessarily affecting bias. That is, a person may still have a tendency to associate Black people with violence and thus be more likely to shoot unarmed Blacks than to shoot unarmed Whites.”[19] Despite hope for these strategies, there is very little to support their real-world applicability.

Hunting for Intuitive Hypocrisies

I would extend a similar critique to Margo Monteith’s (1993) discrepancy model. Monteith’s often-cited study uses two experiments to investigate prejudice-related discrepancies in the behaviors of low-prejudice (LP) and high-prejudice (HP) individuals and the ability to engage in self-regulated prejudice reduction. In the first experiment, (LP) and (HP) heterosexual study participants were asked to evaluate two law school applications, one for an implied gay applicant and one for an implied heterosexual applicant. Study participants “were led to believe that they had evaluated a gay law school applicant negatively because of his sexual orientation;” they were tricked into a “discrepancy-activated condition,” or a condition that was at odds with what they believed about their own prejudice.[20]

All of the study participants were then told that the applications were identical and that those who had rejected the gay applicant had done so because of the applicant’s sexual orientation. It is important to note that the applicants’ qualifications were not, in fact, identical. The gay applicant’s application materials were made to look worse than the heterosexual applicant’s materials. This was done to compel the rejection of the applicant.

Study participants were then provided a follow-up questionnaire and essay allegedly written by a professor who wanted to know (a) “why people often have difficulty avoiding negative responses toward gay men,” and (b) “how people can eliminate their negative responses toward gay men.”[21] Researchers asked study participants to record their reactions to the faculty essay and write down as much they could remember about what they read. They were then told about the deception in the experiment and told why such deception was incorporated into the study.

Monteith (1993) found that “low and high prejudiced subjects alike experienced discomfort after violating their personal standards for responding to a gay man, but only low prejudiced subjects experienced negative self-directed affect.”[22] Low prejudiced, (LP), “discrepancy-activated subjects,” also spent more time reading the faculty essay and “showed superior recall for the portion of the essay concerning why prejudice-related discrepancies arise.”[23]

The “discrepancy experience” generated negative self-directed affect, or guilt, for (LP) study participants with the hope that the guilt would (a) “motivate discrepancy reduction (e.g., Rokeach, 1973)” and (b) “serve to establish strong cues for punishment (cf. Gray, 1982).”[24] The idea here is that the experiment results point to the existence of a self-regulatory mechanism that can replace automatic stereotype activation with “belief-based responses;” however, “it is important to note that the initiation of self-regulatory mechanisms is dependent on recognizing and interpreting one’s responses as discrepant from one’s personal beliefs.”[25]

The discrepancy between what one is shown to believe and what one professes to believe (whether real or manufactured, as in the experiment) is aimed at getting knowers to engage in heightened self-focus due to negative self-directed affect. The hope of Monteith’s (1993) study is that such self-directed affect will lead to a kind of corrective belief-making process that is both less prejudicial and future-directed.

But if it’s guilt that’s doing the psychological work in these cases, then it’s not clear that knowers wouldn’t find other means of assuaging such feelings. Why wouldn’t it be the case that generating negative self-directed affect would point a knower toward anything they deem necessary to restore a more positive sense of self? To this, Monteith made the following concession:

Steele (1988; Steele & Liu, 1983) contended that restoration of one’s self-image after a discrepancy experience may not entail discrepancy reduction if other opportunities for self-affirmation are available. For example, Steele (1988) suggested that a smoker who wants to quit might spend more time with his or her children to resolve the threat to the self-concept engendered by the psychological inconsistency created by smoking. Similarly, Tesser and Cornell (1991) found that different behaviors appeared to feed into a general “self-evaluation reservoir.” It follows that prejudice-related discrepancy experiences may not facilitate the self-regulation of prejudiced responses if other means to restoring one’s self-regard are available [emphasis mine].[26]

Additionally, she noted that even if individuals are committed to reducing or “unlearning” automatic stereotyping, they “may become frustrated and disengage from the self-regulatory cycle, abandoning their goal to eliminate prejudice-like responses.”[27] Cognitive exhaustion, or cognitive depletion, can occur after intergroup exchanges as well. This may make it even less likely that a knower will continue to feel guilty, and to use that guilt to inhibit the activation of negative stereotypes, when they find themselves struggling cognitively. Conversely, there is also the issue of a kind of lab-based, or experiment-based, cognitive priming. I pick up with this idea, along with the final two models of implicit interventions, in the next part.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

[2] Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

[3] Saul, Jennifer (2017), p. 466.

[4] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

[5] I owe this critical point in its entirety to the work of Lacey Davidson and her presentation, “When Testimony Isn’t Enough: Implicit Bias Research as Epistemic Injustice” at the Feminist Epistemologies, Methodologies, Metaphysics, and Science Studies (FEMMSS) conference in Corvallis, Oregon in 2018. Davidson notes that the work of philosophers of race and critical race theorists often takes a backseat to the projects of philosophers of social science who engage with the science of racialized attitudes as opposed to the narratives and/or testimonies of those with lived experiences of racism. Davidson describes this as a type of epistemic injustice against philosophers of race and critical race theorists. She also notes that philosophers of race and critical race theorists are often people of color while the philosophers of social science are often white. This dimension of analysis is important but unexplored. Davidson’s work highlights how epistemic injustice operates within the academy to perpetuate systems of racism and oppression under the guise of “good science.” Her arguments were inspired by the work of Jeanine Weekes Schroer on the problematic nature of current research on stereotype threat and implicit bias in “Giving Them Something They Can Feel: On the Strategy of Scientizing the Phenomenology of Race and Racism,” Knowledge Cultures 3(1), 2015.

[6] Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69. See also: Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

[7] Kawakami et al. (2005), p. 69. See also: Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

[8] Kawakami et al. (2005), p. 69.

[9] Kawakami et al. (2005), p. 73.

[10] Kawakami et al. (2005), p. 73.

[11] Kawakami et al. (2005), p. 74.

[12] Kawakami et al. (2005), p. 74.

[13] The Shooter Task refers to a computer simulation experiment where images of black and white males appear on a screen holding a gun or a non-gun object. Study participants are given a short response time and tasked with pressing a button, or “shooting” armed images versus unarmed images. Psychological studies have revealed a “shooter bias” in the tendency to shoot black, unarmed males more often than unarmed white males. See: Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.
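As a rough illustration of the task’s logic only (the response deadline, stimulus coding, and bias summary below are assumptions for exposition, not Correll et al.’s actual parameters or code):

```python
# Schematic Shooter Task scoring; the 850 ms deadline and data layout are
# illustrative assumptions, not Correll et al.'s (2002) actual values.
RESPONSE_WINDOW_MS = 850  # hypothetical response deadline

def score_trial(target_race, is_armed, shoot, latency_ms):
    """Classify one trial: a correct response 'shoots' armed targets and
    holds fire on unarmed ones, within the response window."""
    timed_out = latency_ms > RESPONSE_WINDOW_MS
    return {"race": target_race, "armed": is_armed, "shoot": shoot,
            "correct": (shoot == is_armed) and not timed_out}

def shooter_bias(trials):
    """Summarize bias as the rate of 'shooting' unarmed black targets
    minus the rate of 'shooting' unarmed white targets."""
    def shoot_rate(race):
        unarmed = [t for t in trials if t["race"] == race and not t["armed"]]
        return sum(t["shoot"] for t in unarmed) / len(unarmed) if unarmed else 0.0
    return shoot_rate("black") - shoot_rate("white")
```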

[14] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

[15] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[16] A “messy environment” presents additional challenges to studies like the one discussed here. As Kees Keizer, Siegwart Lindenberg, and Linda Steg (2008) claim in “The Spreading of Disorder,” people are more likely to violate social rules when they see that others are violating the rules as well. I can only imagine that this is applicable to epistemic rules as well. I mention this here to suggest that the “cleanliness” of the social environment of social psychological studies such as the one by Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010) presents an additional obstacle in extrapolating the resulting behaviors of research participants to the public-at-large. Short of mass hypnosis, how could the strategies used in these experiments, strategies that are predicated on the noninterference of other destabilizing factors, be meaningfully applied to everyday life? There is a tendency in the philosophical literature on implicit bias and stereotype threat to outright ignore the limited applicability of much of this research in order to make critical claims about interventions into racist, sexist, homophobic, and transphobic behaviors. Philosophers would do well to recognize the complexity of these issues and to be more cautious about the enthusiastic endorsement of experimental results.

[17] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[18] Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[19] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[20] Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

[21] Monteith (1993), p. 474.

[22] Monteith (1993), p. 475.

[23] Monteith (1993), p. 477.

[24] Monteith (1993), p. 477.

[25] Monteith (1993), p. 477.

[26] Monteith (1993), p. 482.

[27] Monteith (1993), p. 483.

Author Information: Stephen John, Cambridge University, sdj22@cam.ac.uk

John, Stephen. “Transparency, Well-Ordered Science, and Paternalism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 30-33.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Zf


Should a physician tell you that you have cancer, even if she thinks this would cause you needless distress? Of course she should! How, though, should she convey that news? Imagine three stylised options. Dr Knowsbest is certain you should have your cancer operated on, so tells you the news in a way which vividly highlights the horrors of cancer, but downplays the risk of an operation.

Dr Neutral, by contrast, simply lists all of the facts about your cancer, your prognosis, your possible treatment options, their likely benefits and risks and so on. Finally, Dr Sensitive reports only those aspects of your condition and those risks of surgery which she judges that you, given your values and interests, would want to know about.

Many Methods to Reveal

We can, I hope, all agree that Dr Knowsbest’s communicative strategies and choices are ethically problematic, because she acts in a paternalistic manner. By contrast, Dr Neutral does not act paternalistically. In this regard, at least, Dr Neutral’s strategies are ethically preferable to Dr Knowsbest’s strategies. What about the choice between Knowsbest and Sensitive? In one sense, Dr Sensitive acts paternalistically, because she controls and structures the flow of information with the aim of improving your well-being.

However, there is an important difference between Dr Sensitive and Dr Knowsbest; the former aims solely to improve your epistemic well-being, such that you can better make a choice which aligns with your own values, whereas the latter aims to influence or override your judgment. Knowsbest’s “moral paternalism” is wrong for reasons which are absent in the case of Sensitive’s “epistemic paternalism” (Ahlstrom-Vij, 2013).

Therefore, plausibly, both the Neutral and Sensitive strategies are ethically preferable to Knowsbest. What, though, of the choice between these two communicative strategies? First, I am not certain that it is even possible to report all the facts in a neutral way (for more, see below). Second, even if it is possible, Dr Sensitive’s strategy seems preferable; her strategy, if successful, positively promotes – as opposed to merely failing to interfere with – your ability to make autonomous choices.

At least at an abstract, ideal level, then, we have good reason to want informants who do more than merely list facts, but who are sensitive to their audiences’ epistemic situation and abilities and their evaluative commitments; we want experts who “well-lead” us. In my recent paper in Social Epistemology, I argued that certain widely-endorsed norms for science communication are, at best, irrelevant, and, at worst, dangerous (John 2018). We should be against transparency, openness, sincerity and honesty.

It’s a Bit Provocative

One way of understanding that paper is as following from the abstract ideal of sensitive communication, combined with various broadly sociological facts (for example, about how audiences identify experts). I understand why my article put Moore in mind of a paradigm case of paternalism. However, reflection on the hypothetical example suggests we should also be against “anti-paternalism” as a norm for science communication; not because Knowsbest’s strategy is fine, but, rather, because the term “paternalism” tends to bundle together a wide range of practices, not all of which are ethically problematic, and some of which promote – rather than hinder – audiences’ autonomy.

Beyond the accusation of paternalism, Moore’s rich and provocative response focuses on my scepticism about transparency. While I argued that a “folk philosophy of science” can lead audiences to distrust experts who are, in fact, trustworthy, he uses the example of HIV-AIDS activism to point to the epistemic benefits of holding scientists to account, suggesting that “it is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science”. I agree entirely that such a dynamic is possible; indeed, his example shows it does happen!

However, conceding this possibility does not show that we must endorse a norm of transparency, because, ultimately, the costs may still be greater than the benefits. Much here depends on the mechanisms by which transparency and engagement are enacted. Moore suggests one model for such engagement, via the work of “trust proxies”, such as ACT-UP. As he acknowledges, however, although proxies may be better placed than lay people to identify when science is flawed, this creates a new problem for the non-expert: to adapt a distinction from Goldman’s work, we must decide which “putative proxies” are “true proxies” (Goldman, 2001).

Plausibly, this problem is even harder than Goldman’s problem of distinguishing the “true experts” among the “putative experts”, because in the latter case we have some sense of the credentials and other markers which signal experthood. Again, then, I am tempted to say that it is unclear that transparency, openness or engagement will necessarily lead to better, rather than worse, socio-epistemic outcomes.

Knowledge From Observation and Practice

Does that mean my arguments against transparency are in the clear? No. First, many of the issues here turn on the empirical details; maybe careful institutional design can allow us to identify trustworthy trust-proxies, whose work promotes good science. Second, and more importantly, the abstract model of sensitive communication is an ideal. In practice, it is easy to fail to meet this ideal, in ways which undermine, rather than respect or promote, hearers’ autonomy.

For example, rather than tailor her communication to what her audiences do care about, Dr Sensitive might tailor what she says to what she thinks they ought to care about; as a result, she might leave out information which is relevant to their choices given their values, while including information which is irrelevant. An influential strain in recent philosophy of science suggests that non-epistemic value judgments do and must run deep in practices of justification; as such, even a bald report of what a study showed may, implicitly, encode or endorse value judgments which are not shared by the audience (Douglas, 2000).

Reporting claims when, and only when, they meet a certain confidence level may, for example, implicitly rely on assumptions about the relative disvalue of false positives and false negatives; in turn, it may be difficult to justify such assumptions without appeal to non-epistemic values (John, 2015). As such, even Dr Neutral may be unable to avoid communicating in ways which encode value judgments her audience does not share. In short, it may be hard to hand over our epistemic autonomy to experts without also handing over our moral autonomy.

This problem means that, for research to be trustworthy, it is not enough that the researchers’ claims be true; the claims must also be, at the least, neutral with respect to, and, at best, aligned with, audiences’ values. Plausibly, greater engagement and transparency may help ensure such value alignment. One might understand the example of ACT-UP along these lines: activist engagement ensured that scientists did “good science” not only in a narrow, epistemic sense of “good” – more, or more accurate, data and hypotheses were generated – but in the broader sense of being “well-ordered”, producing knowledge that better reflected the concerns and interests of the broader community (Kitcher, 2003).

Whether engagement improves epistemic outcomes narrowly construed is a contingent matter, heavily dependent on the details of the case. By contrast, engagement may be necessary for science to be “well-ordered”. In turn, transparency may be necessary for such engagement. At least, that is the possibility I would push were I to criticise my own conclusions in line with Moore’s concerns.

A Final Sting

Unfortunately, there is a sting in the tail. Developing effective frameworks for engagement and contestation may require us to accept that scientific research is not, and cannot be, fully “value free”. To the extent that such an assumption is a commitment of our “folk philosophy of science”, developing the kind of rigorous engagement which Moore wants may do as much to undermine as to promote our trust in true experts. Moore is surely right that the dynamics of trust and distrust are even more complex than my paper suggested; unfortunately, they might be more complex still than he suggests.

Contact details: sdj22@cam.ac.uk

References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. Springer.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of Science, 67(4), 559-579.

Goldman, A. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63(1), 85–110.

John, S. (2015). Inductive risk and the contexts of communication. Synthese, 192(1), 79-96.

John, S. (2018). Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75-87.

Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.

Author Information: Seungbae Park, Ulsan National Institute of Science and Technology, nature@unist.ac.kr

Park, Seungbae. “Philosophers and Scientists are Social Epistemic Agents.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 31-40.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yo


The example is from the regime of Hosni Mubarak, but these were the best photos the Digital Editor could find in Creative Commons when he was uploading the piece.

The style of examples common to epistemology, whether social or not, is often the innocuous, ordinary situation. But the most critical uses and misuses of knowledge and belief arise in all-too-ordinary situations as well. If scepticism about our powers to know and believe holds – or is at least held widely enough – then the most desperate political prisoner has lost her last glimmer of hope: truth.
Image by Hossam el-Hamalawy via Flickr / Creative Commons

 

In this paper, I reply to Markus Arnold’s comment and Amanda Bryant’s comment on my work “Can Kuhn’s Taxonomic Incommensurability be an Image of Science?” in Moti Mizrahi’s edited collection, The Kuhnian Image of Science: Time for a Decisive Transformation?.

Arnold argues that there is a gap between the editor’s expressed goal and the actual content of the book. Mizrahi states in the introduction that his book aims to increase “our understanding of science as a social, epistemic endeavor” (2018: 7). Arnold objects that it is “not obvious how the strong emphasis on discounting Kuhn’s incommensurability thesis in the first part of the book should lead to a better understanding of science as a social practice” (2018: 46). The first part of the volume includes my work. Admittedly, my work does not explicitly and directly state how it increases our understanding of science as a social enterprise.

Knowledge and Agreement

According to Arnold, an important meaning of incommensurability is “the decision after a long and futile debate to end any further communication as a waste of time since no agreement can be reached,” and it is this “meaning, describing a social phenomenon, which is very common in science” (Arnold, 2018: 46). Arnold has in mind Kuhn’s claim that a scientific revolution is completed not when opposing parties reach an agreement through rational argumentation but when the advocates of the old paradigm die of old age; that is, they do not give up on their paradigm until they die.

I previously argued that, given that most recent past paradigms coincide with present paradigms, most present paradigms will also coincide with future paradigms, and hence “taxonomic incommensurability will rarely arise in the future, as it has rarely arisen in the recent past” (Park, 2018a: 70). My argument entails that scientists’ decisions to end further communication with their opponents have been, and will be, rare; that is, such a social phenomenon has been and will remain uncommon.

On my account, the opposite social phenomenon has been, and will be, very common: scientists keep communicating with each other in order to reach agreement. Thus, my previous contention about the frequency of scientific revolutions increases our understanding of science as a social enterprise.

Let me now turn to Bryant’s comment on my criticism against Thomas Kuhn’s philosophy of science. Kuhn (1962/1970, 172–173) draws an analogy between the development of science and the evolution of organisms. According to evolutionary theory, organisms do not evolve towards a goal. Similarly, Kuhn argues, science does not develop towards truths. The kinetic theory of heat, for example, is no closer to the truth than the caloric theory of heat is, just as we are no closer to some evolutionary goal than our ancestors were. He claims that this analogy is “very nearly perfect” (1962/1970, 172).

My objection (2018a: 64–66) was that it is self-defeating for Kuhn to use evolutionary theory to justify his philosophical claim about the development of science that present paradigms will be replaced by incommensurable future paradigms. His philosophical view entails that evolutionary theory will be superseded by an incommensurable alternative, and hence evolutionary theory is not trustworthy. Since his philosophical view relies on this untrustworthy theory, it is also untrustworthy, i.e., we ought to reject his philosophical view that present paradigms will be displaced by incommensurable future paradigms.

Bryant replies that “Kuhn could adopt the language of a paradigm (for the purposes of drawing an analogy, no less!) without committing to the literal truth of that paradigm” (2018: 3). On her account, Kuhn could have used the language of evolutionary theory without believing that evolutionary theory is true.

Can We Speak a Truth Without Having to Believe It True?

Bryant’s defense of Kuhn’s position is brilliant. Kuhn would have responded exactly as she has if he had been exposed to my criticism above. In fact, it is a common view among many philosophers of science that we can adopt the language of a scientific theory without committing to the truth of it.

Bas van Fraassen, for example, states that “acceptance of a theory involves as belief only that it is empirically adequate” (1980: 12). He also states that if “the acceptance is at all strong, it is exhibited in the person’s assumption of the role of explainer” (1980: 12). These sentences indicate that, according to van Fraassen, we can invoke a scientific theory for the purpose of explaining phenomena without committing to its truth. Rasmus Winther (2009: 376), Gregory Dawes (2013: 68), and Finnur Dellsén (2016: 11) agree with van Fraassen on this point.

I have been pondering this issue for the past several years. The more I reflect upon it, however, the more I am convinced that it is problematic to use the language of a scientific theory without committing to the truth of it. This thesis would be provocative and objectionable to many philosophers, especially to scientific antirealists. So I invite them to consider the following two thought experiments.

First, imagine that an atheist uses the language of Christianity without committing to the truth of it (Park, 2015: 227, 2017a: 60). He is a televangelist, saying on TV, “If you worship God, you’ll go to heaven.” He converts millions of TV viewers into Christianity. As a result, his church flourishes, and he makes millions of dollars a year. To his surprise, however, his followers discover that he is an atheist.

They request him to explain how he could speak as if he were a Christian when he is an atheist. He replies that he can use the language of Christianity without believing that it conveys truths, just as scientific antirealists can use the language of a scientific theory without believing that it conveys the truth.

Second, imagine that scientific realists, who believe that our best scientific theories are true, adopt Kuhn’s philosophical language without committing to Kuhn’s view of science. They say, as Kuhn does, “Successive paradigms are incommensurable, so present and future scientists would not be able to communicate with each other.” Kuhn requests them to explain how they could speak as if they were Kuhnians when they are not Kuhnians. They reply that they can adopt his philosophical language without committing to his view of science, just as scientific antirealists can adopt the language of a scientific theory without committing to the truth of it.

The foregoing two thought experiments are intended as reductio ad absurdum arguments. That is, my reasoning is that if it is reasonable for scientific antirealists to speak the language of a scientific theory without committing to its truth, it should also be reasonable for the atheist to speak the language of Christianity and for scientific realists to speak Kuhn’s philosophical language. It is, however, unreasonable for them to do so.
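The shape of this reductio can be set out schematically (the regimentation is mine, not Park’s). Let \(R(x)\) abbreviate “it is reasonable for x to speak a language without committing to its truth”:

\[ R(\text{antirealist}) \rightarrow \big( R(\text{atheist}) \land R(\text{realist}) \big) \]

\[ \lnot R(\text{atheist}) \land \lnot R(\text{realist}) \]

\[ \therefore\; \lnot R(\text{antirealist}) \]

The first line is the parity claim driving both thought experiments, the second records the intuitive verdict on the televangelist and on the Kuhn-quoting realists, and the conclusion follows by modus tollens.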

Let me now diagnose the problems with the atheist’s speech acts and the scientific realists’ speech acts. The atheist’s speech acts go contrary to his belief that God does not exist, and the scientific realists’ speech acts go contrary to their belief that our best scientific theories are true. As a result, the atheist’s speech acts mislead his followers into believing that he is a Christian, and the scientific realists’ speech acts mislead their hearers into believing that they are Kuhnians.

Moore’s Paradox

Such speech acts raise an interesting philosophical issue. Imagine that someone says, “Snow is white, but I don’t believe snow is white.” The assertion of such a sentence involves Moore’s paradox. Moore’s paradox arises when we say a sentence of the form “p, but I don’t believe p” (Moore, 1993: 207–212). We can push the atheist above into Moore’s paradox. Imagine that he says, “If you worship God, you’ll go to heaven.” We request him to declare whether or not he believes what he just said. He declares, “I don’t believe if you worship God, you’ll go to heaven.” As a result, he is caught in Moore’s paradox, and he only puzzles his audience.
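The absurdity can be made explicit in the doxastic-logic notation standard in this literature (the formalisation is mine, not Moore’s or Park’s; \(B_s\) abbreviates “the speaker believes that”). The Moorean sentence itself is consistent:

\[ \varphi \;=\; p \land \lnot B_s p \]

The trouble arises at the level of assertion. If sincere assertion expresses belief, asserting \(\varphi\) commits the speaker to

\[ B_s(p \land \lnot B_s p) \]

and, given that belief distributes over conjunction, to both \(B_s p\) and \(B_s \lnot B_s p\): the speaker believes p while also believing that she does not believe it. On this rendering, Park’s diagnosis below falls out naturally: a hearer who takes the assertion of the first conjunct as sincere attributes \(B_s p\) to the speaker, and that attribution contradicts the second conjunct, \(\lnot B_s p\).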

The same is true of the scientific realists above. Imagine that they say, “Successive paradigms are incommensurable, so present and future scientists would not be able to communicate with each other.” We request them to declare whether or not they believe what they just said. They declare, “I don’t believe successive paradigms are incommensurable, so present and future scientists would not be able to communicate with each other.” As a result, they are caught in Moore’s paradox, and they only puzzle their audience.

Kuhn would also be caught in Moore’s paradox if he draws the analogy between the development of science and the evolution of organisms without committing to the truth of evolutionary theory, pace Bryant. Imagine that Kuhn says, “Organisms don’t evolve towards a goal. Similarly, science doesn’t develop towards truths. I, however, don’t believe organisms don’t evolve towards a goal.” He says, “Organisms don’t evolve towards a goal. Similarly, science doesn’t develop towards truths” in order to draw the analogy between the development of science and the evolution of organisms. He says, “I, however, don’t believe organisms don’t evolve towards a goal,” in order to express his refusal to believe that evolutionary theory is true. It is, however, a Moorean sentence: “Organisms don’t evolve towards a goal. I, however, don’t believe organisms don’t evolve towards a goal.” The assertion of such a sentence gives rise to Moore’s paradox.

Scientific antirealists would also be caught in Moore’s paradox, if they explain phenomena in terms of a scientific theory without committing to the truth of it, pace van Fraassen. Imagine that scientific antirealists say, “The space between two galaxies expands because dark energy exists between them, but I don’t believe that dark energy exists between two galaxies.” They say, “The space between two galaxies expands because dark energy exists between them,” in order to explain why the space between galaxies expands.

They add, “I don’t believe that dark energy exists between two galaxies,” in order to express their refusal to commit to the truth of the theoretical claim that dark energy exists. It is, however, a Moorean sentence: “The space between two galaxies expands because dark energy exists between them, but I don’t believe that dark energy exists between two galaxies.” Asserting such a sentence will only puzzle their audience. Consequently, Moore’s paradox bars scientific antirealists from invoking scientific theories to explain phenomena (Park, 2017b: 383, 2018b: Section 4).

Researchers on Moore’s paradox believe that “contradiction is at the heart of the absurdity of saying a Moorean sentence, but it is not obvious wherein contradiction lies” (Park, 2014: 345). Park (2014: 345) argues that when you say, “Snow is white,” your audience believe that you believe that snow is white. Their belief that you believe that snow is white contradicts the second conjunct of your Moorean sentence that you do not believe that snow is white.

Thus, the contradiction lies in your audience’s belief and the second conjunct of your Moorean sentence. The present paper does not aim to flesh out and defend this view of wherein lies the contradiction. It rather aims to show that Moore’s paradox prevents us from using the language of a scientific theory without committing to the truth of it, pace Bryant and van Fraassen.

The Real Consequences of Speaking What You Don’t Believe

Set Moore’s paradox aside. Let me raise another objection to Bryant and van Fraassen. Imagine that Kuhn encounters a philosopher of mind. The philosopher of mind asserts, “A mental state is reducible to a brain state.” Kuhn realizes that the philosopher of mind espouses the identity theory of mind, but he knows that the identity theory has already been refuted by the multiple realizability argument. So he presents the multiple realizability argument to the philosopher of mind. The philosopher of mind is persuaded by the multiple realizability argument and admits that the identity theory is not tenable.

To Kuhn’s surprise, however, the philosopher of mind claims that when he said, “A mental state is reducible to a brain state,” he spoke the language of the identity theory without committing to its truth, so his position is not refuted by Kuhn. Note that the philosopher of mind escapes the refutation of his position by saying that he did not believe what he stated. If it is reasonable for Kuhn to escape refutation in this way, it is equally reasonable for the philosopher of mind to do so. Kuhn would think that it is not reasonable for the philosopher of mind to do so.

Kuhn, however, might bite the bullet, saying that it is reasonable for the philosopher of mind to do so. The strategy to avoid the refutation, Kuhn might continue, only reveals that the identity theory was not his position after all. Evaluating arguments does not require that we identify the beliefs of the authors of arguments. In philosophy, we only need to care about whether arguments are valid or invalid, sound or unsound, strong or weak, and so on.

To speculate about what beliefs the authors of arguments hold, as a way of evaluating their arguments, is to rely implicitly on an argument from authority, i.e., to think as though the authors’ beliefs, rather than the form and content of their arguments, determine the strength of those arguments.

We, however, need to consider under what conditions we accept the conclusion of an argument in general. We accept it when the premises are plausible and when the conclusion follows from them. We can tell whether the conclusion follows from the premises without knowing whether the author believes that it does. In many cases, however, we cannot tell whether the premises are plausible without the author’s belief that they are.

Imagine, for example, that a witness states in court that a defendant is guilty because the defendant was at the crime scene. The judge can tell whether the conclusion follows from the premise without the witness’s belief that it does. The judge, however, cannot tell whether the premise is plausible without the witness’s belief that it is. Imagine that the witness says that the defendant is guilty because the defendant was at the crime scene, but then declares that he does not believe that the defendant was at the crime scene. Since the witness does not believe that the premise is true, the judge has no reason to believe that it is true. It is unreasonable for the judge to evaluate the witness’s argument independently of whether the witness believes that the premise is true.

In a nutshell, an argument loses its persuasive force if the author of the argument does not believe that its premises are true. Thus, if you aim to convince your audience that your argument is cogent, you should yourself believe that the premises are true. If you declare that you do not believe that the premises are true, your audience will ask you some disconcerting questions: “If you don’t, why should I believe what you don’t? How can you say to me what you don’t believe? Do you expect me to believe what you don’t?” (Park, 2018b: Section 4).

In case you still think that it is harmless and legitimate to speak what you do not believe, I invite you to imagine that your political rival commits murder to frame you. A false charge is brought against you, and you are tried in court. The prosecutor has a strong indictment against you. You state vehemently that you did not commit murder. You, however, have no physical evidence supporting your statement. Furthermore, you are well known as a person who vehemently speaks what he does not believe. Not surprisingly, the judge passes a death sentence on you, thinking that you are merely speaking the language of the innocent. The point of this sad story is that speaking what you do not believe may result in tragedy in certain cases.

A Solution With a Prestigious Inspiration

Let me now turn to a slightly different, but related, issue. Under what condition can I refute your belief when you speak contrary to what you believe? I can do it only when I have direct access to your doxastic states, i.e., only when I can identify your beliefs without the mediation of your language. It is not enough for me to interpret your language correctly and present powerful evidence against what your language conveys.

After all, whenever I present such evidence to you, you will escape the refutation of what you stated simply by saying that you did not believe what you stated. Thus, Bryant’s defense of Kuhn’s position from my criticism above amounts to imposing an excessively high epistemic standard on Kuhn’s opponents. After all, his opponents do not have direct access to his doxastic states.

In this context, it is useful to be reminded of the epistemic imperative: “Act only on an epistemic maxim through which you can at the same time will that it should become a universal one” (Park, 2018c: 3). Consider the maxim “Escape the refutation of your position by saying you didn’t believe what you stated.” If you cannot will this maxim to become a universal one, you ought not to act on it yourself. It is immoral for you to act on the maxim despite the fact that you cannot will it to become a universal maxim. Thus, the epistemic imperative can be invoked to argue that Kuhn ought not to use the language of evolutionary theory without committing to the truth of it, pace Bryant.

Let me now raise a slightly different, although related, issue. Recall that according to Bryant, Kuhn could adopt the language of evolutionary theory without committing to the truth of it. Admittedly, there is an epistemic advantage of not committing to the truth of evolutionary theory on Kuhn’s part. The advantage is that he might avoid the risk of forming a false belief regarding evolutionary theory. Yet, he can stick to his philosophical account of science according to which science does not develop towards truths, and current scientific theories will be supplanted by incommensurable alternatives.

There is, however, an epistemic disadvantage of not committing to the truth of a scientific theory. Imagine that Kuhn is not only a philosopher and historian of science but also a scientist. He has worked hard for several decades to solve a scientific problem that has been plaguing an old scientific theory. Finally, he hits upon a great scientific theory that handles the recalcitrant problem. His scientific colleagues reject the old scientific theory and accept his new scientific theory, i.e., a scientific revolution occurs.

He becomes famous not only among scientists but also among the general public. He is so excited about his new scientific theory that he believes that it is true. Some philosophers, however, come along and dispirit him by saying that they do not believe that his new theory is true, and that they do not even believe that it is closer to the truth than its predecessor was. Kuhn protests that his new theory has theoretical virtues, such as accuracy, simplicity, and fruitfulness. Not impressed by these virtues, however, the philosophers reply that science does not develop towards truths, and that his theory will be displaced by an incommensurable alternative. They were exposed to Kuhn’s philosophical account of science!

Epistemic Reciprocation

They have adopted a philosophical position called epistemic reciprocalism, according to which “we ought to treat our epistemic colleagues, as they treat their epistemic agents” (Park, 2017a: 57). Epistemic reciprocalists are scientific antirealists’ true adversaries. Scientific antirealists refuse to believe that their epistemic colleagues’ scientific theories are true for fear that they might form false beliefs.

In return, epistemic reciprocalists refuse to believe that scientific antirealists’ positive theories are true for fear that they might form false beliefs. We, as epistemic agents, are not only interested in avoiding false beliefs but also in propagating “to others our own theories which we are confident about” (Park, 2017a: 58). Scientific antirealists achieve the first epistemic goal at the cost of the second epistemic goal.

Epistemic reciprocalism is built upon the foundation of social epistemology, which claims that we are not asocial epistemic agents but social epistemic agents. Social epistemic agents are those who interact with each other over the matters of what to believe and what not to believe. So they take into account how their interlocutors treat their epistemic colleagues before taking epistemic attitudes towards their interlocutors’ positive theories.

Let me now turn to another of Bryant’s defenses of Kuhn’s position. She says that it is not clear that the analogy between the evolution of organisms and the development of science is integral to Kuhn’s account. Kuhn could “have ascribed the same characteristics to theory change without referring to evolutionary theory at all” (Bryant, 2018: 3). In other words, Kuhn’s contention that science does not develop towards truths rises or falls independently of the analogy between the development of science and the evolution of organisms. Again, this defense of Kuhn’s position is brilliant.

Consider, however, that the development of science is analogous to the evolution of organisms, regardless of whether Kuhn makes use of the analogy to defend his philosophical account of science or not, and that the fact that they are analogous is a strike against Kuhn’s philosophical account of science. Suppose that Kuhn believes that science does not develop towards truths, but that he does not believe that organisms do not evolve towards a goal, despite the fact that the development of science is analogous to the evolution of organisms.

An immediate objection to his position is that it is not clear on what grounds he embraces the philosophical claim about science, but not the scientific claim about organisms, when the two claims parallel each other. It is ad hoc merely to suggest that the scientific claim is untrustworthy, but that the philosophical claim is trustworthy. What is so untrustworthy about the scientific claim, but so trustworthy about the philosophical claim? It would be difficult to answer these questions because the development of science and the evolution of organisms are similar to each other.

A moral is that if philosophers reject our best scientific theories, they cannot make philosophical claims that are similar to what our best scientific theories assert. In general, the more philosophers reject scientific claims, the more impoverished their philosophical positions will be, and the heavier their burdens will be to prove that their philosophical claims are dissimilar to the scientific claims that they reject.

Moreover, it is not clear what Kuhn could say to scientists who take the opposite position in response to him. They believe that organisms do not evolve towards a goal, but refuse to believe that science does not develop towards truths. To go further, they trust scientific claims, but distrust philosophical claims. They protest that it is a manifestation of philosophical arrogance to suppose that philosophical claims are worthy of beliefs, but scientific claims are not.

This possible response to Kuhn reminds us of the Golden Rule: Treat others as you want to be treated. Philosophers ought to treat scientists as they want to be treated, concerning epistemic matters. Suppose that a scientific claim is similar to a philosophical claim. If philosophers do not want scientists to hold a double standard with respect to the scientific and philosophical claims, philosophers should not hold a double standard with respect to them.

There “is no reason for thinking that the Golden Rule ranges over moral matters, but not over epistemic matters” (Park, 2018d: 77–78). Again, we are not asocial epistemic agents but social epistemic agents. As such, we ought to behave in accordance with the epistemic norms governing the behavior of social epistemic agents.

Finally, the present paper is intended to be critical of Kuhn’s philosophy of science while enshrining his insight that science is a social enterprise, and that scientists are social epistemic agents. I appealed to Moore’s paradox, epistemic reciprocalism, the epistemic imperative, and the Golden Rule in order to undermine Bryant’s defenses of Kuhn’s position from my criticism. All these theoretical resources can be used to increase our understanding of science as a social endeavor. Let me add to Kuhn’s insight that philosophers are also social epistemic agents.

Contact details: nature@unist.ac.kr

References

Arnold, Markus. “Is There Anything Wrong with Thomas Kuhn?”, Social Epistemology Review and Reply Collective 7, no. 5 (2018): 42–47.

Bryant, Amanda. “Each Kuhn Mutually Incommensurable”, Social Epistemology Review and Reply Collective 7, no. 6 (2018): 1–7.

Dawes, Gregory. “Belief is Not the Issue: A Defence of Inference to the Best Explanation”, Ratio: An International Journal of Analytic Philosophy 26, no. 1 (2013): 62–78.

Dellsén, Finnur. “Understanding without Justification or Belief”, Ratio: An International Journal of Analytic Philosophy (2016). DOI: 10.1111/rati.12134.

Kuhn, Thomas. The Structure of Scientific Revolutions. 2nd ed. The University of Chicago Press, (1962/1970).

Mizrahi, Moti. “Introduction”, In The Kuhnian Image of Science: Time for a Decisive Transformation? Moti Mizrahi (ed.), London: Rowman & Littlefield, (2018): 1–22.

Moore, George. “Moore’s Paradox”, In G.E. Moore: Selected Writings. Baldwin, Thomas (ed.), London: Routledge, (1993).

Park, Seungbae. “On the Relationship between Speech Acts and Psychological States”, Pragmatics and Cognition 22, no. 3 (2014): 340–351.

Park, Seungbae. “Accepting Our Best Scientific Theories”, Filosofija. Sociologija 26, no. 3 (2015): 218–227.

Park, Seungbae. “Defense of Epistemic Reciprocalism”, Filosofija. Sociologija 28, no. 1 (2017a): 56–64.

Park, Seungbae. “Understanding without Justification and Belief?” Principia: An International Journal of Epistemology 21, no. 3 (2017b): 379–389.

Park, Seungbae. “Can Kuhn’s Taxonomic Incommensurability Be an Image of Science?” In The Kuhnian Image of Science: Time for a Decisive Transformation? Moti Mizrahi (ed.), London: Rowman & Littlefield, (2018a): 61–74.

Park, Seungbae. “Should Scientists Embrace Scientific Realism or Antirealism?”, Philosophical Forum (2018b): (to be assigned).

Park, Seungbae. “In Defense of the Epistemic Imperative”, Axiomathes (2018c). DOI: https://doi.org/10.1007/s10516-018-9371-9.

Park, Seungbae. “The Pessimistic Induction and the Golden Rule”, Problemos 93 (2018d): 70–80.

van Fraassen, Bas. The Scientific Image. Oxford: Oxford University Press, (1980).

Winther, Rasmus. “A Dialogue”, Metascience 18 (2009): 370–379.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8


A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

 

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).
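Set out schematically, the lay person’s implicit reasoning runs roughly as follows (the regimentation and the predicate names are mine, not John’s):

\begin{align*}
\text{(E)}\quad & \mathit{Meets}(c) \rightarrow \mathit{Accept}(c) && \text{the epistemological premise}\\
\text{(S)}\quad & \mathit{Asserted}(c) \Rightarrow_{\mathrm{IBE}} \mathit{Meets}(c) && \text{the sociological premise}\\
\text{(T)}\quad & \mathit{Asserted}(c) && \text{the expert community asserts } c\\
\therefore\quad & \mathit{Accept}(c) && \text{defeasibly, via (S), (T), then (E)}
\end{align*}

Here \(\Rightarrow_{\mathrm{IBE}}\) marks an abductive, best-explanation inference rather than an entailment, which is why the conclusion is only defeasible: the sceptic’s doubt enters at (S), leaving (E) untouched.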

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climatic Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and it thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and it is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005: 863). By way of example, he compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in his case, by pension funds) involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, he claims, produces worse outcomes than monitoring only by results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than in the way the agent regards as most effective.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather ‘denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency’ (Epstein 1996: 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

 

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to judge whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort, with the skills and resources to do this sort of monitoring: it developed competence while retaining interests closely aligned with those of the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to engage more deeply with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: the recordings seem to show you what is going on, but in fact they are likely to mislead you, because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review, 111, no. 2 (2017) pp. 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., and Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018) 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005), 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author Information: Manuel Padilla Cruz, University of Seville, mpadillacruz@us.es

Cruz, Manuel Padilla. “Conceptual Competence Injustice and Relevance Theory, A Reply to Derek Anderson.” Social Epistemology Review and Reply Collective 6, no. 12 (2017): 39-50.


The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3RS

Contestants from the 2013 Scripps National Spelling Bee. Image from Scripps National Spelling Bee, via Flickr / Creative Commons

 

Derek Anderson (2017a) has recently differentiated conceptual competence injustice and characterised it as the wrong done when, on the grounds of the vocabulary used in interaction, a person is believed not to have a sophisticated or rich conceptual repertoire. His most interesting, insightful and illuminating work induced me to propose incorporating this notion into the field of linguistic pragmatics as a way of conceptualising an undesired and unexpected perlocutionary effect: the attribution of a lower level of communicative or linguistic competence. Such attributions may be drawn from a perception of seemingly poor performance stemming from a lack of the words necessary to refer to specific elements of reality, or from misuse of the adequate ones (Padilla Cruz 2017a).

Relying on the cognitive pragmatic framework of relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004), I also argued that such a perlocutionary effect would be an unfortunate by-product of the constant tendency to search for the optimal relevance of intentional stimuli like single utterances or longer stretches of discourse. More specifically, while aiming for maximum cognitive gain in exchange for a reasonable amount of cognitive effort, the human mind may activate or access assumptions about a language user’s linguistic or communicative performance, and feed them as implicated premises into inferential computations.
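Sperber and Wilson define relevance comparatively rather than quantitatively, but the trade-off at work here is often glossed in expository contexts – the gloss is mine, not their official formulation – as a ratio:

\[ \mathrm{Relevance}(i) \;\propto\; \frac{\text{positive cognitive effects yielded by input } i}{\text{processing effort required by input } i} \]

Other things being equal, the more cognitive effects an input yields, the more relevant it is, and the more effort it demands, the less relevant it is. On this gloss, easily accessible assumptions about a speaker’s performance offer a low-effort route to extra cognitive effects, which is why the mind may feed them into inference as implicated premises.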

Although those assumptions might not really have been intended by the language user, they are made manifest by her[1] behaviour and may be exploited in inference, even if at the hearer’s sole responsibility and risk. Those assumptions are weak implicated premises and their interaction with other mentally stored information yields weakly implicated conclusions (Sperber and Wilson 1986/1995; Wilson and Sperber 2004). Since their content pertains to the speaker’s behaviour, they are behavioural implicatures (Jary 2013); since they negatively impact on an individual’s reputation as a language user, they turn out to be detrimental implicatures (Jary 1998).

My proposal about the benefits of the notion of conceptual competence injustice to linguistic pragmatics promptly received a reply from Anderson (2017b). He considers that the intention underlying my comment on his work was “[…] to model conceptual competence injustice within relevance theory” and points out that my proposal “[…] must be tempered with the proper understanding of that phenomenon as a structural injustice” (Anderson 2017b: 36; emphasis in the original). Furthermore, he also claims that relevance theory “[…] does not intrinsically have the resources to identify instances of conceptual competence injustice” (Anderson 2017b: 36).

In what follows, I purport to clarify two issues. Firstly, my suggestion to incorporate conceptual competence injustice into linguistic pragmatics necessarily relies on a much broader, more general and loosened understanding of this notion. Even if such an understanding deprives it of some of its essential, defining conditions –namely, the existence of different social identities and of matrices of domination– it may somehow capture the ontology of the unexpected effects that communicative performance may result in: an unfair appraisal of capacities.

Secondly, my intention when commenting on Anderson’s (2017a) work was not actually to model conceptual competence injustice within relevance theory, but to show that this pragmatic framework is well equipped and most appropriate to account for the cognitive processes and the reasons underlying the unfortunate negative effects that may be alluded to with the notion I am advocating for. Therefore, I will argue that relevance theory does in fact have the resources to explain why some injustices stemming from communicative performance may originate. To conclude, I will elaborate on the factors that may lead to wrong ascriptions of conceptual and lexical competence.

What Is Conceptual Competence Injustice?

As a sub-type of epistemic injustice (Fricker 2007), conceptual competence injustice arises in scenarios where there are privileged epistemic agents who (i) are prejudiced against members of specific social groups, identities or minorities, and (ii) exert power as a means of oppression. Such agents make “[…] false judgments of incompetence [which] function as part of a broader, reliable pattern of marginalization that systematically undermines the epistemic agency of members of an oppressed social identity” (Anderson 2017b: 36). Therefore, conceptual competence injustice is a way of denigrating individuals as knowers of specific domains of reality and ultimately of disempowering, discriminating against and excluding them, so it “[…] is a form of epistemic oppression […]” (Anderson 2017b: 36).

Lack or misuse of vocabulary may result in wronging if hearers conclude that certain concepts denoting specific elements of reality –objects, animals, actions, events, etc.– are not available to particular speakers or that they have erroneously mapped those concepts onto lexical items. When this happens, speakers’ conceptualising and lexical capacities could be deemed to be below alleged or actual standards. Since lexical competence is one of the pillars of communicative competence (Hymes 1972; Canale 1983; Bachman 1990; Celce-Murcia et al. 1995), that judgement could contribute to downgrading speakers on an alleged scale of communicative competence and, consequently, to regarding them as partially or fully incompetent.

According to Medina (2011), competence is a comparative and contrastive property. On the one hand, skilfulness in some domain may be compared to that in (an)other domain(s), so a person may be very skilled in areas like languages, drawing, football, etc., but not in others like mathematics, oil painting, basketball, etc. On the other hand, knowledge of and abilities in some matters may be greater or lesser than those of other individuals. Competence, moreover, may be characterised as gradual and context-dependent. Degree of competence –i.e. its depth and width, so to say– normally increases because of age, maturity, personal circumstances and experience, or factors such as instruction and subsequent learning, needs, interests, motivation, etc. In turn, the way in which competence surfaces may be affected by a variety of intertwined factors (Mustajoki 2012; Padilla Cruz 2017b), which include the following.

Factors Affecting Competence in Communication

Internal –i.e. person-related– factors, among which feature:

Relatively stable factors, such as (i) other knowledge and abilities, regardless of their actual relatedness to a particular competence, and (ii) cognitive styles –i.e. patterns of accessing and using knowledge items, among which are concepts and words used to name them.

Relatively unstable factors, such as (i) psychological states like nervousness, concentration, absent-mindedness, emotional override, or simply experiencing feelings like happiness, sadness, depression, etc.; (ii) physiological conditions like tiredness, drowsiness, drunkenness, etc., or (iii) performance of actions necessary for physiological functions like swallowing, sipping, sneezing, etc. These may facilitate or hinder access to and usage of knowledge items including concepts and words.

External –i.e. situation-related– factors, which encompass (i) the spatio-temporal circumstances where encounters take place, and (ii) the social relations with other participants in an encounter. For instance, haste, urgency or (un)familiarity with a setting may ease or impede access to and usage of knowledge items, as may experiencing social distance and/or more or less power with respect to another individual (Brown and Levinson 1987).

While ‘social distance’ refers to (un)acquaintance with other people and (dis)similarity with them as a result of perceptions of membership of a social group, ‘power’ does not simply allude to the possibility of imposing upon others and conditioning their behaviour as a consequence of differing positions in a particular hierarchy within a specific social institution. ‘Power’ also refers to the likelihood of imposing upon other people owing to perceived or supposed expertise in a field –i.e. expert power, like that exerted by, for instance, a professor over students– or to admiration of diverse personal attributes –i.e. referent power, like that exerted by, for example, a pop idol over fans (Spencer-Oatey 1996).

There Must Be Some Misunderstanding

Conceptualising capacities, conceptual inventories and lexical competence also partake of the four features listed above: gradualness, comparativeness, contrastiveness and context-dependence. Needless to say, all three of them increase as a consequence of growth and of exposure to or participation in a plethora of situations and events, among which education or training are fundamental. Conceptualising capacities and lexical competence may be more or less developed or accurate than other abilities, among which are the other sub-competences upon which communicative competence depends –i.e. phonetics, morphology, syntax and pragmatics (Hymes 1972; Canale 1983; Bachman 1990; Celce-Murcia et al. 1995).

Additionally, conceptual inventories enabling lexical performance may be rather complex in some domains but not in others –e.g. a person may store many concepts and possess a rich vocabulary pertaining to, for instance, linguistics, but lack or have rudimentary ones about sports. Finally, lexical competence may appear to be higher or lower than that of other individuals under specific spatio-temporal and social circumstances, or because of the influence of the aforesaid psychological and physiological factors, or actions performed while speaking.

Apparent knowledge and usage of general or domain-specific vocabulary may be assessed and compared to those of other people, but performance may be hindered or fail to meet expectations because of the aforementioned factors. If it were considered deficient, inferior or lower than that of other individuals, such a judgement should concern only knowledge and usage of vocabulary in a specific domain, and be relative only to a particular moment, perhaps under specific circumstances.

Unfortunately, people often extrapolate and (over)generalise, so they may take (seeming) lexical gaps at a particular time in a speaker’s life, or one-off, occasional or momentary lexical infelicities, to suggest or unveil more global and overarching conceptualising handicaps or lexical deficits. This leads people not only to doubt the richness and broadness of that speaker’s conceptual inventory and lexical repertoire, but also to question her conceptualising abilities and what may be labelled her conceptual accuracy –i.e. the capacity to create concepts that adequately capture nuances in elements of reality and facilitate correct reference to those elements– as well as her lexical efficiency or lexical reliability –i.e. the ability to use vocabulary appropriately.

As long as doubts are cast on the amount and accuracy of the concepts available to a speaker and on her ability to verbalise them, there arises an unwarranted and unfair wronging which would count as an injustice concerning that speaker’s conceptualising skills, stock of concepts and expressive abilities. The loosened notion of conceptual competence injustice whose incorporation into the field of linguistic pragmatics I advocated does not necessarily presuppose a previous discrimination or prejudice negatively biasing hegemonic, privileged or empowered individuals against minorities or identities.

Wrong is done, and an epistemic injustice is therefore inflicted, when another person’s conceptual inventory, lexical repertoire and expressive skills are underestimated or negatively evaluated owing to (i) perception of a communicative behaviour that is felt not to meet expectations or to be below alleged standards, (ii) tenacious adherence to those expectations or standards, and (iii) unawareness of the likely influence of various factors on performance. This wronging may nonetheless lead to subsequently downgrading that person as regards her communicative competence, discrediting her conceptual accuracy and lexical efficiency/reliability, and denigrating her as a speaker of a language and, therefore, as an epistemic agent. Relying on all this, further discrimination on other grounds may ensue, or an already existing one may be strengthened and perpetuated.

Relevance Theory and Conceptual Competence Injustice

Initially put forth in 1986, and slightly refined almost ten years later, relevance theory is a pragmatic framework that aims to explain (i) why hearers select particular interpretations out of the various possible ones that utterances may have –all of which are compatible with the linguistically encoded and communicated information– (ii) how hearers process utterances, and (iii) how and why utterances and discourse give rise to a plethora of effects (Sperber and Wilson 1986/1995). Accordingly, it concentrates on the cognitive side of communication: comprehension and the mental processes intervening in it.

Relevance theory (Sperber and Wilson 1986/1995) reacted against the so-called code model of communication, which was deeply entrenched in western linguistics. According to this model, communication merely consists of encoding thoughts or messages into utterances, and decoding these in order to arrive at speaker meaning. Since speakers cannot encode everything they intend to communicate and absolute explicitness is practically unattainable, relevance theory portrays communication as an ostensive-inferential process where speakers draw the audience’s attention by means of intentional stimuli. On some occasions these amount to direct evidence –i.e. showing– of what speakers mean, so their processing requires inference; on other occasions, intentional stimuli amount to indirect –i.e. encoded– evidence of speaker meaning, so their processing relies on decoding.

However, in most cases the stimuli produced in communication combine direct with indirect evidence, so their processing depends on both inference and decoding (Sperber and Wilson 2015). Intentional stimuli make manifest speakers’ informative intention –i.e. the intention that the audience create a mental representation of the intended message, or, in other words, a plausible interpretative hypothesis– and their communicative intention –i.e. the intention that the audience recognise that speakers do have a particular informative intention. The role of hearers, then, is to arrive at speaker meaning by means of both decoding and inference (but see below).

Relevance theory also reacted against philosopher Herbert P. Grice’s (1975) view of communication as a joint endeavour where interlocutors identify a common purpose and may abide by, disobey or flout a series of maxims pertaining to communicative behaviour –those of quantity, quality, relation and manner– which articulate the so-called cooperative principle. Although Sperber and Wilson (1986/1995) seriously question the existence of such a principle, they nevertheless rest squarely on a notion already present in Grice’s work, but which he unfortunately left undefined: relevance. This becomes the cornerstone of their framework. Relevance is claimed to be a property of intentional stimuli and characterised on the basis of two factors:

Cognitive effects, or the gains resulting from the processing of utterances: (i) strengthening of old information, (ii) contradiction and rejection of old information, and (iii) derivation of new information.

Cognitive or processing effort, which is the effort of memory to select or construct a suitable mental context for processing utterances and to carry out a series of simultaneous tasks that involve the operation of a number of mental mechanisms or modules: (i) the language module, which decodes and parses utterances; (ii) the inferential module, which relates information encoded and made manifest by utterances to already stored information; (iii) the emotion-reading module, which identifies emotional states; (iv) the mindreading module, which attributes mental states, and (v) vigilance mechanisms, which assess the reliability of informers and the believability of information (Sperber and Wilson 1986/1995; Wilson and Sperber 2004; Sperber et al. 2010).

Relevance is a scalar property that is directly proportional to the amount of cognitive effects that an interpretation gives rise to, but inversely proportional to the expenditure of cognitive effort required. Interpretations are relevant if they yield cognitive effects in return for the cognitive effort invested. Optimal relevance emerges when the effect-effort balance is satisfactory. If an interpretation is found to be optimally relevant, it is chosen by the hearer and thought to be the intended interpretation. Hence, optimal relevance is the property determining the selection of interpretations.
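This comparative definition may be glossed schematically as follows. The formula is only an informal rendering, not Sperber and Wilson’s own formalisation; they deliberately refrain from quantifying either factor:

\[
\text{Relevance}(I) \propto \frac{\text{cognitive effects yielded by } I}{\text{processing effort demanded by } I}
\]

Other things being equal, of two candidate interpretations the one yielding more effects, or demanding less effort, is the more relevant.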

The Power of Relevance Theory

Sperber and Wilson’s (1986/1995) ideas and claims gave rise to a whole branch in cognitive pragmatics that is now known as relevance-theoretic pragmatics. After years of intense, illuminating and fruitful work, relevance theorists have offered a plausible model for comprehension. In it, interpretative hypotheses –i.e. likely interpretations– are said to be formulated during a process of mutual parallel adjustment of the explicit and implicit content of utterances, where the said modules and mechanisms perform a series of simultaneous, incredibly fast tasks at a subconscious level (Carston 2002; Wilson and Sperber 2004).

Decoding only yields a minimally parsed chunk of concepts that is not yet fully propositional, so it cannot be truth-evaluable: the logical form. This form needs pragmatic or contextual enrichment by means of additional tasks wherein the inferential module relies on contextual information and is sometimes constrained by the procedural meaning –i.e. processing instructions– encoded by some linguistic elements.

Those tasks include (i) disambiguation of syntactic constituents; (ii) assignment of reference to words like personal pronouns, proper names, deictics, etc.; (iii) adjustment of the conceptual content encoded by words like nouns, verbs, adjectives or adverbs, and (iv) recovery of unarticulated constituents. Completion of these tasks results in the lower-level explicature of an utterance, which is a truth-evaluable propositional form amounting to the explicit content of an utterance. Construction of lower-level explicatures depends on decoding and inference: the more decoding involved, the more explicit and stronger these explicatures are; conversely, the more inference needed, the less explicit and weaker they are (Wilson and Sperber 2004).

A lower-level explicature may further be embedded into a conceptual schema that captures the speaker’s attitude(s) towards the proposition expressed, her emotion(s) or feeling(s) when saying what she says, or the action that she intends or expects the hearer to perform by saying what she says. This schema is the higher-level explicature and is also part of the explicit content of an utterance.

It is sometimes built through decoding some of the elements in an utterance –e.g. attitudinal adverbs like ‘happily’ or ‘unfortunately’ (Ifantidou 1992) or performative verbs like ‘order’, ‘apologise’ or ‘thank’ (Austin 1962)– and other times through inference, emotion-reading and mindreading –as in the case of, for instance, interjections, intonation or paralanguage (Wilson and Wharton 2006; Wharton 2009, 2016) or indirect speech acts (Searle 1969; Grice 1975). As in the case of lower-level explicatures, higher-level ones may also be strong or weak depending on the amount of decoding, emotion-reading and mindreading involved in their construction.

The explicit content of utterances may additionally be related to information stored in the mind or perceptible from the environment. Those information items act as implicated premises in inferential processes. If the hearer has enough evidence that the speaker intended or expected him to resort to and use those premises in inference, they are strong, but, if he does so at his own risk and responsibility, they are weak. Interaction of the explicit content with implicated premises yields implicated conclusions. Altogether, implicated premises and implicated conclusions make up the implicit content of an utterance. Arriving at the implicit content completes mutual parallel adjustment, which is a process constantly driven by expectations of relevance, in which the more plausible, less effort-demanding and more effect-yielding possibilities are normally chosen.
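To make the selection mechanism just described more concrete, the following toy sketch models the choice among candidate interpretations as a search for the best effects-to-effort balance. It is an illustrative approximation under stated assumptions, not part of relevance theory’s formal apparatus: the numeric scores, the example utterance and all names (Interpretation, most_relevant) are hypothetical, since Sperber and Wilson leave both factors unquantified.

# Toy model (illustrative only): choosing among candidate interpretative
# hypotheses by balancing stipulated cognitive effects against stipulated
# processing effort.
from dataclasses import dataclass

@dataclass
class Interpretation:
    gloss: str       # a candidate interpretative hypothesis
    effects: float   # stipulated cognitive effects it would yield
    effort: float    # stipulated processing effort it would demand

def most_relevant(candidates):
    """Return the candidate with the best effects-to-effort balance."""
    return max(candidates, key=lambda c: c.effects / c.effort)

# Hypothetical utterance: "She has quite a vocabulary... for a beginner."
candidates = [
    Interpretation("plain compliment on vocabulary", effects=1.0, effort=1.0),
    Interpretation("faint praise implicating limited competence", effects=3.0, effort=1.5),
]

print(most_relevant(candidates).gloss)
# -> faint praise implicating limited competence

On this crude rendering, an implicated conclusion unfavourable to the speaker may be selected simply because the assumptions supporting it are highly accessible and effect-rich, which anticipates the explanation of unfair competence judgements given below.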

The Limits of Relevance Theory

As a model centred on comprehension and interpretation of ostensive stimuli, relevance theory (Sperber and Wilson 1986/1995) does not need to be able to identify instances of conceptual competence injustice, as Anderson (2017b) remarks, nor even instances of the negative consequences of communicative behaviour that may be alluded to by means of the broader, loosened notion of conceptual competence injustice I argued for. Rather, as a cognitive framework, its role is to explain why and how these originate. And, certainly, its notional apparatus and the cognitive machinery intervening in comprehension which it describes can satisfactorily account for (i) the ontology of unwarranted judgements of lexical and conceptual (in)competence, (ii) their origin and (iii) some of the reasons why they are made.

Accordingly, those judgements (i) are implicated conclusions which (ii) are derived during mutual parallel adjustment as a result of (iii) accessing some manifest assumptions and using these as implicated premises in inference. Obviously, the implicated premises that yield the negative conclusions about (in)competence might not have been intended by the speaker, who would not be interested in the hearer accessing and using them. However, her communicative performance makes manifest assumptions alluding to her lexical lacunae and mistakes, and these lead the hearer to draw undesired conclusions.

Relevance theory (Sperber and Wilson 1986/1995) is powerful enough to offer a cognitive explanation of the said three issues, and this alone was what I aimed to show in my comment on Anderson’s (2017a) work. Two different issues, nevertheless, are (i) the reasons why certain prejudicial assumptions become manifest to an audience and (ii) why those assumptions end up being distributed across the members of certain wide social groups.

As Anderson (2017b) underlines, conceptual competence injustices must necessarily be contextualised in situations where privileged and empowered social groups are negatively biased or prejudiced against other identities and create patterns of marginalisation. Prejudice may be argued to bring to the fore a variety of negative assumptions about the members of the identities against whom it is held. Using Giora’s (1997) terminology, prejudice makes certain detrimental assumptions very salient or increases the salience of those assumptions.

Consequently, they are amenable to being promptly accessed and effortlessly used as implicated premises in deductions, from which negative conclusions are straightforwardly derived. Those premises and conclusions spread throughout the members of the prejudiced and hegemonic group because, according to Sperber’s (1996) epidemiological model of culture, they are repeatedly transmitted or made public. This is possible thanks to two types of factors (Sperber 1996: 84):

Psychological factors, such as their relative ease of storage, the existence of other knowledge with which they can interact in order to generate cognitive effects –e.g. additional negative conclusions pertaining to the members of the marginalised identity– or the existence of compelling reasons making the individuals in the group willing to transmit them –e.g. the desire to disempower and/or marginalise the members of an unprivileged group, to exclude them from certain domains of human activity, to secure a privileged position, etc.

Ecological factors, such as the repetition of the circumstances under which those premises and conclusions result in certain actions –e.g. denigration, disempowerment, marginalisation, exclusion, etc.– the availability of storage mechanisms other than the mind –e.g. written documents– or the existence of institutions that transmit and perpetuate those premises and conclusions, thus ensuring their continuity and availability.

Since the members of the dominating biased group find those premises and conclusions useful to their purposes and interests, they constantly reproduce them and, so to say, pass them on to the other members of the group or even on to individuals who do not belong to it. Using Sperber’s (1996) metaphor, repeated production and internalisation of those representations resembles the contagion of illnesses. As a result, those representations end up being part of the pool of cultural representations shared by the members of the group in question or other individuals.

The Imperative to Get Competence Correct

In social groups with an interest in denigrating and marginalising an identity, certain assumptions regarding the lexical inventories and conceptualising abilities of the epistemic agents with that identity may be very salient, or purposefully made very salient, with a view to ensuring that they are inferentially exploited as implicated premises that easily yield negative conclusions. In the case of average speakers’ lexical gaps and mistakes, assumptions concerning their performance and infelicities may also become very salient, be fed into inferential processes and result in prejudicial conclusions about their lexical and conceptual (in)competence.

Although utterance comprehension and information processing end upon completion of mutual parallel adjustment, for the informational load of utterances and the conclusions derivable from them to be added to an individual’s universe of beliefs, information must pass the filters of a series of mental mechanisms that target both informers and information itself, and check their believability and reliability. These mechanisms scrutinise various sources determining trust allocation: signs indicating certainty and trustworthiness –e.g. gestures, hesitation, nervousness, rephrasing, stuttering, eye contact, gaze direction, etc.–; the appropriateness, coherence and relevance of the dispensed information; (previous) assumptions about speakers’ expertise or authoritativeness in some domain; the socially distributed reputation of informers; and emotions, prejudices and biases (Origgi 2013: 227-233).

As a result, these mechanisms trigger a cautious and sceptical attitude known as epistemic vigilance, which in some cases enables individuals to avoid blind gullibility and deception (Sperber et al. 2010). In addition, these mechanisms monitor the correctness and adequateness of the interpretative steps taken and the inferential routes followed while processing utterances and information, and check for possible flaws at any of the tasks in mutual parallel adjustment –e.g. wrong assignment of reference, supply of erroneous implicated premises, etc.– which would prevent individuals from arriving at actually intended interpretations. Consequently, another cautious and sceptical attitude is triggered towards interpretations, which may be labelled hermeneutical vigilance (Padilla Cruz 2016).

If individuals do not perceive risks of malevolence or deception, or do not sense that they might have made interpretative mistakes, vigilance mechanisms are weakly or moderately activated (Michaelian 2013: 46; Sperber 2013: 64). However, their level of activation may be raised so that individuals exercise external and/or internal vigilance. While the former facilitates higher awareness of external factors determining trust allocation –e.g. cultural norms, contextual information, biases, prejudices, etc.– the latter facilitates distancing oneself from conclusions drawn at a particular moment, backtracking with a view to tracing their origin –i.e. the interpretative steps taken and the assumptions fed into inference– and assessing their potential consequences (Origgi 2013: 224-227).

Exercising only weak or moderate vigilance over the conclusions drawn upon perception of lexical lacunae or mistakes may account for their unfairness and for the subsequent wronging of individuals as regards their actual conceptual and lexical competence. Unawareness of the internal and external factors that may momentarily have hindered competence and the ensuing performance may cause perceivers of lexical gaps and errors to unquestioningly trust the assumptions that their interlocutors’ allegedly poor performance makes manifest, to rely on them, to supply them as implicated premises, to derive conclusions that do not do any justice to their interlocutors’ actual level of conceptual and lexical competence, and eventually to trust the appropriateness, adequacy or accuracy of those conclusions.

A higher alertness to the potential influence of those factors on performance would block access to the detrimental assumptions made manifest by their interlocutors’ performance, or would make perceivers of lexical infelicities reconsider the advisability of using those assumptions in deductions. If this were actually the case, perceivers would be deploying the processing strategy labelled cautious optimism, which enables them to question the suitability of certain deductions and to make alternative ones (Sperber 1994).

Conclusion

Relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004) does not need to be able to identify cases of conceptual competence injustice, but its notional apparatus and the machinery that it describes can satisfactorily account for the cognitive processes whereby conceptual competence injustices originate. In essence, prejudice and interests in denigrating members of specific identities or minorities favour the salience of certain assumptions about their incompetence, which, for a variety of psychological and ecological reasons, may already be part of the cultural knowledge of the members of prejudiced empowered groups. Those assumptions are subsequently supplied as implicated premises to deductions, which yield conclusions that undermine the reputation of the members of the identities or minorities in question. Ultimately, such conclusions may in turn be added to the cultural knowledge of the members of the biased hegemonic group.

The same process would apply to those cases wherein hearers unfairly wrong their interlocutors on the grounds of performance below alleged or expected standards, and are not vigilant enough of the factors that could have hindered it. That wronging may be alluded to by means of a somewhat loosened, broadened notion of ‘conceptual competence injustice’ which deprives it of one of its quintessential conditions: the existence of prejudice and interests in marginalising other individuals. Inasmuch as apparently poor performance may give rise to unfortunate, unfair judgements of speakers’ overall level of competence, those judgements could count as injustices. In a nutshell, this was the reason why I advocated for the incorporation of a ‘decaffeinated’ version of Anderson’s (2017a) notion into the field of linguistic pragmatics.

Contact details: mpadillacruz@us.es

References

Anderson, Derek. “Conceptual Competence Injustice.” Social Epistemology: A Journal of Knowledge, Culture and Policy 31, no. 2 (2017a): 210-223.

Anderson, Derek. “Relevance Theory and Conceptual Competence Injustice.” Social Epistemology Review and Reply Collective 6, no. 7 (2017b): 34-39.

Austin, John L. How to Do Things with Words. Oxford: Clarendon Press, 1962.

Bachman, Lyle F. Fundamental Considerations in Language Testing. Oxford: Oxford University Press, 1990.

Brown, Penelope, and Stephen C. Levinson. Politeness: Some Universals in Language Usage. Cambridge: Cambridge University Press, 1987.

Canale, Michael. “From Communicative Competence to Communicative Language Pedagogy.” In Language and Communication, edited by Jack C. Richards and Richard W. Schmidt, 2-28. London: Longman, 1983.

Carston, Robyn. Thoughts and Utterances. The Pragmatics of Explicit Communication. Oxford: Blackwell, 2002.

Celce-Murcia, Marianne, Zoltán Dörnyei, and Sarah Thurrell. “Communicative Competence: A Pedagogically Motivated Model with Content Modifications.” Issues in Applied Linguistics 5 (1995): 5-35.

Fricker, Miranda. Epistemic Injustice. Power & the Ethics of Knowing. Oxford: Oxford University Press, 2007.

Giora, Rachel. “Understanding Figurative and Literal Language: The Graded Salience Hypothesis.” Cognitive Linguistics 8 (1997): 183-206.

Grice, Herbert P. “Logic and Conversation.” In Syntax and Semantics. Vol. 3: Speech Acts, edited by Peter Cole and Jerry L. Morgan, 41-58. New York: Academic Press, 1975.

Hymes, Dell H. “On Communicative Competence.” In Sociolinguistics. Selected Readings, edited by John B. Pride and Janet Holmes, 269-293. Baltimore: Penguin Books, 1972.

Ifantidou, Elly. “Sentential Adverbs and Relevance.” UCL Working Papers in Linguistics 4 (1992): 193-214.

Jary, Mark. “Relevance Theory and the Communication of Politeness.” Journal of Pragmatics 30 (1998): 1-19.

Jary, Mark. “Two Types of Implicature: Material and Behavioural.” Mind & Language 28, no. 5 (2013): 638-660.

Medina, José. “The Relevance of Credibility Excess in a Proportional View of Epistemic Injustice: Differential Epistemic Authority and the Social Imaginary.” Social Epistemology: A Journal of Knowledge, Culture and Policy 25, no. 1 (2011): 15-35.

Michaelian, Kourken. “The Evolution of Testimony: Receiver Vigilance, Speaker Honesty and the Reliability of Communication.” Episteme 10, no. 1 (2013): 37-59.

Mustajoki, Arto. “A Speaker-oriented Multidimensional Approach to Risks and Causes of Miscommunication.” Language and Dialogue 2, no. 2 (2012): 216-243.

Origgi, Gloria. “Epistemic Injustice and Epistemic Trust.” Social Epistemology: A Journal of Knowledge, Culture and Policy 26, no. 2 (2013): 221-235.

Padilla Cruz, Manuel. “Vigilance Mechanisms in Interpretation: Hermeneutical Vigilance.” Studia Linguistica Universitatis Iagellonicae Cracoviensis 133, no. 1 (2016): 21-29.

Padilla Cruz, Manuel. “On the Usefulness of the Notion of ‘Conceptual Competence Injustice’ to Linguistic Pragmatics.” Social Epistemology Review and Reply Collective 6, no. 4 (2017a): 12-19.

Padilla Cruz, Manuel. “Interlocutors-related and Hearer-specific Causes of Misunderstanding: Processing Strategy, Confirmation Bias and Weak Vigilance.” Research in Language 15, no. 1 (2017b): 11-36.

Searle, John. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press, 1969.

Spencer-Oatey, Helen D. “Reconsidering Power and Distance.” Journal of Pragmatics 26 (1996): 1-24.

Sperber, Dan. “Understanding Verbal Understanding.” In What Is Intelligence? edited by Jean Khalfa, 179-198. Cambridge: Cambridge University Press, 1994.

Sperber, Dan. Explaining Culture. A Naturalistic Approach. Oxford: Blackwell, 1996.

Sperber, Dan. “Speakers Are Honest because Hearers Are Vigilant. Reply to Kourken Michaelian.” Episteme 10, no. 1 (2013): 61-71.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. Oxford: Blackwell, 1986.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. 2nd edition. Oxford: Blackwell, 1995.

Sperber, Dan, and Deirdre Wilson. “Beyond Speaker’s Meaning.” Croatian Journal of Philosophy 15, no. 44 (2015): 117-149.

Sperber, Dan, Fabrice Clément, Christophe Heintz, Olivier Mascaro, Hugo Mercier, Gloria Origgi, and Deirdre Wilson. “Epistemic Vigilance.” Mind & Language 25, no. 4 (2010): 359-393.

Wharton, Tim. Pragmatics and Non-verbal Communication. Cambridge: Cambridge University Press, 2009.

Wharton, Tim. “That Bloody so-and-so Has Retired: Expressives Revisited.” Lingua 175-176 (2016): 20-35.

Wilson, Deirdre, and Dan Sperber. “Relevance Theory.” In The Handbook of Pragmatics, edited by Larry Horn and Gregory Ward, 607-632. Oxford: Blackwell, 2004.

Wilson, Deirdre, and Tim Wharton. “Relevance and Prosody.” Journal of Pragmatics 38 (2006): 1559-1579.

[1] Following a relevance-theoretic convention, reference to the speaker will be made through the feminine third person singular personal pronoun, while reference to the hearer will be made through its masculine counterpart.

Author Information: Raimo Tuomela, University of Helsinki, raimo.tuomela@helsinki.fi

Tuomela, Raimo. “The Limits of Groups: An Author Replies.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 28-33.

The pdf of the article refers to specific page numbers. Shortlink: https://wp.me/p1Bfg0-3QM


In their critique, Corlett and Strobel (2017) discuss my 2013 book Social Ontology and comment on some of my views. In this reply I will respond to their three central criticisms, which I formulate here as follows:[1]

(1) Group members are said in my account to be required to ask for the group’s –and thus the other members’– permission to leave the group, and this seems to go against the personal moral autonomy of the members.

(2) My account does not focus on morally central matters such as personal autonomy, although it should.

(3) My moral notions are based on a utilitarian view of morality.

In this note I will show that claims (1)–(3) are not (properly) justified on closer scrutiny.

Unity Is What’s Missing In Our Lives

Below I will mostly focus on we-mode groups, that is, groups based on we-thinking, we-reasoning, a shared “ethos”, and consequent action as a unified group.[2] Ideally, such we-mode groups are autonomous (externally uncoerced) and hence free to decide about the ethos (viz. the central goals, beliefs, norms, etc.) of their group and to select its position holders in the case of an organized group. Inside the group (one with freely entered members) each member is supposed to be “socially” committed to the others to perform her part of the joint enterprise. (Intentions in general involve a commitment to carry out what is intended.)

The members of a we-mode group should be able to count on one another not to be let down. The goal of the joint activity typically will not be reached without the other members’ successful part performances (which often involve helping). When one enters a we-mode group it is by one’s own choice, but if the others cannot be trusted the whole project may be impossible to carry out (think of people building a bridge in their village).

The authors claim that my moral views are based on utilitarianism and hence on some kind of maximization of group welfare instead of emphasizing individual autonomy and the moral rights of individuals.[3] This is a complex matter, and I will here say only that there is room in my theory for both group autonomy and individual autonomy. The we-mode account states what it takes for people to act in the we-mode (see Tuomela, 2013, ch. 2). According to my account, the members have given up part of their individual autonomy to the group. From this it follows that solidarity with the other members is important. The members of a paradigmatic we-mode group should not let the others down. This is seen as a moral matter.

The Moral Nature of the Act

As to the moral implications of the present approach, when a group is acting intentionally it is as a rule responsible for what it does. But what can be said about the responsibility of a member? Basically, each member is responsible as a group member and also privately morally responsible for the performance of his part. (He could have left the group or expressed his divergent opinion and reasons.) Here we are discussing the properly moral, and not only the instrumental or quasi-moral, implications of group action and of the members’ actions.[4]

A member’s exiting a free (autonomous) group is in some cases a matter for the group to deal with: what sanctions does a group need for quitting members if their quitting endangers the whole endeavor? Of course the members may exit the group, but then they have to be prepared to suffer the (possibly) agreed-upon sanctions for quitting. Corlett and Strobel focus on the requirement of a permission to leave the group (see pp. 43-44 of Tuomela, 2013). It is up to the group to decide about suitable sanctions; for example, the members may be expected to follow the majority here. (See ch. 5 of Tuomela, 2013.)

Furthermore, those who join the group should of course be clear about what kind of group they are joining. If they later on wish to give up their membership, they can leave upon taking on the sanctions, if any, that the group has decided upon. My critics rightfully wonder about the expression “permission to leave the group”. My formulations seem to have misleadingly suggested to them that the members are (possibly) trapped in the we-mode group. Note that on p. 44 of my 2013 book I speak of cases where leaving the group harms the other members and propose that sometimes merely informing the members might be appropriate.

How can “permission from the group” be best understood? Depending on the case at hand, it might involve asking the individual members if they allow the person in question to leave without sanctions. But this sounds rather silly especially in the case of large groups. Rather, the group may formulate procedures for leaving the group. This would involve institutionalizing the matter and the possible sanctioning system. In the case of paradigmatic autonomous we-mode groups the exit generally is free in the sense that the group itself rather than an external authority decides about procedures for exiting the group (see appendix 1 to chapter 2 of Tuomela, 2013). However, those leaving the group might have to face group-based sanctions if they by their leaving considerably harm the others.

In my account the members of a well-functioning we-mode group can be said, somewhat figuratively, to have given up part of their autonomy and self-determination to their we-mode group. Solidarity between the members is important: the members should not let the others down – or else the group’s project (viz. the members’ joint project) will not be successful. This is a non-utilitarian moral matter – the members are to keep together and not let each other down. Also for practical reasons it is desirable that the members stick together, on pain of not achieving their joint goal – e.g. building a bridge in their village.

People do retain their personal (moral) autonomy in the above kind of cases where entering and exiting a we-mode group is free (especially free from external authorities) or where, in some cases, the members have to satisfy special conditions accepted by their group. I have suggested elsewhere that dissenting members should either leave the group or try to change the ethos of the group. As said above, in specific cases of ethos-related matters the members may use a voting method, e.g. majority voting, even if the minority may want to challenge the result.[5]

Questions of Freedom

According to Corlett and Strobel, freedom of expression is largely blocked and the notion of individual autonomy is dubious in my account (see p. 9 of their critical paper). As was pointed out above, the members may leave the group freely or via an agreed-upon procedure. Individual autonomy is thwarted only to the extent needed for performing one’s part, but such performance is the whole point of participation in the first place. Of course the ethos may be discussed along the way, and changes may be introduced if the members – or e.g. the majority of them or another “suitable” number of them – agree. The members enter the group freely, by their own will and through the group’s entrance procedures, and may likewise leave the group through collectively agreed-upon procedures (if such exist).

As we know, autonomy is a concept much used in everyday life, outside moral philosophy. In my account it is used in “autonomous groups”, in the simple sense that the group can make its own decisions about ethos, division of tasks, and conditions for entering and exiting the group, without coercion by an external authority. Basically, only the autonomous we-mode group can, through its members’ decision, make rules for how people are allowed to join or leave the group.[6]

Corlett and Strobel’s critique that the members of autonomous we-mode groups have no autonomy (in the moral sense) in my account cannot be directed towards the paradigmatic case of groups with free entrance, where the group members decide among themselves what is to be done by whom and how to arrange for the situation of a member wanting to leave the group, maybe in the middle of a critical situation. Of course, a member cannot always do as he chooses in situations of group action. A joint goal is at stake, and one’s letting the others down when they have a good reason to count on one would be detrimental to everyone’s goal achievement. Also, letting the others down is at least socially and morally condemnable.

When people have good reason to drop out, having changed their mind or finding that the joint project is morally dubious, they can exit according to the relevant rules (if such exist in the group). The feature criticized by the present authors – that “others’ permission is required” – is due to my unfortunate formulation. What is meant is that in some cases there should be some kind of procedure in the group for leaving. The group members are socially committed to each other to further the ethos, as well as committed to the ethos itself. The social commitment has, of course, the effect that each member looks to the others for cooperative actions and attitudes and has a good reason to do so.

My critics suggest that the members should seek support from the others – indeed, this seems to be what the assumed solidarity of we-mode groups can be taken to provide. However, what they mean could be a procedure that makes the ethos more attractive to wavering members and leads to their renewed support of the ethos, instead of pressuring them to stay in a group with an ethos that no longer interests them. Of course, the ethos may be presented in new ways, but there still may be situations where members want to leave, and they have a right to leave following the agreed-upon procedures. Informing the group in due time, so that the group can find compensating measures, is what a member who quits can and should minimally do. The authors discuss examples where heads of states and corporations want to resign. It is typically possible to resign according, e.g., to the group’s exit rules, if such exist.

Follow the Leader

On page 11 the authors criticize the we-mode account for the fact that non-operative members ought to accept what the operative leaders decide. They claim that a state like the U.S., on the contrary, allows, and in some situations even asks, the citizens to protest. They are, of course, right in their claims concerning special cases. Naturally there will sometimes be situations where protest is called for. The dissidents may then win and the government (or what have you) will change its course of action. Even the ethos of the group may sometimes have to be reformulated.

Gradual development occurs also in social groups and organizations; the ethos often evolves through dissident actions. When the authorized operatives act according to what they deem a feasible way, they do what they were chosen to do. If non-operatives protest due to immoral actions of the operatives, they do the right thing morally; but if the operatives act according to the ethos, they are doing their job, although they should have chosen a moral way to achieve the goal. The protest of the non-operatives may have an effect. On the other hand, note that even Mafia groups may act in the we-mode and do so in immoral ways, in accordance with their own agenda.

The authors discuss yet another kind of example of exiting the group, where asking permission would seem out of place: a marriage. If a married couple is taken to be a we-mode group, the parties would have to agree upon exit conditions – if marriage were not an institutionalized and codified concept, which it nevertheless usually is. As an institution it is regulated in various ways depending on the culture. The critique summarized by the authors on page 12 has been met thus far. It seems that they have been fixated on the formulation that “members cannot leave the group without the permission from the other members.” To be sure, my view is that group members cannot just walk out on the others without taking any measures to ease the detrimental effects of their defection. Whether it is permission, compensation or an excuse depends on the case. In protesting we have a different story: dissidents often have good reasons to protest, and sometimes they just want to change the ethos instead of leaving.

It’s Your Prerogative

At the end of their critique the authors suggest that I should include in my account a moral prerogative for the members to seek the support of other group members as a courtesy to those members and the group. I have no objection to that. Once more, the expression “permission to leave the group” has been an unfortunate choice of words. It would have been better, for example, to speak of a member’s being required to inform the others that one has to quit and to be ready to suffer possible sanctions for letting the others down and perhaps causing the whole project to collapse.

However, dissidents should have the right to protest. Those who volunteer to join a group with a specific ethos cannot always foresee whether the ethos allows for immoral or otherwise unacceptable courses of action. Finally, my phrase “free entrance and exit” may have been misunderstood. As pointed out, the expression refers to the right of the members to enter and exit instead of being forced to join a group and remain there. To emphasize once more, it is in this way that the members of we-mode groups are autonomous. Also, there is no dictator who steers the ethos formation and the choice of position holders. However, although the members may jointly arrange their group life freely, each member is not free to do whatever he chooses when he acts in the we-mode. We-mode acting involves solidary collective acting by the members according to the ethos of the group.

In this note I have responded to the main criticisms (1)–(3) by Corlett and Strobel (2017) and argued that they do not damage my theory, at least not in any serious way. I wish to thank my critics for their thoughtful critical points.

Contact details: raimo.tuomela@helsinki.fi

References

Corlett, A., and J. Strobel. “Raimo Tuomela’s Social Ontology.” Social Epistemology 31, no. 6 (2017): 1-15.

Schmid, H.-B. “On Not Doing One’s Part.” In Facets of Sociality, edited by N. Psarros and K. Schulte-Ostermann, 287-306. Frankfurt: Ontos Verlag, 2007.

Tuomela, R. The Importance of Us: A Philosophical Study of Basic Social Notions. Stanford: Stanford University Press, 1995.

Tuomela, R. The Philosophy of Sociality. Oxford: Oxford University Press, 2007.

Tuomela, R. Social Ontology. New York: Oxford University Press, 2013.

Tuomela, R., and P. Mäkelä. “Group Agents and Their Responsibility.” Journal of Ethics 20 (2016): 299-316.

Tuomela, R., and M. Tuomela. “Acting As a Group Member and Collective Commitment.” Protosociology 18 (2003): 7-65.

[1] Acknowledgement. I wish to thank my wife Dr. Maj Tuomela for important help in writing this paper.

[2] See Tuomela (2007) and (2013) for the above notions.

[3] I speak of utilities only in game-theoretic contexts. (My moral views are closer to pragmatism and functionalism than to utilitarianism.)

[4] See e.g. Tuomela and Mäkelä (2016) for a group’s and group members’ moral responsibility. Also see pp. 37 and 41 of Tuomela (2013) and chapter 10 of Tuomela (2007).

[5] As to dissidents, I have discussed the notion briefly in my 1995 book and in a paper published in 2003 with Maj Tuomela (see the references). Furthermore, Hans Bernhard Schmid discusses dissidents in we-mode groups in his article “On Not Doing One’s Part” in Psarros and Schulte-Ostermann (eds.), Facets of Sociality, Ontos Verlag, 2007, pp. 287-306.

[6] Groups that are dependent on an external agent (e.g. a dictator, the owner of a company or an officer commanding an army unit) may lack the freedom to decide about what they should be doing and which positions they should have, and the members may be forced to join a group that they cannot exit from. My notion of “autonomous groups” refers to groups that are free to decide about their own matters, e.g. entrance and exit (possibly including sanctions). Personal moral autonomy in such groups is retained by the possibility of applying for entrance and exiting upon taking on possible sanctions, influencing the ethos, or protesting. The upshot is that a person functioning in a paradigmatic we-mode group should obey the possible restrictions that the group has set for exiting the group and be willing to suffer agreed-upon sanctions. Such a we-mode group is assumed to have coercion-free entrance and also free exit – as specified in Appendix 1 to Chapter 2 of my 2013 book. What is meant is that no external authority is coercing people to join and to remain in the group. A completely different matter is the case of a Mafia group or an army unit; the latter may be a unit that cannot be freely entered and exited. Even in these cases people may act in the we-mode. In some non-autonomous groups, like a business company, the shareholders decide about all central matters and the workers get paid. Members may enter if they are chosen to join and exit only according to specific rules.

Author Information: Susann Wagenknecht, Aarhus University, su.wagen@ivs.au.dk

Wagenknecht, Susann. “Four Asymmetries Between Moral and Epistemic Trustworthiness.” Social Epistemology Review and Reply Collective 3, no. 6 (2014): 82-86.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1uJ


Questions of how the epistemic and the moral, typically conceived of as non-epistemic, are intertwined in the creation and corroboration of scientific knowledge have spurred long-standing debates (see, e.g., the debate on epistemic and non-epistemic values of theory appraisal in Rudner 1953, Longino 1990 and Douglas 2000). To unravel the intricacies of epistemic and moral aspects of science, it seems, is a paradigmatic riddle in the Philosophy and Social Epistemology of Science. So, when philosophers discuss the character of trust and trustworthiness as a personal attribute in scientific practice, the moral-epistemic intricacies of trust are again fascinating the philosophical mind.