
Author Information: Kamili Posey, Kingsborough College, Kamili.Posey@kbcc.cuny.edu.

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-15.

Kamili Posey’s article was posted over two instalments. You can read the first here, but the pdf of the article includes the entire piece, and gives specific page references. Shortlink: https://wp.me/p1Bfg0-41k


 

In the previous piece, I outlined some concerns with philosophers, and particularly philosophers of social science, assuming the success of implicit interventions into implicit bias. Motivated by a pointed note by Jennifer Saul (2017), I aimed to briefly go through some of the models lauded as offering successful interventions and, in essence, “get out of the armchair.”

(IAT) Models and Egalitarian Goal Models

In this final piece, I go through the last two models, Glaser and Knowles’ (2007) and Blair et al.’s (2001) (IAT) models and Moskowitz and Li’s (2011) egalitarian goal model. I reiterate that this is not an exhaustive analysis of such models nor is it intended as a criticism of experiments pertaining to implicit bias. Mostly, I am concerned that the science is interesting but that the scientism – the application of tentative results to philosophical projects – is less so. It is from this point that I proceed.

Like Mendoza et al.’s (2010) implementation intentions, Glaser and Knowles’ (2007) (IMCP) aims to capture implicit motivations that are capable of inhibiting automatic stereotype activation. Glaser and Knowles measure (IMCP) in terms of an implicit negative attitude toward prejudice, or (NAP), and an implicit belief that oneself is prejudiced, or (BOP). This is done by retooling the (IAT) to fit both (NAP) and (BOP): “To measure NAP we constructed an IAT that pairs the categories ‘prejudice’ and ‘tolerance’ with the categories ‘bad’ and ‘good.’ BOP was assessed with an IAT pairing ‘prejudiced’ and ‘tolerant’ with ‘me’ and ‘not me.’”[1]

Study participants were then administered the Shooter Task, the (IMCP) measures, and the Race Prejudice (IAT) and Race-Weapons Stereotype (RWS) tests in a fixed order. They predicted that (IMCP) as an implicit goal for those high in (IMCP) “should be able to short-circuit the effect of implicit anti-Black stereotypes on automatic anti-Black behavior.”[2] The results seemed to suggest that this was the case. Glaser and Knowles found that study participants who viewed prejudice as particularly bad “[showed] no relationship between implicit stereotypes and spontaneous behavior.”[3]

There are a few considerations missing from the evaluation of the study results. First, with regard to the Shooter Task, Glaser and Knowles (2007) found that “the interaction of target race by object type, reflecting the Shooter Bias, was not statistically significant.”[4] That is, the strength of the relationship that Correll et al. (2002) found between study participants and the (high) likelihood that they would “shoot” at black targets was not found in the present study. Additionally, they note that they “eliminated time pressure” from the task itself. Although it was not suggested that this impacted the usefulness of the measure of Shooter Bias, it is difficult to imagine that it did not do so. To this, they footnote the following caveat:

Variance in the degree and direction of the stereotype endorsement points to one reason for our failure to replicate Correll et al.’s (2002) typically robust Shooter Bias effect. That is, our sample appears to have held stereotypes linking Blacks and weapons/aggression/danger to a lesser extent than did Correll and colleagues’ participants. In Correll et al. (2002, 2003), participants one SD below the mean on the stereotype measure reported an anti-Black stereotype, whereas similarly low scorers on our RWS IAT evidenced a stronger association between Whites and weapons. Further, the adaptation of the Shooter Task reported here may have been less sensitive than the procedure developed by Correll and colleagues. In the service of shortening and simplifying the task, we used fewer trials, eliminated time pressure and rewards for speed and accuracy, and presented only one background per trial.[5]

Glaser and Knowles claimed that the interaction of the (RWS) with the Shooter Task results proved “significant.” However, if the Shooter Bias failed to materialize (in the standard Correll et al. way) with study participants, it is difficult to see how the (RWS) was measuring anything except itself, generally speaking. This is further complicated by the fact that the interaction between the Shooter Bias and the (RWS) revealed “a mild reverse stereotype associating Whites with weapons (d = -0.15) and a strong stereotype associating Blacks with weapons (d = 0.83), respectively.”[6]

Recall that Glaser and Knowles (2007) aimed to show that participants high in (IMCP) would be able to inhibit implicit anti-black stereotypes and thus inhibit automatic anti-black behaviors. Using (NAP) and (BOP) as proxies for implicit control, participants high in (NAP) and moderate in (BOP) – as those with moderate (BOP) will be motivated to avoid bias – should show the weakest association between (RWS) and Shooter Bias. Instead, the lowest levels of Shooter Bias were seen in “low NAP, high BOP, and low RWS” study participants, or those who do not disapprove of prejudice, would describe themselves as prejudiced, and also showed the lowest levels of (RWS).[7]

They noted that neither “NAP nor BOP alone was significantly related to the Shooter Bias,” but “the influence of RWS on Shooter Bias remained significant.”[8] In fact, greater bias was actually found at higher (NAP) and (BOP) levels.[9] This bias seemed to map onto the initial Shooter Task results. It is most likely that (RWS) was the most important measure in this study for assessing implicit bias, not, as the study claimed, for assessing implicit motivation to control prejudice.

What Kind of Bias?

It is also not clear whether the (RWS) was capturing implicit bias rather than explicit bias in this study. At the point at which study participants were tasked with the (RWS), automatic stereotype activation may have been inhibited just in virtue of study participants’ prior involvement in the Shooter Task and the (IAT) assessments regarding race-related prejudice. That is, race-sensitivity was brought to consciousness by the sequencing of the test process.

Although we cannot get into the heads of the study participants, this counter-explanation seems a compelling possibility: the sequential tasks involved in the study captured study participants’ ability to increase focus and conscious attention to the race-related (IAT) test. Additionally, it is possible that some study participants could both cue and follow their own conscious internal commands: “If I see a black face, I won’t judge!” Consider that this is exactly how implementation intentions work.

Consider that this is also how Armageddon chess and other speed strategy games work. In Park et al.’s (2008) follow-up study on (IMCP) and cognitive depletion, they retreat somewhat from their initial claims about the implicit nature of (IMCP):

We cannot state for certain that our measure of IMCP reflects a purely nonconscious construct, nor that differential speed to “shoot” Black armed men vs. White armed men in a computer simulation reflects purely automatic processes. Most likely, the underlying stereotypes, goals, and behavioral responses represent a blend of conscious and nonconscious influences…Based on the results of the present study and those of Glaser and Knowles (2008), it would be premature to conclude that IMCP is a purely and wholly automatic construct, meeting the “four horsemen” criteria (Bargh, 1990). Specifically, it is not yet clear whether high IMCP participants initiate control of prejudice without intention; whether implicit control of prejudice can itself be inhibited, if for some reason someone wanted to; nor whether IMCP-instigated control of spontaneous bias occurs without awareness.[10]

If the (IMCP) potentially measures low-level conscious attention, this makes the question of what implicit measurements actually measure in the context of sequential tasks all the more important. In the two final examples, Blair et al.’s (2001) study on the use of counterstereotype imagery and Moskowitz and Li’s (2011) study on the use of counterstereotype egalitarian goals, we are again confronted with the issue of sequencing. In the study by Moskowitz and Li, study participants were asked to write down an example of a time when “they failed to live up to the ideal specified by an egalitarian goal, and to do so by relaying an event relating to African American men.”[11]

They were then given a series of computerized LDTs (lexical decision tasks) and primes involving photographs of black and white faces and stereotypical and non-stereotypical attributes of black people (crime, lazy, stupid, nervous, indifferent, nosy). Over a series of four experiments, Moskowitz and Li found that when egalitarian goals were “accessible,” study participants were able to successfully generate stereotype inhibition. Blair et al. asked study participants to use counterstereotypical (CS) gender imagery over a series of five experiments, e.g., “Think of a strong, capable woman,” and then administered a series of implicit measures, including the (IAT).

Similar to Moskowitz and Li (2011), Blair et al. (2001) found that (CS) gender imagery was successful in reducing implicit gender stereotypes, leaving “little doubt that the CS mental imagery per se was responsible for diminishing implicit stereotypes.”[12] In both cases, the study participants were explicitly called upon to focus their attention on experiences and imagery pertaining to negative stereotypes before the implicit measures, i.e., tasks, were administered. Again, it is not clear that the implicit measures measured their supposed target.

In the case of Moskowitz and Li’s (2011) experiment, the study participants began by relating moments in their lives where they failed to live up to their goals. However, those goals can only be understood within a particular social and political framework where holding negatively prejudicial beliefs about African-American men is often explicitly judged harshly, even if not implicitly so. Given this, we might assume that the study participants were compelled into a negative affective state. But does this matter? As suggested by the study by Monteith (1993), and the later study by Amodio et al. (2007), guilt can be a powerful tool.[13]

Questions of Guilt

If guilt was produced during the early stages of the experiment, it may have also participated in the inhibition of stereotype activation. Moskowitz and Li (2011) noted that “during targeted questioning in the debriefing, no participants expressed any conscious intent to inhibit stereotypes on the task, nor saw any of the tasks performed during the computerized portion of the experiment as related to the egalitarian goals they had undermined earlier in the session.”[14]

But guilt does not have to be conscious for it to produce effects. The guilt produced by recalling a moment of negative bias could be part and parcel of a larger feeling of moral failure. Moskowitz and Li needed to adequately disambiguate competing implicit motivations for stereotype inhibition before arriving at a definitive conclusion. This, I think, is a limitation of the study.

However, the same case could be made for (CS) imagery. Blair et al. (2001) noted that it is, in fact, possible that they too missed competing motivations and competing explanations for stereotype inhibition. In particular, they suggested that, by emphasizing counterstereotypes, the researchers “may have communicated the importance of avoiding stereotypes and increased their motivation to do so.”[15] Still, the researchers dismissed the possibility that this would lead to better (faster, more accurate) performance on the (IAT), but that is merely asserting that the (IAT) must measure exactly what the (IAT) claims it does. Fast, accurate, and conscious measures are excluded from that claim. Complicated internal motivations are excluded from that claim.

But on what grounds? Consider Fiedler et al.’s (2006) argument that the (IAT) is susceptible to faking and strategic processing, or Brendl et al.’s (2001) argument that it is not possible to infer a single cause from (IAT) results, or Fazio and Olson’s (2003) claim that “the IAT has little to do with what is automatically activated in response to a given stimulus.”[16]

These studies call into question the claim that implicit measures like the (IAT) can measure implicit bias in the clear, problem-free manner that is often suggested in the literature. Implicit interventions into implicit bias that utilize the (IAT) are difficult to support for this reason. Implicit interventions that utilize sequential (IAT) tasks are also difficult to support for this reason. Of course, this is also a live debate, and the problems I have discussed here are far from the only ones that plague this type of research.[17]

That said, when it comes to this research we are too often left wondering whether the measure itself is measuring the right thing. Are we capturing implicit bias or some other socially generated phenomenon? Do the measured changes we see in study results reflect the validity of the instrument or the cognitive maneuverings of study participants? These are all critical questions that need sussing out. The interim result is that the target conclusion, that implicit interventions will lead to reductions in real-world discrimination, moves further away.[18] We find evidence of this in Forscher et al.’s (2018) meta-analysis of 492 implicit interventions:

We found little evidence that changes in implicit measures translated into changes in explicit measures and behavior, and we observed limitations in the evidence base for implicit malleability and change. These results produce a challenge for practitioners who seek to address problems that are presumed to be caused by automatically retrieved associations, as there was little evidence showing that change in implicit measures will result in changes for explicit measures or behavior…Our results suggest that current interventions that attempt to change implicit measures will not consistently change behavior in these domains. These results also produce a challenge for researchers who seek to understand the nature of human cognition because they raise new questions about the causal role of automatically retrieved associations…To better understand what the results mean, future research should innovate with more reliable and valid implicit, explicit, and behavioral tasks, intensive manipulations, longitudinal measurement of outcomes, heterogeneous samples, and diverse topics of study.[19]

Finally, what I take to be behind Alcoff’s (2010) critical question at the beginning of this piece is a kind of skepticism about how individuals can successfully tackle implicit bias through either explicit or implicit practices without the support of the social spaces, communities, and institutions that give shape to our social lives. Implicit bias is related to the culture one is in and the stereotypes it produces. So instead of insisting on changing people to reduce stereotyping, what if we insisted on changing the culture?

As Alcoff notes: “We must be willing to explore more mechanisms for redress, such as extensive educational reform, more serious projects of affirmative action, and curricular mandates that would help to correct the identity prejudices built up out of faulty narratives of history.”[20] This is an important point. It is a point that philosophers who work on implicit bias would do well to take seriously.

Science may not give us the way out of racism, sexism, and gender discrimination. At the moment, it may only give us tools for seeing ourselves a bit more clearly. Further claims about implicit interventions appear as willful scientism. They reinforce the belief that science can cure all of our social and political ills. But this is magical thinking.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, p. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

[2] Glaser, Jack and Knowles, Eric D. (2007), p. 167.

[3] Glaser, Jack and Knowles, Eric D. (2007), p. 170.

[4] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[5] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[6] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[7] Glaser, Jack and Knowles, Eric D. (2007), p. 169. Of this “rogue” group, Glaser and Knowles note: “This group had, on average, a negative RWS (i.e., rather than just a low bias toward Blacks, they tended to associate Whites more than Blacks with weapons; see footnote 4). If these reversed stereotypes are also uninhibited, they should yield reversed Shooter Bias, as observed here” (169).

[8] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[9] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[10] Sang Hee Park, Jack Glaser, and Eric D. Knowles. (2008). “Implicit Motivation to Control Prejudice Moderates the Effect of Cognitive Depletion on Unintended Discrimination,” in Social Cognition, Vol. 26, No. 4, p. 416.

[11] Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

[12] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

[13] Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30

[14] Moskowitz, Gordon and Li, Peizhong (2011), p. 108.

[15] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001), p. 838.

[16] Fiedler, Klaus, Messner, Claude, and Bluemke, Matthias. (2006). “Unresolved Problems with the ‘I’, the ‘A’, and the ‘T’: A Logical and Psychometric Critique of the Implicit Association Test (IAT),” in European Review of Social Psychology, 12, pp. 74-147. Brendl, C. M., Markman, A. B., & Messner, C. (2001). “How Do Indirect Measures of Evaluation Work? Evaluating the Inference of Prejudice in the Implicit Association Test,” in Journal of Personality and Social Psychology, 81(5), pp. 760-773. Fazio, R. H., and Olson, M. A. (2003). “Implicit Measures in Social Cognition Research: Their Meaning and Uses,” in Annual Review of Psychology 54, pp. 297-327.

[17] There is significant debate over the issue of whether the implicit bias that (IAT) tests measure translate into real-world discriminatory behavior. This is a complex and compelling issue. It is also an issue that could render moot the (IAT) as an implicit measure of anything full stop. Anthony G. Greenwald, Mahzarin R. Banaji, and Brian A. Nosek (2015) write: “IAT measures have two properties that render them problematic to use to classify persons as likely to engage in discrimination. Those two properties are modest test–retest reliability (for the IAT, typically between r = .5 and r = .6; cf., Nosek et al., 2007) and small to moderate predictive validity effect sizes. Therefore, attempts to diagnostically use such measures for individuals risk undesirably high rates of erroneous classifications. These problems of limited test-retest reliability and small effect sizes are maximal when the sample consists of a single person (i.e., for individual diagnostic use), but they diminish substantially as sample size increases. Therefore, limited reliability and small to moderate effect sizes are not problematic in diagnosing system-level discrimination, for which analyses often involve large samples” (557). However, Oswald et al. (2013) argue that “IAT scores correlated strongly with measures of brain activity but relatively weakly with all other criterion measures in the race domain and weakly with all criterion measures in the ethnicity domain. IATs, whether they were designed to tap into implicit prejudice or implicit stereotypes, were typically poor predictors of the types of behavior, judgments, or decisions that have been studied as instances of discrimination, regardless of how subtle, spontaneous, controlled, or deliberate they were. 
Explicit measures of bias were also, on average, weak predictors of criteria in the studies covered by this meta-analysis, but explicit measures performed no worse than, and sometimes better than, the IATs for predictions of policy preferences, interpersonal behavior, person perceptions, reaction times, and microbehavior. Only for brain activity were correlations higher for IATs than for explicit measures…but few studies examined prediction of brain activity using explicit measures. Any distinction between the IATs and explicit measures is a distinction that makes little difference, because both of these means of measuring attitudes resulted in poor prediction of racial and ethnic discrimination” (182-183). For further details about this debate, see: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192 and Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

[18] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

[19] Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

[20] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.


If you consider the recent philosophical literature on implicit bias research, then you would be forgiven for thinking that the problem of successful interventions into implicit bias falls into the category of things that have been resolved. If you consider the recent social psychological literature on interventions into implicit bias, then you would come away with a similar impression. The claim is that implicit bias is epistemically harmful because we profess to believe one thing while our implicit attitudes tell a different story.

Strategy Models and Discrepancy Models

Implicit bias is socially harmful because it maps onto our real-world discriminatory practices, e.g., workplace discrimination, health disparities, racist police shootings, and identity-prejudicial public policies. Consider the results of Greenwald et al.’s (1998) Implicit Association Test. Consider also the results of Correll et al.’s (2002) “Shooter Bias.” If cognitive interventions are possible, and specifically implicit cognitive interventions, then they can help knowers implicitly manage automatic stereotype activation. But do these interventions lead to real-world reductions of bias?

Linda Alcoff (2010) notes that it is difficult to see how implicit, nonvolitional biases (e.g., those at the root of social and epistemic ills like race-based police shootings) can be remedied by explicit epistemic practices.[1] I would follow this by noting that it is equally difficult to see how nonvolitional biases can be remedied by implicit epistemic practices as well.

Jennifer Saul (2017) responds to Alcoff’s (2010) query by pointing to social psychological experiments conducted by Margo Monteith (1993), Jack Glaser and Eric D. Knowles (2007), Gordon B. Moskowitz and Peizhong Li (2011), Saaid A. Mendoza et al. (2010), Irene V. Blair et al. (2001), and Kerry Kawakami et al. (2005).[2] These studies suggest that implicit self-regulation of implicit bias is possible. Saul notes that philosophers with objections like Alcoff’s, and presumably like mine, ought “not just to reflect upon the problem from the armchair – at the very least, one should use one’s laptop to explore the internet for effective interventions.”[3]

But I think this recrimination rings rather hollow. How entitled are we to extrapolate from social psychological studies in the manner that Saul advocates? How entitled are we to assume the epistemic superiority of scientific research on racism, sexism, etc. over the phenomenological reporting of marginalized knowers? Lastly, how entitled are we to claims about the real-world applicability of these study results?[4] My guess is that the devil is in the details. My guess is also that social psychologists have not found the silver bullet for remedying implicit bias. But let’s follow Saul’s suggestion and not just reflect from the armchair.

A caveat: the following analysis is not intended to be an exhaustive or thorough refutation of what is ultimately a large body of social psychological literature. Instead, it is intended to cast a bit of doubt on how these models are used by philosophers as successful remedies for implicit bias. It is intended to cast a bit of doubt on the idea that remedies for racist, sexist, homophobic, and transphobic discrimination are merely a training session or reflective exercise away.

This type of thinking devalues the very real experiences of those who live through racism, sexism, homophobia, and transphobia. It devalues how pervasive these experiences are in American society and the myriad ways in which the effects of discrimination seep into the marrow of marginalized bodies and marginalized communities. Worse still, it implies that marginalized knowers who claim, “You don’t understand my experiences!” are compelled to contend with the hegemonic role of “Science” that continues to speak over their own voices and about their own lives.[5] But again, back to the studies.

Four Methods of Remedy

I break up the above studies into four intuitive model types: (1) strategy models, (2) discrepancy models, (3) (IAT) models, and (4) egalitarian goal models. (I am not a social scientist, so the operative word here is “intuitive.”) Let’s first consider Kawakami et al. (2005) and Mendoza et al. (2010) as examples of strategy models. Kawakami et al. used Devine and Monteith’s (1993) notion of a negative stereotype as a “bad habit” that a knower needs to “kick” to model strategies that aid in the inhibition of automatic stereotype activation, or the inhibition of “increased cognitive accessibility of characteristics associated with a particular group.”[6]

In a previous study, Kawakami et al. (2000) presented research participants with photographs of black individuals and white individuals, with stereotypical and non-stereotypical traits listed under each photograph, and asked them to respond “No” to stereotypical traits and “Yes” to non-stereotypical traits.[7] The study found that “participants who were extensively trained to negate racial stereotypes initially also demonstrated stereotype activation, this effect was eliminated by the extensive training. Furthermore, Kawakami et al. found that practice effects of this type lasted up to 24 h following the training.”[8]

Kawakami et al. (2005) used this training model to ground an experiment aimed at strategies for reducing stereotype activation in the preference of men over women for leadership roles in managerial positions. Despite the training, they found that there was “no difference between Nonstereotypic Association Training and No Training conditions…participants were indeed attempting to choose the best candidate overall, in these conditions there was an overall pattern of discrimination against women relative to men in recommended hiring for a managerial position (Glick, 1991; Rudman & Glick, 1999)” [emphasis mine].[9]

Substantive conclusions are difficult to draw from a single study, but one critical point is that learning occurred during the training while improved stereotype inhibition did not. What, exactly, are we to make of this result? Kawakami et al. (2005) claimed that “similar levels of bias in both the Training and No Training conditions implicates the influence of correction processes that limit the effectiveness of training.”[10] That is, they attributed the lack of influence of corrective processes to a variety of contributing factors that limited the effectiveness of the strategy itself.

Notice, however, that this does not implicate the strategy as a failed one. Most notably, Kawakami et al. found that “when people have the time and opportunity to control their responses [they] may be strongly shaped by personal values and temporary motivations, strategies aimed at changing the automatic activation of stereotypes will not [necessarily] result in reduced discrimination.”[11]

This suggests that although the strategies failed to reduce stereotype activation, they may still be helpful in limited circumstances “when impressions are more deliberative.”[12] One wonders under what conditions such impressions can be more deliberative. More than that, how useful are such limited-condition strategies for dealing with everyday life and everyday automatic stereotype activation?

Mendoza et al. (2010) tested the effectiveness of “implementation intentions” as a strategy to reduce the activation or expression of implicit stereotypes using the Shooter Task.[13] They tested both “distraction-inhibiting” implementation intentions and “response-facilitating” implementation intentions. Distraction-inhibiting intentions are strategies “designed to engage inhibitory control,” such as inhibiting the perception of distracting or biasing information, while “response-facilitating” intentions are strategies designed to enhance goal attainment by focusing on specific goal-directed actions.[14]

In the first study, Mendoza et al. asked participants to repeat the on-screen phrase, “If I see a person, then I will ignore his race!” in their heads and then type the phrase into the computer. This resulted in study participants having a reduced number of errors in the Shooter Task. But let’s come back to whether and how we might be able to extrapolate from these results. The second study compared a simple-goal strategy with an implementation intention strategy.

Study participants in the simple-goal strategy group were asked to follow the strategy, “I will always shoot a person I see with a gun!” and “I will never shoot a person I see with an object!” Study participants in the implementation intention strategy group were asked to use a conditional, if-then, strategy instead: “If I see a person with an object, then I will not shoot!” Mendoza et al. found that a response-facilitating implementation intention “enhanced controlled processing but did not affect automatic stereotyping processing,” while a distraction-inhibiting implementation intention “was associated with an increase in controlled processing and a decrease in automatic stereotyping processes.”[15]

How to Change Both Action and Thought

Notice that if the goal is to reduce automatic stereotype activation through reflexive control, only a distraction-inhibiting strategy achieved the desired effect. Notice also how the successful use of a distraction-inhibiting strategy may require a type of “non-messy” social environment unachievable outside of a laboratory experiment.[16] Or, as Mendoza et al. (2010) rightly note: “The current findings suggest that the quick interventions typically used in psychological experiments may be more effective in modulating behavioral responses or the temporary accessibility of stereotypes than in undoing highly edified knowledge structures.”[17]

The hope, of course, is that distraction-inhibiting strategies can help dominant knowers reduce automatic stereotype activation and response-facilitated strategies can help dominant knowers internalize controlled processing such that negative bias and stereotyping can be (one day) reflexively controlled as well. But these are only hopes. The only thing that we can rightly conclude from these results is that if we ask a dominant knower to focus on an internal command, they will do so. The result is that the activation of negative bias fails to occur.

This does not mean that the knower has reduced their internalized negative biases and prejudices or that they can continue to act on the internal commands in the future (in fact, subsequent studies reveal the effects are short-lived[18]). As Mendoza et al. also note: “In psychometric terms, these strategies are designed to enhance accuracy without necessarily affecting bias. That is, a person may still have a tendency to associate Black people with violence and thus be more likely to shoot unarmed Blacks than to shoot unarmed Whites.”[19] Despite hope for these strategies, there is very little to support their real-world applicability.

Hunting for Intuitive Hypocrisies

I would extend a similar critique to Margo Monteith’s (1993) discrepancy model. Monteith’s often-cited study uses two experiments to investigate prejudice-related discrepancies in the behaviors of low-prejudice (LP) and high-prejudice (HP) individuals and their ability to engage in self-regulated prejudice reduction. In the first experiment, (LP) and (HP) heterosexual study participants were asked to evaluate two law school applications, one for an implied gay applicant and one for an implied heterosexual applicant. Study participants “were led to believe that they had evaluated a gay law school applicant negatively because of his sexual orientation;” they were tricked into a “discrepancy-activated condition,” a condition at odds with their beliefs about their own prejudice.[20]

All of the study participants were then told that the applications were identical and that those who had rejected the gay applicant had done so because of the applicant’s sexual orientation. It is important to note that the applicants’ qualifications were not, in fact, identical: the gay applicant’s application materials were made to look worse than the heterosexual applicant’s materials in order to compel the rejection of the applicant.

Study participants were then provided a follow-up questionnaire and essay allegedly written by a professor who wanted to know (a) “why people often have difficulty avoiding negative responses toward gay men,” and (b) “how people can eliminate their negative responses toward gay men.”[21] Researchers asked study participants to record their reactions to the faculty essay and write down as much as they could remember about what they read. They were then told about the deception in the experiment and why such deception was incorporated into the study.

Monteith (1993) found that “low and high prejudiced subjects alike experienced discomfort after violating their personal standards for responding to a gay man, but only low prejudiced subjects experienced negative self-directed affect.”[22] Low prejudiced, (LP), “discrepancy-activated subjects,” also spent more time reading the faculty essay and “showed superior recall for the portion of the essay concerning why prejudice-related discrepancies arise.”[23]

The “discrepancy experience” generated negative self-directed affect, or guilt, for (LP) study participants with the hope that the guilt would (a) “motivate discrepancy reduction (e.g., Rokeach, 1973)” and (b) “serve to establish strong cues for punishment (cf. Gray, 1982).”[24] The idea here is that the experiment results point to the existence of a self-regulatory mechanism that can replace automatic stereotype activation with “belief-based responses;” however, “it is important to note that the initiation of self-regulatory mechanisms is dependent on recognizing and interpreting one’s responses as discrepant from one’s personal beliefs.”[25]

The discrepancy between what one is shown to believe and what one professes to believe (whether real or manufactured, as in the experiment) is aimed at getting knowers to engage in heightened self-focus due to negative self-directed affect. The goal of Monteith’s (1993) study was that such self-directed affect would lead to a kind of corrective belief-making process, one both less prejudicial and future-directed.

But if it’s guilt that’s doing the psychological work in these cases, then it’s not clear that knowers wouldn’t find other means of assuaging such feelings. Why wouldn’t it be the case that generating negative self-directed affect would point a knower toward anything they deem necessary to restore a more positive sense of self? To this, Monteith made the following concession:

Steele (1988; Steele & Liu, 1983) contended that restoration of one’s self-image after a discrepancy experience may not entail discrepancy reduction if other opportunities for self-affirmation are available. For example, Steele (1988) suggested that a smoker who wants to quit might spend more time with his or her children to resolve the threat to the self-concept engendered by the psychological inconsistency created by smoking. Similarly, Tesser and Cornell (1991) found that different behaviors appeared to feed into a general “self-evaluation reservoir.” It follows that prejudice-related discrepancy experiences may not facilitate the self-regulation of prejudiced responses if other means to restoring one’s self-regard are available [emphasis mine].[26]

Additionally, she noted that even if individuals are committed to reducing or “unlearning” automatic stereotyping, they “may become frustrated and disengage from the self-regulatory cycle, abandoning their goal to eliminate prejudice-like responses.”[27] Cognitive exhaustion, or cognitive depletion, can occur after intergroup exchanges as well. This may make it even less likely that a knower will continue to feel guilty, and to use that guilt to inhibit the activation of negative stereotypes, when they find themselves struggling cognitively. Conversely, there is also the issue of a kind of lab-based, or experiment-based, cognitive priming. I pick up this idea, along with the final two models of implicit interventions, in the next part.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

[2] Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

[3] Saul, Jennifer (2017), p. 466.

[4] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

[5] I owe this critical point in its entirety to the work of Lacey Davidson and her presentation, “When Testimony Isn’t Enough: Implicit Bias Research as Epistemic Injustice” at the Feminist Epistemologies, Methodologies, Metaphysics, and Science Studies (FEMMSS) conference in Corvallis, Oregon in 2018. Davidson notes that the work of philosophers of race and critical race theorists often takes a backseat to the projects of philosophers of social science who engage with the science of racialized attitudes as opposed to the narratives and/or testimonies of those with lived experiences of racism. Davidson describes this as a type of epistemic injustice against philosophers of race and critical race theorists. She also notes that philosophers of race and critical race theorists are often people of color while the philosophers of social science are often white. This dimension of analysis is important but unexplored. Davidson’s work highlights how epistemic injustice operates within the academy to perpetuate systems of racism and oppression under the guise of “good science.” Her argument was inspired by the work of Jeanine Weekes Schroer on the problematic nature of current research on stereotype threat and implicit bias in “Giving Them Something They Can Feel: On the Strategy of Scientizing the Phenomenology of Race and Racism,” Knowledge Cultures 3(1), 2015.

[6] Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69. See also: Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

[7] Kawakami et al. (2005), p. 69. See also: Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

[8] Kawakami et al. (2005), p. 69.

[9] Kawakami et al. (2005), p. 73.

[10] Kawakami et al. (2005), p. 73.

[11] Kawakami et al. (2005), p. 74.

[12] Kawakami et al. (2005), p. 74.

[13] The Shooter Task refers to a computer simulation experiment where images of black and white males appear on a screen holding a gun or a non-gun object. Study participants are given a short response time and tasked with pressing a button, or “shooting” armed images versus unarmed images. Psychological studies have revealed a “shooter bias” in the tendency to shoot black, unarmed males more often than unarmed white males. See: Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

[14] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

[15] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[16] A “messy environment” presents additional challenges to studies like the one discussed here. As Kees Keizer, Siegwart Lindenberg, and Linda Steg (2008) claim in “The Spreading of Disorder,” people are more likely to violate social rules when they see that others are violating the rules as well. I can only imagine that this is applicable to epistemic rules as well. I mention this here to suggest that the “cleanliness” of the social environment of social psychological studies such as the one by Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010) presents an additional obstacle in extrapolating the resulting behaviors of research participants to the public-at-large. Short of mass hypnosis, how could the strategies used in these experiments, strategies that are predicated on the noninterference of other destabilizing factors, be meaningfully applied to everyday life? There is a tendency in the philosophical literature on implicit bias and stereotype threat to outright ignore the limited applicability of much of this research in order to make critical claims about interventions into racist, sexist, homophobic, and transphobic behaviors. Philosophers would do well to recognize the complexity of these issues and to be more cautious about the enthusiastic endorsement of experimental results.

[17] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[18] Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[19] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[20] Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

[21] Monteith (1993), p. 474.

[22] Monteith (1993), p. 475.

[23] Monteith (1993), p. 477.

[24] Monteith (1993), p. 477.

[25] Monteith (1993), p. 477.

[26] Monteith (1993), p. 482.

[27] Monteith (1993), p. 483.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “Imagining a Different Political Economy.” Social Epistemology Review and Reply Collective 7, no. 9 (2018): 7-11.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40v

Image by Rachel Adams via Flickr / Creative Commons

 

One cannot ask for a kinder or more complimentary reviewer than Adam Riggio.[1] His main complaint about my book, The Quest for Prosperity, is that “Stylistically, the book suffers from a common issue for many new research books in the humanities and social sciences. Its argument loses some momentum as it approaches the conclusion, and ends up in a more modest, self-restrained place than its opening chapters promised.”

My opening examination of what I see as the misconceptions of some presuppositions used in political economy is a first, necessary step towards an examination of recent capitalist variants (heralded as the best prospects for the future organization of market exchanges) and towards the different approach to political economy offered at the end of the book. Admittedly, my vision of a radically reframed political economy, one that exposes some taken-for-granted concepts such as scarcity, human nature, competition, and growth, is an ambitious task, and perhaps, as Riggio suggests, I should attempt a more detailed articulation of the economy in a sequel.

However, this book does examine alternative frameworks, discusses in some detail what I consider misguided attempts to skirt the moral concerns I emphasize so as to retain the basic capitalist framework, and suggests principles that ought to guide a reframed political economy, one more attentive to the moral principles of solidarity and cooperation, empathy towards fellow members of a community, and a mindful avoidance of grave inequalities that are not limited to financial measures. In this respect, the book delivers more than Riggio suggests.

On Questions of Character

Riggio also argues that my

templates for communitarian alternatives to the increasingly brutal culture of contemporary capitalism share an important common feature that is very dangerous for [my] project. They are each rooted in civic institutions, material social structures for education, and socialization. Contrary to how [I] spea[k] of these four inspirations, civil rights and civic institutions alone are not enough to build and sustain a community each member of whom holds a communitarian ethical philosophy and moral sense deep in her heart.

This, too, is true to some extent. Just because I may successfully convince you that you are working with misconceptions about human nature, scarcity, and growth, for example, you may still not modify your behavior. Likewise, just because I may offer brilliant exemplars for how “civil rights and civic institutions” should be organized and legally enshrined does not mean that every member of the community will abide by them and behave appropriately.

Mean-spirited or angry individuals might spoil life for the more friendly and self-controlled ones, and Riggio is correct to point out that “a communitarian ethical philosophy and moral sense deep in [one’s] heart” are insufficient for overcoming the brutality of capitalist greed. But focusing on this set of concerns (rather than offering a more efficient or digitally sophisticated platform for exchanges), Riggio would agree, could be a good starting point, and might therefore encourage more detailed analyses of policies and the regulation of unfettered capitalist practices.

I could shirk my responsibility here and plead for cover under the label of a philosopher who lacks the expertise of a good old-fashioned social scientist or policy wonk who can advise how best to implement my proposals. But I set myself up to engage political economy in all its manifold facets, and Riggio is correct when he points out that my “analysis of existing institutions and societies that foster communitarian moralities and ethics is detailed enough to show promise, but unfortunately so brief as to leave us without guidance or strategy to fulfill that promise.”

But, when critically engaging not only the latest gimmicks being proposed under the capitalist umbrella (e.g., the gig economy or shared economies) but also their claims about freedom and equal opportunity, I was concerned to debunk pretenses so as to be able to place my own ideas within an existing array of possibilities. In that sense, The Quest for Prosperity is, indeed, more critique than manual, an immanent critique that accounts for what is already being practiced so as to point out inevitable weaknesses. My proposal was offered in broad outlines in the hope of enlisting the likes of Riggio to contribute more details that, over time, would fulfill such promises in a process that can only be, in its enormity, collaborative.

The Strength of Values

Riggio closes his review by saying that I

offered communitarian approaches to morality and ethics as solutions to those challenges of injustice. I think his direction is very promising. But The Quest for Prosperity offers only a sign. If his next book is to fulfill the promise of this one, he must explore the possibilities opened up by the following questions. Can communitarian values overcome the allure of greed? What kind of social, political, and economic structures would we need to achieve that utopian goal?

To be clear, my approach is as much Communitarian as it is Institutionalist, Marxist and heterodox, Popperian and postmodern; I prefer the more traditional terms socialism and communism as alternatives to capitalism in general and to my previous, more sanguine appeal to the notion of “postcapitalism.”

Still, Riggio homes in on an important point: since I insist on theorizing in moral and social (rather than monetary) terms, and since my concern is with views of human nature and the conditions under which we can foster a community of people who exchange goods and services, it stands to reason that the book be assessed in an ethical framework as well, concerned to some degree with how best to foster personal integrity, mutual empathy, and care. The book is as much concerned with debunking the moral pretenses of capitalism (from individual freedom and equal opportunity to happiness and prosperity, understood here in its moral and not financial sense) as with the moral underpinnings (and the educational and social institutions that foster them) of political economy.

In this sense, my book strives to be in line with Adam Smith’s (or even Marx’s) moral philosophy as much as with his political economy. The ongoing slippage from the moral to the political and economic is unavoidable: in such a register the very heart of my argument contends that financial strategies have to consider human costs and that economic policies affect humans as moral agents. But, to remedy social injustice we must deal with political economy, and therefore my book moves from the moral to the economic, from the social to the political.

Questions of Desire

I will respond to Riggio’s two concluding questions directly. The first deals with overcoming the allure of greed: in my view, this allure, as real and pressing as it is, remains socially conditioned, though perhaps linked to unconscious desires in the Freudian sense. Within the capitalist context, there is something more psychologically and morally complex at work that should be exposed (Smith and Marx, in their different analyses, appreciate this dimension of market exchanges and the framing of human needs and wants; later critics, as diverse as Herbert Marcuse and Karl Polanyi, continue along this path).

Wanting more of something—Father’s approval? Mother’s nourishment?—is different from wanting more material possessions or money (even though, in a good capitalist modality, the one seeps into the other or the one is offered as a substitute for the other). I would venture to say that a child’s desire for candy, for example (candy being an object of desire that is dispensed or withheld by parents), can be quickly satiated when enough is available—hence my long discussion in the book about (the fictions of) scarcity and (the realities of) abundance; the candy can stand for love in general or for food that satisfies hunger, although it is, in fact, neither; and of course the candy can be substituted by other objects of desire that can or cannot be satisfied. (Candy, of course, doesn’t have the socially symbolic value that luxury items, such as the iPhone, do for those already socialized.)

Only within a capitalist framework might one accumulate candy not merely to satisfy a sweet tooth or wish for a treat but also as a means to leverage later exchanges with others. This, I suggest, is learned behavior, not “natural” in the classical capitalist sense of the term. The reason for this lengthy explanation is that Riggio is spot on to ask about the allure of greed (given his mention of demand-side markets), because for many defenders of the faith, capitalism is nothing but a large-scale apparatus that satisfies natural human appetites (even though some of them are manufactured).

My arguments in the book are meant not only to undermine such claims but to differentiate between human activities, such as exchange and division of labor (historically found in families and tribes), and competition, greed, accumulation, and concentration of wealth that are specific to capitalism (and the social contract within which it finds psychological and legal protection). One can see, then, why I believe the allure of greed can be overcome through social conditioning and the reframing of human exchanges that satisfy needs and question wants.

Riggio’s concern over abuse of power, regardless of all the corrective structures proposed in the book, deserves one more response. Indeed, laws without enforcement are toothless. But, as I argue throughout the book, policies that attempt to deal with important social issues must deal with the economic features of any structure. What makes the Institutionalist approach to political economy informative is not only the recognition that economic ideals take on different hues when implemented in different institutional contexts, but that economic activity and behavior are culturally conditioned.

Instead of worrying here about a sequel, I’d like to suggest that there is already excellent work being done in the areas of human and civil rights (e.g., Michelle Alexander’s The New Jim Crow (2010) and Matthew Desmond’s Evicted (2016) chronicle the problems of capitalism in different sectors of the economy) so that my own effort is an attempt to establish a set of (moral) values against which existing proposals can be assessed and upon which (economic) policy reform should be built. Highlighting the moral foundation of any economic system isn’t a substitute for paying close attention to the economic system that surrounds and perhaps undermines it; rather, economic realities test the limits of the applicability of and commitment to such a foundation.

Contact details: rsassowe@uccs.edu

References

Riggio, Adam. “The True Shape of a Society of Friends.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 40-45.

Sassower, Raphael. The Quest for Prosperity. London, UK: Rowman & Littlefield, 2017.

[1] Special thanks to Dr. Denise Davis for her critical suggestions.

Author Information: Stephen John, Cambridge University, sdj22@cam.ac.uk

John, Stephen. “Transparency, Well-Ordered Science, and Paternalism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 30-33.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Zf

Image by Sergio Santos and http://nursingschoolsnearme.com, via Flickr / Creative Commons

 

Should a physician tell you that you have cancer, even if she thinks this would cause you needless distress? Of course she should! How, though, should she convey that news? Imagine three stylised options. Dr Knowsbest is certain you should have your cancer operated on, so tells you the news in a way which vividly highlights the horrors of cancer, but downplays the risk of an operation.

Dr Neutral, by contrast, simply lists all of the facts about your cancer, your prognosis, your possible treatment options, their likely benefits and risks and so on. Finally, Dr Sensitive reports only those aspects of your condition and those risks of surgery which she judges that you, given your values and interests, would want to know about.

Many Methods to Reveal

We can, I hope, all agree that Dr Knowsbest’s communicative strategies and choices are ethically problematic, because she acts in a paternalistic manner. By contrast, Dr Neutral does not act paternalistically. In this regard, at least, Dr Neutral’s strategies are ethically preferable to Dr Knowsbest’s strategies. What about the choice between Knowsbest and Sensitive? In one sense, Dr Sensitive acts paternalistically, because she controls and structures the flow of information with the aim of improving your well-being.

However, there is an important difference between Dr Sensitive and Dr Knowsbest; the former aims solely to improve your epistemic well-being, such that you can better make a choice which aligns with your own values, whereas the latter aims to influence or override your judgment. Knowsbest’s “moral paternalism” is wrong for reasons which are absent in the case of Sensitive’s “epistemic paternalism” (Ahlstrom-Vij, 2013).

Therefore, plausibly, both the Neutral and Sensitive strategies are ethically preferable to Knowsbest’s. What, though, of the choice between these two communicative strategies? First, I am not certain that it is even possible to report all the facts in a neutral way (for more, see below). Second, even if it is possible, Dr Sensitive’s strategy seems preferable; her strategy, if successful, positively promotes – as opposed to merely failing to interfere with – your ability to make autonomous choices.

At least at an abstract, ideal level, then, we have good reason to want informants who do more than merely list facts, but who are sensitive to their audiences’ epistemic situation and abilities and their evaluative commitments; we want experts who “well-lead” us. In my recent paper in Social Epistemology, I argued that certain widely-endorsed norms for science communication are, at best, irrelevant, and, at worst, dangerous (John 2018). We should be against transparency, openness, sincerity and honesty.

It’s a Bit Provocative

One way of understanding that paper is as following from the abstract ideal of sensitive communication, combined with various broadly sociological facts (for example, about how audiences identify experts). I understand why my article put Moore in mind of a paradigm case of paternalism. However, reflection on the hypothetical example suggests we should also be against “anti-paternalism” as a norm for science communication; not because Knowsbest’s strategy is fine, but, rather, because the term “paternalism” tends to bundle together a wide range of practices, not all of which are ethically problematic, and some of which promote – rather than hinder – audiences’ autonomy.

Beyond the accusation of paternalism, Moore’s rich and provocative response focuses on my scepticism about transparency. While I argued that a “folk philosophy of science” can lead audiences to distrust experts who are, in fact, trustworthy, he uses the example of HIV-AIDS activism to point to the epistemic benefits of holding scientists to account, suggesting that “it is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science”. I agree entirely that such a dynamic is possible; indeed, his example shows it does happen!

However, conceding this possibility does not show that we must endorse a norm of transparency, because, ultimately, the costs may still be greater than the benefits. Much here depends on the mechanisms by which transparency and engagement are enacted. Moore suggests one model for such engagement, via the work of “trust proxies”, such as ACT-UP. As he acknowledges, however, although proxies may be better-placed than lay-people to identify when science is flawed, we now create a new problem for the non-expert: to adapt a distinction from Goldman’s work, we must decide which “putative proxies” are “true proxies” (Goldman, 2001).

Plausibly, this problem is even harder than Goldman’s problem of distinguishing the “true experts” among the “putative experts”, because in the latter case we have some sense of the credentials and so on which signal experthood. Again, I am tempted to say, then, that it is unclear that transparency, openness or engagement will necessarily lead to better, rather than worse, socio-epistemic outcomes.

Knowledge From Observation and Practice

Does that mean my arguments against transparency are in the clear? No. First, many of the issues here turn on the empirical details; maybe careful institutional design can allow us to identify trustworthy trust-proxies, whose work promotes good science. Second, and more importantly, the abstract model of sensitive communication is an ideal. In practice, it is easy to fail to meet this ideal, in ways which undermine, rather than respect or promote, hearers’ autonomy.

For example, rather than tailor her communication to what her audiences do care about, Dr Sensitive might tailor what she says to what she thinks they ought to care about; as a result, she might leave out information which is relevant to their choices given their values, while including information which is irrelevant. An influential strain in recent philosophy of science suggests that non-epistemic value judgments do and must run deep in practices of justification; as such, even a bald report of what a study showed may, implicitly, encode or endorse value judgments which are not shared by the audience (Douglas, 2000).

Reporting claims when, and only when, they meet a certain confidence level may, for example, implicitly rely on assumptions about the relative disvalue of false positives and false negatives; in turn, it may be difficult to justify such assumptions without appeal to non-epistemic values (John, 2015). As such, even Dr Neutral may be unable to communicate in ways which are truly sensitive to her audience’s values. In short, it may be hard to hand over our epistemic autonomy to experts without also handing over our moral autonomy.

This problem means that trustworthy research requires more than that the researchers’ claims be true; the claims must also be, at least, neutral with respect to, and at best aligned with, audiences’ values. Plausibly, greater engagement and transparency may help ensure such value alignment. One might understand the example of ACT-UP along these lines: activist engagement ensured that scientists did “good science” not only in a narrow, epistemic sense of “good” – more, or more accurate, data and hypotheses were generated – but in a broader sense of being “well-ordered”, producing knowledge that better reflected the concerns and interests of the broader community (Kitcher, 2003).

Whether engagement improves epistemic outcomes narrowly construed is a contingent matter, heavily dependent on the details of the case. By contrast, engagement may be necessary for science to be “well-ordered”. In turn, transparency may be necessary for such engagement. At least, that is the possibility I would push were I to criticise my own conclusions in line with Moore’s concerns.

A Final Sting

Unfortunately, there is a sting in the tail. Developing effective frameworks for engagement and contestation may require us to accept that scientific research is not, and cannot be, fully “value free”. To the extent that such an assumption is a commitment of our “folk philosophy of science”, developing the kind of rigorous engagement which Moore wants may do as much to undermine as to promote our trust in true experts. Moore is surely right that the dynamics of trust and distrust are even more complex than my paper suggested; unfortunately, they might be even more complex again than he suggests.

Contact details: sdj22@cam.ac.uk

References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. Springer.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of science, 67(4), 559-579.

Goldman, A. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63(1), 85–110.

John, S. (2015). Inductive risk and the contexts of communication. Synthese, 192(1), 79-96.

John, S. (2018). Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75-87.

Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.

Author Information: Seungbae Park, Ulsan National Institute of Science and Technology, nature@unist.ac.kr

Park, Seungbae. “Philosophers and Scientists are Social Epistemic Agents.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 31-40.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yo


The style of examples common to epistemology, whether social or not, is often the innocuous, ordinary situation. But the most critical uses and misuses of knowledge and belief occur in all-too-ordinary situations as well. If scepticism about our powers to know and believe holds – or is at least held widely enough – then the most desperate political prisoner has lost her last glimmer of hope: truth.
Image by Hossam el-Hamalawy via Flickr / Creative Commons

 

In this paper, I reply to Markus Arnold’s comment and Amanda Bryant’s comment on my work “Can Kuhn’s Taxonomic Incommensurability be an Image of Science?” in Moti Mizrahi’s edited collection, The Kuhnian Image of Science: Time for a Decisive Transformation?.

Arnold argues that there is a gap between the editor’s expressed goal and the actual content of the book. Mizrahi states in the introduction that his book aims to increase “our understanding of science as a social, epistemic endeavor” (2018: 7). Arnold objects that it is “not obvious how the strong emphasis on discounting Kuhn’s incommensurability thesis in the first part of the book should lead to a better understanding of science as a social practice” (2018: 46). The first part of the volume includes my work. Admittedly, my work does not explicitly and directly state how it increases our understanding of science as a social enterprise.

Knowledge and Agreement

According to Arnold, an important meaning of incommensurability is “the decision after a long and futile debate to end any further communication as a waste of time since no agreement can be reached,” and it is this “meaning, describing a social phenomenon, which is very common in science” (Arnold, 2018: 46). Arnold has in mind Kuhn’s claim that a scientific revolution is completed not when opposing parties reach an agreement through rational argumentation but when the advocates of the old paradigm die of old age, which means that they do not give up on their paradigm until they die.

I previously argued that given that most recent past paradigms coincide with present paradigms, most present paradigms will also coincide with future paradigms, and hence “taxonomic incommensurability will rarely arise in the future, as it has rarely arisen in the recent past” (Park, 2018: 70). My argument entails that scientists’ decision to end further communications with their opponents has been and will be rare, i.e., such a social phenomenon has been and will be rare.

On my account, the opposite social phenomenon has been, and will continue to be, very common: scientists keep communicating with each other to reach agreement. Thus, my previous contention about the frequency of scientific revolutions increases our understanding of science as a social enterprise.

Let me now turn to Bryant’s comment on my criticism against Thomas Kuhn’s philosophy of science. Kuhn (1962/1970, 172–173) draws an analogy between the development of science and the evolution of organisms. According to evolutionary theory, organisms do not evolve towards a goal. Similarly, Kuhn argues, science does not develop towards truths. The kinetic theory of heat, for example, is no closer to the truth than the caloric theory of heat is, just as we are no closer to some evolutionary goal than our ancestors were. He claims that this analogy is “very nearly perfect” (1962/1970, 172).

My objection (2018a: 64–66) was that it is self-defeating for Kuhn to use evolutionary theory to justify his philosophical claim about the development of science that present paradigms will be replaced by incommensurable future paradigms. His philosophical view entails that evolutionary theory will be superseded by an incommensurable alternative, and hence evolutionary theory is not trustworthy. Since his philosophical view relies on this untrustworthy theory, it is also untrustworthy, i.e., we ought to reject his philosophical view that present paradigms will be displaced by incommensurable future paradigms.

Bryant replies that “Kuhn could adopt the language of a paradigm (for the purposes of drawing an analogy, no less!) without committing to the literal truth of that paradigm” (2018: 3). On her account, Kuhn could have used the language of evolutionary theory without believing that evolutionary theory is true.

Can We Speak a Truth Without Having to Believe It True?

Bryant’s defense of Kuhn’s position is brilliant. Kuhn would have responded exactly as she has, if he had been exposed to my criticism above. In fact, it is a common view among many philosophers of science that we can adopt the language of a scientific theory without committing to the truth of it.

Bas van Fraassen, for example, states that “acceptance of a theory involves as belief only that it is empirically adequate” (1980: 12). He also states that if “the acceptance is at all strong, it is exhibited in the person’s assumption of the role of explainer” (1980: 12). These sentences indicate that according to van Fraassen, we can invoke a scientific theory for the purpose of explaining phenomena without committing to the truth of it. Rasmus Winther (2009: 376), Gregory Dawes (2013: 68), and Finnur Dellsén (2016: 11) agree with van Fraassen on this account.

I have been pondering this issue for the past several years. The more I reflect upon it, however, the more I am convinced that it is problematic to use the language of a scientific theory without committing to the truth of it. This thesis would be provocative and objectionable to many philosophers, especially to scientific antirealists. So I invite them to consider the following two thought experiments.

First, imagine that an atheist uses the language of Christianity without committing to the truth of it (Park, 2015: 227, 2017a: 60). He is a televangelist, saying on TV, “If you worship God, you’ll go to heaven.” He converts millions of TV viewers into Christianity. As a result, his church flourishes, and he makes millions of dollars a year. To his surprise, however, his followers discover that he is an atheist.

They request him to explain how he could speak as if he were a Christian when he is an atheist. He replies that he can use the language of Christianity without believing that it conveys truths, just as scientific antirealists can use the language of a scientific theory without believing that it conveys the truth.

Second, imagine that scientific realists, who believe that our best scientific theories are true, adopt Kuhn’s philosophical language without committing to Kuhn’s view of science. They say, as Kuhn does, “Successive paradigms are incommensurable, so present and future scientists would not be able to communicate with each other.” Kuhn requests them to explain how they could speak as if they were Kuhnians when they are not Kuhnians. They reply that they can adopt his philosophical language without committing to his view of science, just as scientific antirealists can adopt the language of a scientific theory without committing to the truth of it.

The foregoing two thought experiments are intended as reductio ad absurdum arguments. That is, my reasoning is that if it is reasonable for scientific antirealists to speak the language of a scientific theory without committing to the truth of it, it should also be reasonable for the atheist to speak the language of Christianity and for scientific realists to speak Kuhn’s philosophical language. It is, however, unreasonable for them to do so.

Let me now diagnose the problems with the atheist’s speech acts and the scientific realists’ speech acts. The atheist’s speech acts go contrary to his belief that God does not exist, and the scientific realists’ speech acts go contrary to their belief that our best scientific theories are true. As a result, the atheist’s speech acts mislead his followers into believing that he is a Christian. The scientific realists’ speech acts mislead their hearers into believing that they are Kuhnians.

Moore’s Paradox

Such speech acts raise an interesting philosophical issue. Imagine that someone says, “Snow is white, but I don’t believe snow is white.” The assertion of such a sentence involves Moore’s paradox. Moore’s paradox arises when we say a sentence of the form “p, but I don’t believe p” (Moore, 1993: 207–212). We can push the atheist above to be caught in Moore’s paradox. Imagine that he says, “If you worship God, you’ll go to heaven.” We request him to declare whether or not he believes what he just said. He declares, “I don’t believe that if you worship God, you’ll go to heaven.” As a result, he is caught in Moore’s paradox, and he only puzzles his audience.
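The structure at issue can be made explicit in standard doxastic logic; the following rendering is my own gloss, not Park’s or Moore’s notation (B is the speaker’s belief operator):

```latex
% Moore's paradox: the asserted content is consistent, yet asserting it is absurd.
% Omissive form -- "p, but I don't believe p" (the form discussed above):
\[ p \wedge \neg B p \]
% Commissive variant -- "p, but I believe that not-p":
\[ p \wedge B \neg p \]
% The absurdity is pragmatic rather than semantic: sincerely asserting p
% expresses Bp, which clashes with the second conjunct, even though the
% conjunction itself can be true (p may hold while the speaker fails to
% believe it).
```

This makes clear why the sentence is paradoxical to assert yet not self-contradictory: nothing in the conjunction itself is inconsistent.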

The same is true of the scientific realists above. Imagine that they say, “Successive paradigms are incommensurable, so present and future scientists would not be able to communicate with each other.” We request them to declare whether or not they believe what they just said. They declare, “I don’t believe that successive paradigms are incommensurable, so present and future scientists would not be able to communicate with each other.” As a result, they are caught in Moore’s paradox, and they only puzzle their audience.

Kuhn would also be caught in Moore’s paradox if he draws the analogy between the development of science and the evolution of organisms without committing to the truth of evolutionary theory, pace Bryant. Imagine that Kuhn says, “Organisms don’t evolve towards a goal. Similarly, science doesn’t develop towards truths. I, however, don’t believe organisms don’t evolve towards a goal.” He says, “Organisms don’t evolve towards a goal. Similarly, science doesn’t develop towards truths” in order to draw the analogy between the development of science and the evolution of organisms. He says, “I, however, don’t believe organisms don’t evolve towards a goal,” in order to express his refusal to believe that evolutionary theory is true. It is, however, a Moorean sentence: “Organisms don’t evolve towards a goal. I, however, don’t believe organisms don’t evolve towards a goal.” The assertion of such a sentence gives rise to Moore’s paradox.

Scientific antirealists would also be caught in Moore’s paradox, if they explain phenomena in terms of a scientific theory without committing to the truth of it, pace van Fraassen. Imagine that scientific antirealists say, “The space between two galaxies expands because dark energy exists between them, but I don’t believe that dark energy exists between two galaxies.” They say, “The space between two galaxies expands because dark energy exists between them,” in order to explain why the space between galaxies expands.

They add, “I don’t believe that dark energy exists between two galaxies,” in order to express their refusal to commit to the truth of the theoretical claim that dark energy exists. It is, however, a Moorean sentence: “The space between two galaxies expands because dark energy exists between them, but I don’t believe that dark energy exists between two galaxies.” Asserting such a sentence will only puzzle their audience. Consequently, Moore’s paradox bars scientific antirealists from invoking scientific theories to explain phenomena (Park, 2017b: 383, 2018b: Section 4).

Researchers on Moore’s paradox believe that “contradiction is at the heart of the absurdity of saying a Moorean sentence, but it is not obvious wherein contradiction lies” (Park, 2014: 345). Park (2014: 345) argues that when you say, “Snow is white,” your audience believe that you believe that snow is white. Their belief that you believe that snow is white contradicts the second conjunct of your Moorean sentence that you do not believe that snow is white.

Thus, the contradiction lies in your audience’s belief and the second conjunct of your Moorean sentence. The present paper does not aim to flesh out and defend this view of wherein lies the contradiction. It rather aims to show that Moore’s paradox prevents us from using the language of a scientific theory without committing to the truth of it, pace Bryant and van Fraassen.

The Real Consequences of Speaking What You Don’t Believe

Set Moore’s paradox aside. Let me raise another objection to Bryant and van Fraassen. Imagine that Kuhn encounters a philosopher of mind. The philosopher of mind asserts, “A mental state is reducible to a brain state.” Kuhn realizes that the philosopher of mind espouses the identity theory of mind, but he knows that the identity theory of mind has already been refuted by the multiple realizability argument. So he brings up the multiple realizability argument to the philosopher of mind. The philosopher of mind is persuaded of the multiple realizability argument and admits that the identity theory is not tenable.

To Kuhn’s surprise, however, the philosopher of mind claims that when he said, “A mental state is reducible to a brain state,” he spoke the language of the identity theory without committing to the truth of it, so his position is not refuted by Kuhn. Note that the philosopher of mind escapes the refutation of his position by saying that he did not believe what he stated. It is also reasonable for the philosopher of mind to escape the refutation of his position by saying that he did not believe what he stated, if it is reasonable for Kuhn to escape the refutation of his position by saying that he did not believe what he stated. Kuhn would think that it is not reasonable for the philosopher of mind to do so.

Kuhn, however, might bite the bullet, saying that it is reasonable for the philosopher of mind to do so. The strategy to avoid the refutation, Kuhn might continue, only reveals that the identity theory was not his position after all. Evaluating arguments does not require that we identify the beliefs of the authors of arguments. In philosophy, we only need to care about whether arguments are valid or invalid, sound or unsound, strong or weak, and so on.

Speculating about what beliefs the authors of arguments hold as a way of evaluating arguments is to implicitly rely on an argument from authority, i.e., it is to think as though the authors’ beliefs determine the strength of arguments rather than the form and content of arguments do.

We, however, need to consider under what conditions we accept the conclusion of an argument in general. We accept it, when premises are plausible and when the conclusion follows from the premises. We can tell whether the conclusion follows from the premises or not without the author’s belief that it does. In many cases, however, we cannot tell whether premises are plausible or not without the author’s belief that they are.

Imagine, for example, that a witness states in court that a defendant is guilty because the defendant was at the crime scene. The judge can tell whether the conclusion follows from the premise or not without the witness’s belief that it does. The judge, however, cannot tell whether the premise is plausible or not without the witness’s belief that it is. Imagine that the witness says that the defendant is guilty because the defendant was at the crime scene, but that the witness declares that he does not believe that the defendant was at the crime scene. Since the witness does not believe that the premise is true, the judge has no reason to believe that it is true. It is unreasonable for the judge to evaluate the witness’s argument independently of whether or not the witness believes that the premise is true.

In a nutshell, an argument loses its persuasive force, if the author of the argument does not believe that premises are true. Thus, if you aim to convince your audience that your argument is cogent, you should believe yourself that the premises are true. If you declare that you do not believe that the premises are true, your audience will ask you some disconcerting questions: “If you don’t, why should I believe what you don’t? How can you say to me what you don’t believe? Do you expect me to believe what you don’t?” (Park, 2018b: Section 4).

In case you still think that it is harmless and legitimate to speak what you do not believe, I invite you to imagine that your political rival commits murder to frame you. A false charge is brought against you, and you are tried in court. The prosecutor has a strong case against you. You state vehemently that you did not commit murder. You, however, have no physical evidence supporting your statement. Furthermore, you are well known as a person who vehemently speaks what you do not believe. Not surprisingly, the judge sentences you to death, thinking that you are merely speaking the language of the innocent. The point of this sad story is that speaking what you do not believe may result in tragedy in certain cases.

A Solution With a Prestigious Inspiration

Let me now turn to a slightly different, but related, issue. Under what condition can I refute your belief when you speak contrary to what you believe? I can do it only when I have direct access to your doxastic states, i.e., only when I can identify your beliefs without the mediation of your language. It is not enough for me to interpret your language correctly and present powerful evidence against what your language conveys.

After all, whenever I present such evidence to you, you will escape the refutation of what you stated simply by saying that you did not believe what you stated. Thus, Bryant’s defense of Kuhn’s position from my criticism above amounts to imposing an excessively high epistemic standard on Kuhn’s opponents. After all, his opponents do not have direct access to his doxastic states.

In this context, it is useful to be reminded of the epistemic imperative: “Act only on an epistemic maxim through which you can at the same time will that it should become a universal one” (Park, 2018c: 3). Consider the maxim “Escape the refutation of your position by saying you didn’t believe what you stated.” If you cannot will this maxim to become a universal one, you ought not to act on it yourself. It is immoral for you to act on the maxim despite the fact that you cannot will it to become a universal maxim. Thus, the epistemic imperative can be invoked to argue that Kuhn ought not to use the language of evolutionary theory without committing to the truth of it, pace Bryant.

Let me now raise a slightly different, although related, issue. Recall that according to Bryant, Kuhn could adopt the language of evolutionary theory without committing to the truth of it. Admittedly, there is an epistemic advantage of not committing to the truth of evolutionary theory on Kuhn’s part. The advantage is that he might avoid the risk of forming a false belief regarding evolutionary theory. Yet, he can stick to his philosophical account of science according to which science does not develop towards truths, and current scientific theories will be supplanted by incommensurable alternatives.

There is, however, an epistemic disadvantage of not committing to the truth of a scientific theory. Imagine that Kuhn is not only a philosopher and historian of science but also a scientist. He has worked hard for several decades to solve a scientific problem that has been plaguing an old scientific theory. Finally, he hits upon a great scientific theory that handles the recalcitrant problem. His scientific colleagues reject the old scientific theory and accept his new scientific theory, i.e., a scientific revolution occurs.

He becomes famous not only among scientists but also among the general public. He is so excited about his new scientific theory that he believes that it is true. Some philosophers, however, come along and dispirit him by saying that they do not believe that his new theory is true, and that they do not even believe that it is closer to the truth than its predecessor was. Kuhn protests that his new theory has theoretical virtues, such as accuracy, simplicity, and fruitfulness. Not impressed by these virtues, however, the philosophers reply that science does not develop towards truths, and that his theory will be displaced by an incommensurable alternative. They were exposed to Kuhn’s philosophical account of science!

Epistemic Reciprocation

They have adopted a philosophical position called epistemic reciprocalism according to which “we ought to treat our epistemic colleagues, as they treat their epistemic agents” (Park, 2017a: 57). Epistemic reciprocalists are scientific antirealists’ true adversaries. Scientific antirealists refuse to believe that their epistemic colleagues’ scientific theories are true for fear that they might form false beliefs.

In return, epistemic reciprocalists refuse to believe that scientific antirealists’ positive theories are true for fear that they might form false beliefs. We, as epistemic agents, are not only interested in avoiding false beliefs but also in propagating “to others our own theories which we are confident about” (Park, 2017a: 58). Scientific antirealists achieve the first epistemic goal at the cost of the second epistemic goal.

Epistemic reciprocalism is built upon the foundation of social epistemology, which claims that we are not asocial epistemic agents but social epistemic agents. Social epistemic agents are those who interact with each other over the matters of what to believe and what not to believe. So they take into account how their interlocutors treat their epistemic colleagues before taking epistemic attitudes towards their interlocutors’ positive theories.

Let me now turn to another of Bryant’s defenses of Kuhn’s position. She says that it is not clear that the analogy between the evolution of organisms and the development of science is integral to Kuhn’s account. Kuhn could “have ascribed the same characteristics to theory change without referring to evolutionary theory at all” (Bryant, 2018: 3). In other words, Kuhn’s contention that science does not develop towards truths rises or falls independently of the analogy between the development of science and the evolution of organisms. Again, this defense of Kuhn’s position is brilliant.

Consider, however, that the development of science is analogous to the evolution of organisms, regardless of whether Kuhn makes use of the analogy to defend his philosophical account of science or not, and that the fact that they are analogous is a strike against Kuhn’s philosophical account of science. Suppose that Kuhn believes that science does not develop towards truths, but that he does not believe that organisms do not evolve towards a goal, despite the fact that the development of science is analogous to the evolution of organisms.

An immediate objection to his position is that it is not clear on what grounds he embraces the philosophical claim about science, but not the scientific claim about organisms, when the two claims parallel each other. It is ad hoc merely to suggest that the scientific claim is untrustworthy, but that the philosophical claim is trustworthy. What is so untrustworthy about the scientific claim, but so trustworthy about the philosophical claim? It would be difficult to answer these questions because the development of science and the evolution of organisms are similar to each other.

A moral is that if philosophers reject our best scientific theories, they cannot make philosophical claims that are similar to what our best scientific theories assert. In general, the more philosophers reject scientific claims, the more impoverished their philosophical positions will be, and the heavier their burdens will be to prove that their philosophical claims are dissimilar to the scientific claims that they reject.

Moreover, it is not clear what Kuhn could say to scientists who take the opposite position in response to him. They believe that organisms do not evolve towards a goal, but refuse to believe that science does not develop towards truths. To go further, they trust scientific claims, but distrust philosophical claims. They protest that it is a manifestation of philosophical arrogance to suppose that philosophical claims are worthy of belief, but scientific claims are not.

This possible response to Kuhn reminds us of the Golden Rule: Treat others as you want to be treated. Philosophers ought to treat scientists as they want to be treated, concerning epistemic matters. Suppose that a scientific claim is similar to a philosophical claim. If philosophers do not want scientists to hold a double standard with respect to the scientific and philosophical claims, philosophers should not hold a double standard with respect to them.

There “is no reason for thinking that the Golden Rule ranges over moral matters, but not over epistemic matters” (Park, 2018d: 77–78). Again, we are not asocial epistemic agents but social epistemic agents. As such, we ought to behave in accordance with the epistemic norms governing the behavior of social epistemic agents.

Finally, the present paper is intended to be critical of Kuhn’s philosophy of science while enshrining his insight that science is a social enterprise, and that scientists are social epistemic agents. I appealed to Moore’s paradox, epistemic reciprocalism, the epistemic imperative, and the Golden Rule in order to undermine Bryant’s defenses of Kuhn’s position from my criticism. All these theoretical resources can be used to increase our understanding of science as a social endeavor. Let me add to Kuhn’s insight that philosophers are also social epistemic agents.

Contact details: nature@unist.ac.kr

References

Arnold, Markus. “Is There Anything Wrong with Thomas Kuhn?”, Social Epistemology Review and Reply Collective 7, no. 5 (2018): 42–47.

Bryant, Amanda. “Each Kuhn Mutually Incommensurable”, Social Epistemology Review and Reply Collective 7, no. 6 (2018): 1–7.

Dawes, Gregory. “Belief is Not the Issue: A Defence of Inference to the Best Explanation”, Ratio: An International Journal of Analytic Philosophy 26, no. 1 (2013): 62–78.

Dellsén, Finnur. “Understanding without Justification or Belief”, Ratio: An International Journal of Analytic Philosophy (2016). DOI: 10.1111/rati.12134.

Kuhn, Thomas. The Structure of Scientific Revolutions. 2nd ed. Chicago: The University of Chicago Press, (1962/1970).

Mizrahi, Moti. “Introduction”, In The Kuhnian Image of Science: Time for a Decisive Transformation? Moti Mizrahi (ed.), London: Rowman & Littlefield, (2018): 1–22.

Moore, George. “Moore’s Paradox”, In G.E. Moore: Selected Writings. Baldwin, Thomas (ed.), London: Routledge, (1993).

Park, Seungbae. “On the Relationship between Speech Acts and Psychological States”, Pragmatics and Cognition 22, no. 3 (2014): 340–351.

Park, Seungbae. “Accepting Our Best Scientific Theories”, Filosofija. Sociologija 26, no. 3 (2015): 218–227.

Park, Seungbae. “Defense of Epistemic Reciprocalism”, Filosofija. Sociologija 28, no. 1 (2017a): 56–64.

Park, Seungbae. “Understanding without Justification and Belief?” Principia: An International Journal of Epistemology 21, no. 3 (2017b): 379–389.

Park, Seungbae. “Can Kuhn’s Taxonomic Incommensurability Be an Image of Science?” In The Kuhnian Image of Science: Time for a Decisive Transformation? Moti Mizrahi (ed.), London: Rowman & Littlefield, (2018a): 61–74.

Park, Seungbae. “Should Scientists Embrace Scientific Realism or Antirealism?”, Philosophical Forum (2018b): (to be assigned).

Park, Seungbae. “In Defense of the Epistemic Imperative”, Axiomathes (2018c). DOI: https://doi.org/10.1007/s10516-018-9371-9.

Park, Seungbae. “The Pessimistic Induction and the Golden Rule”, Problemos 93 (2018d): 70–80.

van Fraassen, Bas. The Scientific Image. Oxford: Oxford University Press, (1980).

Winther, Rasmus. “A Dialogue”, Metascience 18 (2009): 370–379.

Author Information: Kristina Rolin, University of Helsinki, kristina.rolin@helsinki.fi

Rolin, Kristina. “‘Facing the Incompleteness of Epistemic Trust’ — A Critical Reply.” Social Epistemology Review and Reply Collective 3, no. 5 (2014): 74-78.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1qU

Recent years have witnessed an emergence of a novel specialty in social epistemology: the social epistemology of research groups. Within this specialty there are two approaches to understanding the epistemic structure of scientific collaboration. Some philosophers suggest that scientific knowledge emerging in collaborations includes collective beliefs or acceptances (Andersen 2010; Bouvier 2004; Cheon 2013; Gilbert 2000; Rolin 2010; Staley 2007; Wray 2006, 2007). Some others suggest that the epistemic structure of scientific collaboration is based on relations of trust among scientists (Andersen and Wagenknecht 2013; Fagan 2011, 2012; Frost-Arnold 2013; Hardwig 1991; Kusch 2002; de Ridder 2013; Thagard 2010; Wagenknecht 2013). In the former case, a research team is thought to arrive at a group view which is not fully reducible to individual views. In the latter case, each team member is thought to rely on testimonial knowledge which is based on her trusting other team members. These two models are not exclusive and competing accounts of the epistemic structure of scientific collaboration. They can be seen as two parallel models for understanding the special nature of scientific knowledge produced in collaborations. Sometimes scientific knowledge in collaborations takes the form of collective acceptance, sometimes it is an outcome of trust-based acceptance, and at other times it takes some other form.