Scientism in the Philosophy of Implicit Bias Research, Part 1, Kamili Posey

Author Information: Kamili Posey, Kingsborough College, Kamili.Posey@kbcc.cuny.edu.

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-16.

Kamili Posey’s article will be posted over two instalments. The pdf of the article gives specific page references, and includes the entire essay. Shortlink: https://wp.me/p1Bfg0-41m


If you consider the recent philosophical literature on implicit bias research, then you would be forgiven for thinking that the problem of successful interventions into implicit bias falls into the category of things that have been resolved. If you consider the recent social psychological literature on interventions into implicit bias, then you would come away with a similar impression. The claim is that implicit bias is epistemically harmful because we profess to believe one thing while our implicit attitudes tell a different story.

Strategy Models and Discrepancy Models

Implicit bias is socially harmful because it maps onto our real-world discriminatory practices, e.g., workplace discrimination, health disparities, racist police shootings, and identity-prejudicial public policies. Consider the results of Greenwald et al.’s (1998) Implicit Association Test. Consider also the results of Correll et al.’s (2002) “Shooter Bias.” If cognitive interventions are possible, and specifically implicit cognitive interventions, then they can help knowers implicitly manage automatic stereotype activation. But do these interventions lead to real-world reductions of bias?

Linda Alcoff (2010) notes that it is difficult to see how implicit, nonvolitional biases (e.g., those at the root of social and epistemic ills like race-based police shootings) can be remedied by explicit epistemic practices.[1] I would follow this by noting that it is equally difficult to see how nonvolitional biases can be remedied by implicit epistemic practices.

Jennifer Saul (2017) responds to Alcoff’s (2010) query by pointing to social psychological experiments conducted by Margo Monteith (1993), Jack Glaser and Eric D. Knowles (2007), Gordon B. Moskowitz and Peizhong Li (2011), Saaid A. Mendoza et al. (2010), Irene V. Blair et al. (2001), and Kerry Kawakami et al. (2005).[2] These studies suggest that implicit self-regulation of implicit bias is possible. Saul notes that philosophers with objections like Alcoff’s, and presumably like mine, ought “not just to reflect upon the problem from the armchair – at the very least, one should use one’s laptop to explore the internet for effective interventions.”[3]

But I think this recrimination rings rather hollow. How entitled are we to extrapolate from social psychological studies in the manner that Saul advocates? How entitled are we to assume the epistemic superiority of scientific research on racism, sexism, etc. over the phenomenological reporting of marginalized knowers? Lastly, how entitled are we to claims about the real-world applicability of these study results?[4] My guess is that the devil is in the details. My guess is also that social psychologists have not found the silver bullet for remedying implicit bias. But let’s follow Saul’s suggestion and not just reflect from the armchair.

A caveat: the following analysis is not intended to be an exhaustive or thorough refutation of what is ultimately a large body of social psychological literature. Instead, it is intended to cast a bit of doubt on how these models are used by philosophers as successful remedies for implicit bias. It is intended to cast a bit of doubt on the idea that remedies for racist, sexist, homophobic, and transphobic discrimination are merely a training session or reflective exercise away.

This type of thinking devalues the very real experiences of those who live through racism, sexism, homophobia, and transphobia. It devalues how pervasive these experiences are in American society and the myriad ways in which the effects of discrimination seep into the marrow of marginalized bodies and marginalized communities. Worse still, it implies that marginalized knowers who claim, “You don’t understand my experiences!” are compelled to contend with the hegemonic role of “Science” that continues to speak over their own voices and about their own lives.[5] But again, back to the studies.

Four Methods of Remedy

I break up the above studies into four intuitive model types: (1) strategy models, (2) discrepancy models, (3) IAT models, and (4) egalitarian goal models. (I am not a social scientist, so the operative word here is “intuitive.”) Let’s first consider Kawakami et al. (2005) and Mendoza et al. (2010) as examples of strategy models. Kawakami et al. used Devine and Monteith’s (1993) notion of a negative stereotype as a “bad habit” that a knower needs to “kick” to model strategies that aid in the inhibition of automatic stereotype activation, or the inhibition of “increased cognitive accessibility of characteristics associated with a particular group.”[6]

In a previous study, Kawakami et al. (2000) presented research participants with photographs of black individuals and white individuals, with stereotypical and non-stereotypical traits listed under each photograph, and asked them to respond “No” to stereotypical traits and “Yes” to non-stereotypical traits.[7] The study found that “[although] participants who were extensively trained to negate racial stereotypes initially also demonstrated stereotype activation, this effect was eliminated by the extensive training. Furthermore, Kawakami et al. found that practice effects of this type lasted up to 24 h following the training.”[8]

Kawakami et al. (2005) used this training model to ground an experiment aimed at strategies for reducing stereotype activation in the preference for men over women for leadership roles in managerial positions. Despite the training, they found that there was “no difference between Nonstereotypic Association Training and No Training conditions…participants were indeed attempting to choose the best candidate overall, in these conditions there was an overall pattern of discrimination against women relative to men in recommended hiring for a managerial position (Glick, 1991; Rudman & Glick, 1999)” [emphasis mine].[9]

Substantive conclusions are difficult to draw from a single study, but one critical point is that learning occurred in the training while improved stereotype inhibition did not. What, exactly, are we to make of this result? Kawakami et al. (2005) claimed that “similar levels of bias in both the Training and No Training conditions implicates the influence of correction processes that limit the effectiveness of training.”[10] That is, they attributed the persistence of bias to correction processes and other contributing factors that limited the effectiveness of the strategy itself.

Notice, however, that this does not implicate the strategy as a failed one. Most notably, Kawakami et al. found that “when people have the time and opportunity to control their responses, [they] may be strongly shaped by personal values and temporary motivations, strategies aimed at changing the automatic activation of stereotypes will not [necessarily] result in reduced discrimination.”[11]

This suggests that although the strategies failed to reduce stereotype activation, they may still be helpful in limited circumstances “when impressions are more deliberative.”[12] One wonders under what conditions such impressions can be more deliberative. More than that, how useful are such limited-condition strategies for dealing with everyday life and everyday automatic stereotype activation?

Mendoza et al. (2010) tested the effectiveness of “implementation intentions” as a strategy to reduce the activation or expression of implicit stereotypes using the Shooter Task.[13] They tested both “distraction-inhibiting” implementation intentions and “response-facilitating” implementation intentions. Distraction-inhibiting intentions are strategies “designed to engage inhibitory control,” such as inhibiting the perception of distracting or biasing information, while “response-facilitating” intentions are strategies designed to enhance goal attainment by focusing on specific goal-directed actions.[14]

In the first study, Mendoza et al. asked participants to repeat the on-screen phrase, “If I see a person, then I will ignore his race!” in their heads and then type the phrase into the computer. This resulted in study participants making fewer errors in the Shooter Task. But let’s come back to whether and how we might extrapolate from these results. The second study compared a simple-goal strategy with an implementation intention strategy.

Study participants in the simple-goal strategy group were asked to follow the strategy, “I will always shoot a person I see with a gun!” and “I will never shoot a person I see with an object!” Study participants in the implementation intention strategy group were asked to use a conditional, if-then, strategy instead: “If I see a person with an object, then I will not shoot!” Mendoza et al. found that a response-facilitating implementation intention “enhanced controlled processing but did not affect automatic stereotyping processing,” while a distraction-inhibiting implementation intention “was associated with an increase in controlled processing and a decrease in automatic stereotyping processes.”[15]

How to Change Both Action and Thought

Notice that if the goal is to reduce automatic stereotype activation through reflexive control, only a distraction-inhibiting strategy achieved the desired effect. Notice also how the successful use of a distraction-inhibiting strategy may require a type of “non-messy” social environment unachievable outside of a laboratory experiment.[16] Or, as Mendoza et al. (2010) rightly note: “The current findings suggest that the quick interventions typically used in psychological experiments may be more effective in modulating behavioral responses or the temporary accessibility of stereotypes than in undoing highly edified knowledge structures.”[17]

The hope, of course, is that distraction-inhibiting strategies can help dominant knowers reduce automatic stereotype activation and response-facilitating strategies can help dominant knowers internalize controlled processing such that negative bias and stereotyping can (one day) be reflexively controlled as well. But these are only hopes. The only thing that we can rightly conclude from these results is that if we ask a dominant knower to focus on an internal command, they will do so, and the activation of negative bias fails to occur.

This does not mean that the knower has reduced their internalized negative biases and prejudices or that they can continue to act on the internal commands in the future (in fact, subsequent studies reveal the effects are short-lived[18]). As Mendoza et al. also note: “In psychometric terms, these strategies are designed to enhance accuracy without necessarily affecting bias. That is, a person may still have a tendency to associate Black people with violence and thus be more likely to shoot unarmed Blacks than to shoot unarmed Whites.”[19] Despite hope for these strategies, there is very little to support their real-world applicability.

Hunting for Intuitive Hypocrisies

I would extend a similar critique to Margo Monteith’s (1993) discrepancy model. Monteith’s (1993) often-cited study uses two experiments to investigate prejudice-related discrepancies in the behaviors of low-prejudice (LP) and high-prejudice (HP) individuals and their ability to engage in self-regulated prejudice reduction. In the first experiment, LP and HP heterosexual study participants were asked to evaluate two law school applications, one from an implied gay applicant and one from an implied heterosexual applicant. Study participants “were led to believe that they had evaluated a gay law school applicant negatively because of his sexual orientation”; they were tricked into a “discrepancy-activated condition,” a condition at odds with what they believed about their own prejudices.[20]

All of the study participants were then told that the applications were identical and that those who had rejected the gay applicant had done so because of the applicant’s sexual orientation. It is important to note that the applicants’ qualifications were not, in fact, identical. The gay applicant’s application materials were made to look worse than the heterosexual applicant’s materials. This was done to compel the rejection of the gay applicant.

Study participants were then provided a follow-up questionnaire and essay allegedly written by a professor who wanted to know (a) “why people often have difficulty avoiding negative responses toward gay men,” and (b) “how people can eliminate their negative responses toward gay men.”[21] Researchers asked study participants to record their reactions to the faculty essay and write down as much as they could remember about what they read. They were then told about the deception in the experiment and why such deception was incorporated into the study.

Monteith (1993) found that “low and high prejudiced subjects alike experienced discomfort after violating their personal standards for responding to a gay man, but only low prejudiced subjects experienced negative self-directed affect.”[22] Low-prejudice (LP) “discrepancy-activated subjects” also spent more time reading the faculty essay and “showed superior recall for the portion of the essay concerning why prejudice-related discrepancies arise.”[23]

The “discrepancy experience” generated negative self-directed affect, or guilt, for LP study participants with the hope that the guilt would (a) “motivate discrepancy reduction (e.g., Rokeach, 1973)” and (b) “serve to establish strong cues for punishment (cf. Gray, 1982).”[24] The idea here is that the experiment results point to the existence of a self-regulatory mechanism that can replace automatic stereotype activation with “belief-based responses”; however, “it is important to note that the initiation of self-regulatory mechanisms is dependent on recognizing and interpreting one’s responses as discrepant from one’s personal beliefs.”[25]

The discrepancy between what one is shown to believe and what one professes to believe (whether real or manufactured, as in the experiment) is aimed at getting knowers to engage in heightened self-focus due to negative self-directed affect. The goal of Monteith’s (1993) study was to show that this self-directed affect would lead to a kind of corrective belief-making process that is both less prejudicial and future-directed.

But if it’s guilt that’s doing the psychological work in these cases, then it’s not clear that knowers wouldn’t find other means of assuaging such feelings. Why wouldn’t it be the case that generating negative self-directed affect would point a knower toward anything they deem necessary to restore a more positive sense of self? To this, Monteith made the following concession:

Steele (1988; Steele & Liu, 1983) contended that restoration of one’s self-image after a discrepancy experience may not entail discrepancy reduction if other opportunities for self-affirmation are available. For example, Steele (1988) suggested that a smoker who wants to quit might spend more time with his or her children to resolve the threat to the self-concept engendered by the psychological inconsistency created by smoking. Similarly, Tesser and Cornell (1991) found that different behaviors appeared to feed into a general “self-evaluation reservoir.” It follows that prejudice-related discrepancy experiences may not facilitate the self-regulation of prejudiced responses if other means to restoring one’s self-regard are available [emphasis mine].[26]

Additionally, she noted that even if individuals are committed to reducing or “unlearning” automatic stereotyping, they “may become frustrated and disengage from the self-regulatory cycle, abandoning their goal to eliminate prejudice-like responses.”[27] Cognitive exhaustion, or cognitive depletion, can occur after intergroup exchanges as well. This may make it even less likely that a knower will continue to feel guilty, and to use that guilt to inhibit the activation of negative stereotypes, when they find themselves struggling cognitively. Conversely, there is also the issue of a kind of lab-based, or experiment-based, cognitive priming. I pick up this idea, along with the final two models of implicit interventions, in the second part of this article.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

[2] Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

[3] Saul, Jennifer (2017), p. 466.

[4] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

[5] I owe this critical point in its entirety to the work of Lacey Davidson and her presentation, “When Testimony Isn’t Enough: Implicit Bias Research as Epistemic Injustice” at the Feminist Epistemologies, Methodologies, Metaphysics, and Science Studies (FEMMSS) conference in Corvallis, Oregon in 2018. Davidson notes that the work of philosophers of race and critical race theorists often takes a backseat to the projects of philosophers of social science who engage with the science of racialized attitudes as opposed to the narratives and/or testimonies of those with lived experiences of racism. Davidson describes this as a type of epistemic injustice against philosophers of race and critical race theorists. She also notes that philosophers of race and critical race theorists are often people of color while the philosophers of social science are often white. This dimension of analysis is important but unexplored. Davidson’s work highlights how epistemic injustice operates within the academy to perpetuate systems of racism and oppression under the guise of “good science.” Her arguments were inspired by the work of Jeanine Weekes Schroer on the problematic nature of current research on stereotype threat and implicit bias in “Giving Them Something They Can Feel: On the Strategy of Scientizing the Phenomenology of Race and Racism,” Knowledge Cultures 3(1), 2015.

[6] Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69. See also: Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

[7] Kawakami et al. (2005), p. 69. See also: Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

[8] Kawakami et al. (2005), p. 69.

[9] Kawakami et al. (2005), p. 73.

[10] Kawakami et al. (2005), p. 73.

[11] Kawakami et al. (2005), p. 74.

[12] Kawakami et al. (2005), p. 74.

[13] The Shooter Task refers to a computer simulation experiment in which images of black and white males holding either a gun or a non-gun object appear on a screen. Study participants are given a short response time and tasked with pressing a button to “shoot” armed targets and withholding that response for unarmed targets. Psychological studies have revealed a “shooter bias”: a tendency to shoot unarmed black males more often than unarmed white males. See: Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

[14] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

[15] Mendoza et al. (2010), p. 520.

[16] A “messy environment” presents additional challenges to studies like the one discussed here. As Kees Keizer, Siegwart Lindenberg, and Linda Steg (2008) claim in “The Spreading of Disorder,” people are more likely to violate social rules when they see that others are violating the rules as well. I can only imagine that this is applicable to epistemic rules as well. I mention this here to suggest that the “cleanliness” of the social environment in social psychological studies such as Mendoza et al. (2010) presents an additional obstacle to extrapolating the resulting behaviors of research participants to the public-at-large. Short of mass hypnosis, how could the strategies used in these experiments, strategies that are predicated on the noninterference of other destabilizing factors, be meaningfully applied to everyday life? There is a tendency in the philosophical literature on implicit bias and stereotype threat to outright ignore the limited applicability of much of this research in order to make critical claims about interventions into racist, sexist, homophobic, and transphobic behaviors. Philosophers would do well to recognize the complexity of these issues and to be more cautious about the enthusiastic endorsement of experimental results.

[17] Mendoza et al. (2010), p. 520.

[18] Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[19] Mendoza et al. (2010), p. 520.

[20] Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

[21] Monteith (1993), p. 474.

[22] Monteith (1993), p. 475.

[23] Monteith (1993), p. 477.

[24] Monteith (1993), p. 477.

[25] Monteith (1993), p. 477.

[26] Monteith (1993), p. 482.

[27] Monteith (1993), p. 483.




