
Author Information: Nuria Anaya-Reig, Universidad Rey Juan Carlos, nuria.anaya@urjc.es

Anaya-Reig, Nuria. “Teorías Implícitas del Investigador: Un Campo por Explorar Desde la Psicología de la Ciencia.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 36-41.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-434

Image by Joan via Flickr / Creative Commons


This article is a Spanish-language version of Nuria Anaya-Reig’s earlier contribution, written by the author herself:

Anaya-Reig, Nuria. “Implicit Theories Influencing Researchers: A Field for the Psychology of Science to Explore.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 25-30.

¿Qué concepciones tienen los investigadores sobre las características que debe reunir un estudiante para ser considerado un potencial buen científico? ¿En qué medida influyen esas creencias en la selección de candidatos? Estas son las preguntas fundamentales que laten en el trabajo de Caitlin Donahue Wylie (2018). Mediante un estudio cualitativo de tipo etnográfico, se entrevista a dos profesores de ingeniería en calidad de investigadores principales (IP) y a estudiantes de sendos grupos de doctorado, la mayoría graduados, como investigadores noveles. En total, la muestra es de 27 personas.

Los resultados apuntan a que, entre este tipo de investigadores, es común creer que el interés, la asertividad y el entusiasmo por lo que se estudia son indicadores de un futuro buen investigador. Además, los entrevistados consideran que el entusiasmo está relacionado con el deseo de aprender y la ética en el trabajo. Finalmente, se sugiere una posible exclusión no intencional en la selección de investigadores a causa de la aplicación involuntaria de sesgos por parte del IP, relativa a la preferencia de características propias de grupos mayoritarios (tales como etnia, religión o sexo), y se proponen algunas ideas para ayudar a minimizarlos.

Teorías Implícitas en los Sótanos de la Investigación

En esencia, el trabajo de Wylie (2018) muestra que el proceso de selección de nuevos investigadores por parte de científicos experimentados se basa en teorías implícitas. Quizás a simple vista puede parecer una aportación modesta, pero la médula del trabajo es sustanciosa y no carece de interés para la Psicología de la Ciencia, al menos por tres razones.

Para empezar, porque estudiar tales cuestiones constituye otra forma de aproximarse a la comprensión de la psique científica desde un ángulo distinto, ya que estudiar la psicología del científico es uno de los ámbitos de estudio centrales de esta subdisciplina (Feist 2006). En segundo término, porque, aunque la pregunta de investigación se ocupa de una cuestión bien conocida por la Psicología social y, en consecuencia, aunque los resultados del estudio sean bastante previsibles, no dejan de ser nuevos datos y, por tanto, valiosos, que enriquecen el conocimiento teórico sobre las ideas implícitas: es básico en ciencia, y propio del razonamiento científico, diferenciar teorías de pruebas (Feist 2006).

En último lugar, porque la Psicología de la Ciencia, en su vertiente aplicada, no puede ignorar el hecho de que las creencias implícitas de los científicos, si son erróneas, pueden tener su consiguiente reflejo negativo en la población de investigadores actual y futura (Wylie 2018).

Ya Santiago Ramón y Cajal, en su faceta como psicólogo de la ciencia (Anaya-Reig and Romo 2017), reflexionaba sobre este asunto hace más de un siglo. En el capítulo IX, “El investigador como maestro”, de su obra Reglas y consejos sobre investigación científica (1920) apuntaba:

¿Qué signos denuncian el talento creador y la vocación inquebrantable por la indagación científica?

Problema grave, capitalísimo, sobre el cual han discurrido altos pensadores e insignes pedagogos, sin llegar a normas definitivas. La dificultad sube de punto considerando que no basta encontrar entendimientos perspicaces y aptos para las pesquisas de laboratorio sino conquistarlos definitivamente para el culto de la verdad original.

Los futuros sabios, blanco de nuestros desvelos educadores, ¿se encuentran por ventura entre los discípulos más serios y aplicados, acaparadores de premios y triunfadores en oposiciones?

Algunas veces, sí, pero no siempre. Si la regla fuera infalible, fácil resultara la tarea del profesor, bastaríale dirigirse a los premios extraordinarios de la licenciatura y a los números primeros de las oposiciones a cátedras. Mas la realidad se complace a menudo en burlar previsiones y malograr esperanzas. (Ramón y Cajal 1920, 221-222)

A Vueltas con las Teorías Implícitas

Recordemos brevemente que las teorías ingenuas o implícitas son creencias estables y organizadas que las personas hemos elaborado intuitivamente, sin el rigor del método científico. La mayoría de las veces se accede a su contenido con mucha dificultad, ya que la gente desconoce que las tiene, de ahí su nombre. Este hecho no solo dificulta una modificación del pensamiento, sino que lleva a buscar datos que confirmen lo que se piensa, es decir, a cometer sesgos confirmatorios (Romo 1997).

Las personas vamos identificando y organizando las regularidades del entorno gracias al aprendizaje implícito o incidental, basado en el aprendizaje asociativo, pues necesitamos adaptarnos a las distintas situaciones a las que nos enfrentamos. Elaboramos teorías ingenuas que nos ayuden a comprender, anticipar y manejar de la mejor manera posible las variadas circunstancias que nos rodean. Vivimos rodeados de una cantidad de información tan abrumadora, que elaborar teorías implícitas, aprendiendo qué elementos tienden a presentarse juntos, constituye una forma muy eficaz de hacer el mundo mucho más predecible y controlable, lo que, naturalmente, incluye el comportamiento humano.

De hecho, el contenido de las teorías implícitas es fundamentalmente de naturaleza social (Wegner and Vallacher 1977), como muestra el hecho de que buena parte de ellas pueden agruparse dentro de las llamadas Teorías Implícitas de la Personalidad (TIP), categoría a la que, por cierto, bien pueden adscribirse las creencias de los investigadores que nos ocupan.

Las TIP se llaman así porque su contenido versa básicamente sobre cualidades personales o rasgos de personalidad y son, por definición, idiosincráticas, si bien suele existir cierta coincidencia entre los miembros de un mismo grupo social.

Entendidas de modo amplio, pueden definirse como aquellas creencias que cada persona tiene sobre el ser humano en general; por ejemplo, pensar que el hombre es bueno por naturaleza o todo lo contrario. En su acepción específica, las TIP se refieren a las creencias que tenemos sobre las características personales que suelen presentarse juntas en gente concreta. Por ejemplo, con frecuencia presuponemos que un escritor tiene que ser una persona culta, sensible y bohemia (Moya 1996).

Conviene notar también que las teorías implícitas se caracterizan frente a las científicas por ser incoherentes y específicas, por basarse en una causalidad lineal y simple, por componerse de ideas habitualmente poco interconectadas, por buscar solo la verificación y la utilidad. Sin embargo, no tienen por qué ser necesariamente erróneas ni inservibles (Pozo, Rey, Sanz and Limón 1992). Aunque las teorías implícitas tengan una capacidad explicativa limitada, sí tienen capacidad descriptiva y predictiva (Pozo Municio 1996).

Algunas Reflexiones Sobre el Tema

Científicos guiándose por intuiciones, ¿cómo es posible? Pero, ¿por qué no? ¿Por qué los investigadores habrían de comportarse de un modo distinto al de otras personas en los procesos de selección? Se comportan como lo hacemos todos habitualmente en nuestra vida cotidiana con respecto a los más variados asuntos. Otra manera de proceder resultaría para cualquiera no solo poco rentable, en términos cognitivos, sino costoso y agotador.

A fin de cuentas, los investigadores, por muy científicos que sean, no dejan de ser personas y, como tales, buscan intuitivamente respuestas a problemas que, si bien condicionan de modo determinante los resultados de su labor, no son el objeto en sí mismo de su trabajo.

Por otra parte, tampoco debe sorprender que diferentes investigadores, poco o muy experimentados, compartan idénticas creencias, especialmente si pertenecen al mismo ámbito, pues, según se ha apuntado, aunque las teorías implícitas se manifiestan en opiniones o expectativas personales, parte de su contenido tácito es compartido por numerosas personas (Runco 2011).

Todo esto lleva, a su vez, a hacer algunas otras observaciones sobre el trabajo de Wylie (2018). En primer lugar, tratándose de teorías implícitas, más que sugerir que los investigadores pueden estar guiando su selección por un sesgo perceptivo, habría que afirmarlo. Como se ha apuntado, las teorías implícitas operan con sesgos confirmatorios que, de hecho, van robusteciendo sus contenidos.

Otra cuestión es preguntarse con qué guarda relación dicho sesgo: Wylie (2018) sugiere que está relacionado con una posible preferencia por las características propias de los grupos mayoritarios a los que pertenecen los IP, basándose en algunos estudios que han mostrado que en ciencia e ingeniería predominan hombres de raza blanca y de clase media, lo que puede contribuir a que se reciba mal a aquellos estudiantes que no se ajustan a estos estándares o incluso a que ellos mismos abandonen por no sentirse cómodos.

Sin duda, esa es una posible interpretación; pero otra es que el sesgo confirmatorio que muestran estos ingenieros podría deberse a que han observado esos rasgos en las personas que han llegado a ser buenas en su disciplina, en lugar de estar relacionado con su preferencia por interactuar con personas que se parecen física o culturalmente a ellos.

Es oportuno señalar aquí nuevamente que las teorías implícitas no tienen por qué ser necesariamente erróneas, ni inservibles (Pozo, Rey, Sanz and Limón 1992). Es lo que ocurre con parte de las creencias que muestra este grupo de investigadores: ¿acaso los científicos, en especial los mejores, no son apasionados de su trabajo?, ¿no dedican muchas horas y mucho esfuerzo a sacarlo adelante?, ¿no son asertivos? La investigación ha establecido firmemente (Romo 2008) que todos los científicos creativos muestran sin excepción altas dosis de motivación intrínseca por la labor que realizan.

Del mismo modo, desde Hayes (1981) sabemos que se precisa una media de 10 años para dominar una disciplina y lograr algo extraordinario. También se ha observado que muestran una gran autoconfianza y que son especialmente arrogantes y hostiles. Es más, se sabe que los científicos, en comparación con los no científicos, no solo son más asertivos, sino más dominantes, más seguros de sí mismos, más autónomos e incluso más hostiles (Feist 2006). Varios trabajos, por ejemplo, el de Feist y Gorman (1998), han concluido que existen diferencias en los rasgos de personalidad entre científicos y no científicos.

Pero, por otro lado, esto tampoco significa que las concepciones implícitas de la gente sean necesariamente acertadas. De hecho, muchas veces son erróneas. Un buen ejemplo de ello es la creencia que guía a los investigadores principales estudiados por Wylie para seleccionar a los graduados en relación con sus calificaciones académicas. Aunque dicen que las notas son un indicador insuficiente, a continuación matizan su afirmación: “They believe students’ demonstrated willingness to learn is more important, though they also want students who are ‘bright’ and achieve some ‘academic success.’” (2018, 4).

Sin embargo, la evidencia empírica muestra que ni las calificaciones altas ni las puntuaciones altas en pruebas de aptitud predicen necesariamente el éxito en carreras científicas (Feist 2006), que el genio creativo no está tampoco necesariamente asociado con el rendimiento escolar extraordinario y, lo que es más, que numerosos genios han sido estudiantes mediocres (Simonton 2006).

Conclusión

La Psicología de la Ciencia va acumulando datos para orientar en la selección de posibles buenos investigadores a los científicos interesados: véanse, por ejemplo, Feist (2006) o Anaya-Reig (2018). Pero, ciertamente, a nivel práctico, estos conocimientos serán poco útiles si aquellos que más partido pueden sacarles siguen anclados a creencias que pueden ser erróneas.

Por tanto, resulta de interés seguir explorando las teorías implícitas de los investigadores en sus diferentes disciplinas. Su explicitación es imprescindible como paso inicial, tanto para la Psicología de la Ciencia si pretende que ese conocimiento cierto acumulado tenga repercusiones reales en los laboratorios y otros centros de investigación, como para aquellos científicos que deseen adquirir un conocimiento riguroso sobre las cualidades propias del buen investigador.

Todo ello teniendo muy presente que la naturaleza implícita de las creencias personales dificulta el proceso, porque, como se ha señalado, supone que el sujeto entrevistado desconoce a menudo que las posee (Pozo, Rey, Sanz and Limón 1992), y que su modificación requiere, además, un cambio de naturaleza conceptual o representacional (Pozo, Scheuer, Mateos Sanz and Pérez Echeverría 2006).

Por último, tal vez no sea razonable promover entre todos los universitarios de manera general ciertas habilidades, sin tener en consideración si reúnen determinados atributos. Por obvio que sea, hay que recordar que los recursos educativos, como los de cualquier tipo, son necesariamente limitados. Si, además, sabemos que solo un 2% de las personas se dedican a la ciencia (Feist 2006), quizás valga más la pena poner el esfuerzo en mejorar la capacidad de identificar con tino a aquellos que potencialmente son válidos. Otra cosa sería como tratar de entrenar para cantar ópera a una persona que no tiene cualidades vocales en absoluto.

Contact details: nuria.anaya@urjc.es

References

Anaya-Reig, N. 2018. “Cajal: Key Psychological Factors in the Self-Construction of a Genius.” Social Epistemology. doi: 10.1080/02691728.2018.1522555.

Anaya-Reig, N., and M. Romo. 2017. “Cajal, Psychologist of Science.” The Spanish Journal of Psychology 20: e69. doi: 10.1017/sjp.2017.71.

Feist, G. J. 2006. The Psychology of Science and the Origins of the Scientific Mind. New Haven, CT: Yale University Press.

Feist, G. J., and M. E. Gorman. 1998. “The Psychology of Science: Review and Integration of a Nascent Discipline.” Review of General Psychology 2 (1): 3–47. doi: 10.1037/1089-2680.2.1.3.

Hayes, J. R. 1981. The Complete Problem Solver. Philadelphia, PA: Franklin Institute Press.

Moya, M. 1996. “Percepción social y personas.” In Psicología social, 93-119. Madrid, Spain: McGraw-Hill.

Pozo Municio, J. I. 1996. Aprendices y maestros. La nueva cultura del aprendizaje. Madrid, Spain: Alianza.

Pozo, J. I., M. P. Rey, A. Sanz, and M. Limón. 1992. “Las ideas de los alumnos sobre la ciencia como teorías implícitas.” Infancia y Aprendizaje 57: 3-22.

Pozo, J. I., N. Scheuer, M. M. Mateos Sanz, and M. P. Pérez Echeverría. 2006. “Las teorías implícitas sobre el aprendizaje y la enseñanza.” In Nuevas formas de pensar la enseñanza y el aprendizaje: las concepciones de profesores y alumnos, 95-134. Barcelona, Spain: Graó.

Ramón y Cajal, S. 1920. Reglas y consejos sobre investigación científica. (Los tónicos de la voluntad). 5th ed. Madrid, Spain: Nicolás Moya.

Ramón y Cajal, S. 1999. Advice for a Young Investigator, translated by N. Swanson and L. W. Swanson. Cambridge, MA: The MIT Press.

Romo, M. 1997. Psicología de la creatividad. Barcelona, Spain: Paidós.

Romo, M. 2008. Epistemología y Psicología. Madrid, Spain: Pirámide.

Runco, M. 2011. “Implicit theories.” In Encyclopaedia of Creativity, edited by M. Runco and S. R. Pritzker, 644-646. 2nd ed. Elsevier.

Simonton, D. K. 2006. “Creative Genius, Knowledge, and Reason: The Lives and Works of Eminent Creators.” In Creativity and Reason in Cognitive Development, edited by J. C. Kaufman and J. Baer, 43-59. New York, NY: Cambridge University Press.

Wegner, D. M., and R. R. Vallacher. 1977. Implicit Psychology: An Introduction to Social Cognition. New York, NY: Oxford University Press.

Wylie, C. D. 2018. “‘I Just Love Research’: Beliefs About What Makes Researchers Successful.” Social Epistemology 32 (4): 262-271, doi: 10.1080/02691728.2018.1458349.

Author Information: Nuria Anaya-Reig, Rey Juan Carlos University, nuria.anaya@urjc.es.

Anaya-Reig, Nuria. “Implicit Theories Influencing Researchers: A Field for the Psychology of Science to Explore.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 25-30.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42K

From the 2014 White House Science Fair.
Image by NASA HQ Photo via Flickr / Creative Commons


This essay is in reply to:

Wylie, C. D. 2018. “‘I Just Love Research’: Beliefs About What Makes Researchers Successful.” Social Epistemology 32 (4): 262-271, doi: 10.1080/02691728.2018.1458349.

What traits in a student do researchers believe characterize a good future scientist? To what degree do these beliefs influence the selection of candidates? These are fundamental questions that resonate in the work of Caitlin Donahue Wylie (2018). As part of a qualitative ethnographic study, interviews were conducted with two engineering professors in their capacity as principal investigators (PIs), as well as with students from each of their doctoral groups, most of them graduate students working as novice researchers. The total sample consisted of 27 people.

Results indicate that, among this class of researchers, interest, assertiveness, and enthusiasm for one’s own field of study are commonly regarded as key signs of a good future researcher. Moreover, the interviewees believe enthusiasm to be related to a desire to learn and a strong work ethic. Lastly, the study suggests that unintentional exclusion may occur during candidate selection because of biases unwittingly applied by the PIs, reflecting preferences for features belonging to majority groups (such as ethnicity, religion and gender), and it proposes some ideas that may help minimize such biases.

Implicit Theories Undergirding Research

Essentially, the work of Wylie (2018) demonstrates that experienced scientists base their selection process for new researchers on implicit theories. While this may at first appear to be a rather modest contribution, the core of Wylie’s research is substantial and of great relevance to the psychology of science for at least three reasons.

First, studying such matters offers a different angle from which to investigate and attempt to understand the scientific psyche: studying the psychology of scientists is one of the central areas of research in this subdiscipline (Feist 2006). Second, although the research question addresses a well-known issue in social psychology and the results of the study are thus quite predictable, the latter nevertheless constitute new data and are therefore valuable in their own right. Indeed, they enrich theoretical knowledge about implicit ideas given that, in science and scientific reasoning, it is essential to differentiate theories from evidence (Feist 2006).

Finally, the psychology of science, in its applied dimension, cannot turn a blind eye to the fact that if scientists’ implicit beliefs are mistaken, those beliefs may have negative repercussions for the population of current and future researchers (Wylie 2018).

In his role as psychologist of science (Anaya-Reig and Romo 2017), Ramón y Cajal mused upon this issue over a century ago. In “The Investigator as Teacher,” chapter IX of his work Reglas y consejos sobre investigación científica (1920), he noted:

¿Qué signos denuncian el talento creador y la vocación inquebrantable por la indagación científica?

[What signs identify creative talent and an irrevocable calling for scientific research?]

Problema grave, capitalísimo, sobre el cual han discurrido altos pensadores e insignes pedagogos, sin llegar a normas definitivas. La dificultad sube de punto considerando que no basta encontrar entendimientos perspicaces y aptos para las pesquisas de laboratorio sino conquistarlos definitivamente para el culto de la verdad original.

[This serious and fundamentally important question has been discussed at length by deep thinkers and noted teachers, without coming to any real conclusions. The problem is even more difficult when taking into account the fact that it is not enough to find clear-sighted and capable minds for laboratory research; they must also be genuine converts to the worship of original data.]

Los futuros sabios, blanco de nuestros desvelos educadores, ¿se encuentran por ventura entre los discípulos más serios y aplicados, acaparadores de premios y triunfadores en oposiciones?

[Are future scientists—the goal of our educational vigilance—found by chance among the most serious students who work diligently, those who win prizes and competitions?]

Algunas veces, sí, pero no siempre. Si la regla fuera infalible, fácil resultara la tarea del profesor, bastaríale dirigirse a los premios extraordinarios de la licenciatura y a los números primeros de las oposiciones a cátedras. Mas la realidad se complace a menudo en burlar previsiones y malograr esperanzas. (Ramón y Cajal 1920, 221-222)

[Sometimes, but not always. If the rule were infallible, the teacher’s work would be easy. He could simply focus his efforts on the outstanding prizewinners among the degree candidates, and on those at the top of the list in professional competitions. But reality often takes pleasure in laughing at predictions and in blasting hopes. (Ramón y Cajal 1999, 141)]

Returning to Implicit Theories

Let us briefly recall that naïve or implicit theories are stable and organized beliefs that people have formed intuitively, without the rigor of the scientific method; their content can be accessed only with great difficulty, given that people are unaware that they have them (hence the name). This not only makes such beliefs difficult to modify but also leads those who hold them to search for facts that confirm what they already believe or, in other words, to fall prey to confirmation bias (Romo 1997).

We identify and organize regularities in our environment thanks to implicit or incidental learning, which is based on associative learning, because we need to adapt to the varying situations we face. We formulate naïve theories that help us comprehend, anticipate and deal with the disparate circumstances confronting us in the best way possible. Indeed, we are surrounded by such an overwhelming amount of information that formulating implicit theories, learning which things tend to appear together, is a very effective way of making the world more predictable and controllable.
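Purely as an illustration of the associative mechanism invoked here, one textbook formalization is a delta-rule update, in which an association between a cue and an outcome strengthens in proportion to the prediction error whenever the two co-occur. The sketch below is illustrative only; its parameters and values are invented, not drawn from the article or the studies it discusses.

```python
def delta_rule_update(weight, cue_present, outcome, lr=0.1):
    """Rescorla-Wagner-style update: when a cue and an outcome co-occur,
    the associative weight moves toward the outcome in proportion to the
    prediction error (outcome - weight)."""
    if not cue_present:
        return weight
    return weight + lr * (outcome - weight)

# A cue that reliably co-occurs with an outcome ends up strongly associated,
# which is one way to model "learning which things tend to appear together."
w = 0.0
for _ in range(50):
    w = delta_rule_update(w, cue_present=True, outcome=1.0)
print(round(w, 2))  # approaches 1.0 (prints 0.99)
```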

Naturally, human behavior is no exception to this rule. In fact, the content of implicit theories is fundamentally of a social nature (Wegner and Vallacher 1977), as is revealed by the fact that a good portion of such theories take the form of so-called Implicit Personality Theories (IPT), a category to which the beliefs of the researchers under consideration here also belong.

IPTs are so called because their content deals essentially with personal qualities or personality traits. They are, by definition, idiosyncratic, even if a certain degree of agreement tends to exist among members of the same social group.

Understood broadly, IPTs can be defined as those beliefs that everyone has about human beings in general; for example, that man is by nature good, or just the opposite. Defined more precisely, IPTs refer to those beliefs that we have about the personal characteristics of specific types of people. For example, we frequently assume that a writer need be a cultured, sensitive and bohemian sort of person (Moya 1996).

It should also be noted that implicit theories, in contrast to scientific ones, are characterized by being incoherent and specific, by resting on a simple, linear notion of causality, by being composed of ideas that are usually only loosely interconnected, and by seeking only verification and utility. Still, this does not mean that such theories are necessarily mistaken or useless (Pozo, Rey, Sanz and Limón 1992). Although implicit theories have limited explanatory power, they do have descriptive and predictive capacities (Pozo Municio 1996).

Some Reflections on the Subject

Scientists being led by their intuitions…what is going on? Then again, why not? Why should researchers behave differently from other people when engaged in selection processes? They behave as we all routinely do in our daily lives with respect to all sorts of matters. For anyone, proceeding otherwise would be not only unprofitable in cognitive terms but also costly and exhausting.

All things considered, researchers, no matter how rigorously scientific they may be, are still people and, as such, intuitively seek out answers to problems which, though they decisively condition the results of their labor, are not in themselves the object of their work.

Moreover, we should not be surprised either when different researchers, whether novice or seasoned, share identical beliefs, especially if they work within the same field, since, as noted above, although implicit theories reveal themselves in opinions or personal expectations, part of their tacit content is shared by many people (Runco 2011).

The above leads one, in turn, to make further observations about the work of Wylie (2018). In the first place, since implicit theories are at work, rather than merely suggesting that researchers may be guiding their selections by a perceptual bias, we should affirm that they are. As has been noted, implicit theories operate with confirmation biases which, in fact, progressively reinforce their content.

Another question is what that bias relates to: Wylie (2018) suggests that it reflects a possible preference for features characteristic of the majority groups to which the PIs belong, a conclusion based on several studies showing that white, middle-class men predominate in the fields of science and engineering. This may contribute to a poor reception for students who do not fit those standards, and may even lead such students to give up because of the discomfort they feel in such environments.

This is certainly one possible interpretation; another is that the confirmation bias exhibited by these researchers might arise because they have observed such traits in people who have achieved excellence in their field, rather than from a preference for interacting with people who resemble them physically or culturally.

It is worth noting here again that implicit theories need not be mistaken or useless (Pozo, Rey, Sanz and Limón 1992). Indeed, this is certainly true for some of the beliefs held by this group of researchers. Aren’t scientists, especially the best among them, passionate about their work? Do they not dedicate many hours to it and put a great deal of effort into carrying it out? Are they not assertive? Research has conclusively shown (Romo 2008) that all creative scientists, without exception, exhibit high levels of intrinsic motivation when it comes to the work that they do.

Similarly, since Hayes (1981) we have known that it takes an average of ten years to master a discipline and achieve something notable within it. It has also been observed that researchers exhibit high levels of self-confidence and tend to be arrogant and aggressive. Indeed, it is known that scientists, as compared to non-scientists, are not only more assertive but also more domineering, more self-assured, more self-reliant and even more hostile (Feist 2006). Several studies, like that of Feist and Gorman (1998) for example, have concluded that there are differences in personality traits between scientists and non-scientists.

On the other hand, this does not mean that people’s implicit ideas are necessarily correct. In fact, they are often mistaken. A good example of this is one belief that guided those researchers studied by Wylie as they selected graduates according to their academic credentials. Although they claimed that grades were an insufficient indicator, they then went on to qualify that claim: “They believe students’ demonstrated willingness to learn is more important, though they also want students who are ‘bright’ and achieve some ‘academic success.’” (2018, 4).

However, the empirical evidence shows that neither high grades nor high scores on aptitude tests are reliable predictors of a successful scientific career (Feist 2006). The evidence also suggests that creative genius is not necessarily associated with academic performance. Indeed, many geniuses were mediocre students (Simonton 2006).

Conclusion

The psychology of science continues to amass data to help orient the selection of potentially good researchers for those scientists interested in recruiting them: see, for example Feist (2006) or Anaya-Reig (2018). At the practical level, however, this knowledge will be of little use if those who are best able to benefit from it continue to cling to beliefs that may be mistaken.

Therefore, it is of great interest to keep exploring the implicit theories held by researchers in different disciplines. Making them explicit is an essential first step, both for the psychology of science, if that discipline’s body of knowledge is to have practical repercussions in laboratories and other research centers, and for those scientists who wish to acquire rigorous knowledge about what inherent qualities make a good researcher, all while keeping in mind that the implicit nature of personal beliefs makes such a process difficult.

As noted above, interview subjects are often unaware that they possess such beliefs (Pozo, Rey, Sanz and Limón 1992). Moreover, modifying them requires a change of a conceptual or representational nature (Pozo, Scheuer, Mateos Sanz and Pérez Echeverría 2006).

Lastly, it may perhaps be unreasonable to promote certain skills among university students in general without considering the aptitudes necessary for acquiring them. Although it may be obvious, it should be remembered that educational resources, like those of all types, are necessarily limited. Since we know that only 2% of the population devotes itself to science (Feist 2006), it may very well be more worthwhile to work on improving our ability to target those students who have potential. Anything else would be like trying to train a person who has no vocal talent whatsoever to sing opera.

Contact details: nuria.anaya@urjc.es

References

Anaya-Reig, N. 2018. “Cajal: Key Psychological Factors in the Self-Construction of a Genius.” Social Epistemology. doi: 10.1080/02691728.2018.1522555.

Anaya-Reig, N., and M. Romo. 2017. “Cajal, Psychologist of Science.” The Spanish Journal of Psychology 20: e69. doi: 10.1017/sjp.2017.71.

Feist, G. J. 2006. The Psychology of Science and the Origins of the Scientific Mind. New Haven, CT: Yale University Press.

Feist, G. J., and M. E. Gorman. 1998. “The Psychology of Science: Review and Integration of a Nascent Discipline.” Review of General Psychology 2 (1): 3–47. doi: 10.1037/1089-2680.2.1.3.

Hayes, J. R. 1981. The Complete Problem Solver. Philadelphia, PA: Franklin Institute Press.

Moya, M. 1996. “Percepción social y personas.” In Psicología social, 93-119. Madrid, Spain: McGraw-Hill.

Pozo Municio, J. I. 1996. Aprendices y maestros. La nueva cultura del aprendizaje. Madrid, Spain: Alianza.

Pozo, J. I., M. P. Rey, A. Sanz, and M. Limón. 1992. “Las ideas de los alumnos sobre la ciencia como teorías implícitas.” Infancia y Aprendizaje 57: 3-22.

Pozo, J. I., N. Scheuer, M. M. Mateos Sanz, and M. P. Pérez Echeverría. 2006. “Las teorías implícitas sobre el aprendizaje y la enseñanza.” In Nuevas formas de pensar la enseñanza y el aprendizaje: las concepciones de profesores y alumnos, 95-134. Barcelona, Spain: Graó.

Ramón y Cajal, S. 1920. Reglas y consejos sobre investigación científica. (Los tónicos de la voluntad). 5th ed. Madrid, Spain: Nicolás Moya.

Ramón y Cajal, S. 1999. Advice for a Young Investigator, translated by N. Swanson and L. W. Swanson. Cambridge, MA: The MIT Press.

Romo, M. 1997. Psicología de la creatividad. Barcelona, Spain: Paidós.

Romo, M. 2008. Epistemología y Psicología. Madrid, Spain: Pirámide.

Runco, M. 2011. “Implicit theories.” In Encyclopaedia of Creativity, edited by M. Runco and S. R. Pritzker, 644-646. 2nd ed. Elsevier.

Simonton, D. K. 2006. “Creative Genius, Knowledge, and Reason: The Lives and Works of Eminent Creators.” In Creativity and Reason in Cognitive Development, edited by J. C. Kaufman and J. Baer, 43-59. New York, NY: Cambridge University Press.

Wegner, D. M., and R. R. Vallacher. 1977. Implicit Psychology: An Introduction to Social Cognition. New York, NY: Oxford University Press.

Wylie, C. D. 2018. “‘I Just Love Research’: Beliefs About What Makes Researchers Successful.” Social Epistemology 32 (4): 262-271, doi: 10.1080/02691728.2018.1458349.

Author Information: Kamili Posey, Kingsborough College, Kamili.Posey@kbcc.cuny.edu.

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-15.

Kamili Posey’s article was posted over two instalments. You can read the first here, but the pdf of the article includes the entire piece, and gives specific page references. Shortlink: https://wp.me/p1Bfg0-41k

Image by Rigoberto Garcia via Flickr / Creative Commons


In the previous piece, I outlined some concerns about philosophers, and particularly philosophers of social science, assuming the success of implicit interventions into implicit bias. Motivated by a pointed note by Jennifer Saul (2017), I aimed to briefly go through some of the models lauded as offering successful interventions and, in essence, “get out of the armchair.”

(IAT) Models and Egalitarian Goal Models

In this final piece, I go through the last two models, Glaser and Knowles’ (2007) and Blair et al.’s (2001) (IAT) models and Moskowitz and Li’s (2011) egalitarian goal model. I reiterate that this is not an exhaustive analysis of such models nor is it intended as a criticism of experiments pertaining to implicit bias. Mostly, I am concerned that the science is interesting but that the scientism – the application of tentative results to philosophical projects – is less so. It is from this point that I proceed.

Like Mendoza et al.’s (2010) implementation intentions, Glaser and Knowles’ (2007) implicit motivation to control prejudice (IMCP) aims to capture implicit motivations that are capable of inhibiting automatic stereotype activation. Glaser and Knowles measure (IMCP) in terms of an implicit negative attitude toward prejudice, or (NAP), and an implicit belief that oneself is prejudiced, or (BOP). This is done by retooling the (IAT) to fit both (NAP) and (BOP): “To measure NAP we constructed an IAT that pairs the categories ‘prejudice’ and ‘tolerance’ with the categories ‘bad’ and ‘good.’ BOP was assessed with an IAT pairing ‘prejudiced’ and ‘tolerant’ with ‘me’ and ‘not me.’”[1]
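For readers unfamiliar with how an (IAT) of this kind is scored: response latencies from the two pairing blocks are typically converted into a standardized difference, the D score. The sketch below is a deliberately simplified version with invented latencies; the published scoring procedure (Greenwald and colleagues’ improved algorithm) adds trial trimming and error penalties that are omitted here.

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT D score: difference in mean latency between the
    'incompatible' and 'compatible' pairing blocks, divided by the
    standard deviation of all trials pooled across both blocks."""
    diff = statistics.mean(incompatible_rts) - statistics.mean(compatible_rts)
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return diff / pooled_sd

# Hypothetical latencies in milliseconds. For a BOP-style IAT, slower
# responses when 'prejudiced' shares a key with 'me' would raise the score.
compatible = [612, 588, 640, 575, 603]
incompatible = [701, 688, 725, 690, 712]
print(round(iat_d_score(compatible, incompatible), 2))  # prints 1.78
```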

Study participants were then administered the Shooter Task, the (IMCP) measures, and the Race Prejudice (IAT) and Race-Weapons Stereotype (RWS) tests in a fixed order. Glaser and Knowles predicted that (IMCP), operating as an implicit goal, meant that those high in (IMCP) “should be able to short-circuit the effect of implicit anti-Black stereotypes on automatic anti-Black behavior.”[2] The results seemed to suggest that this was the case. Glaser and Knowles found that study participants who viewed prejudice as particularly bad “[showed] no relationship between implicit stereotypes and spontaneous behavior.”[3]

There are a few considerations missing from the evaluation of the study results. First, with regard to the Shooter Task, Glaser and Knowles (2007) found that “the interaction of target race by object type, reflecting the Shooter Bias, was not statistically significant.”[4] That is, the strong relationship that Correll et al. (2002) found between target race and study participants’ (high) likelihood of “shooting” black targets did not appear in the present study. Additionally, they note that they “eliminated time pressure” from the task itself. Although they did not suggest that this affected the usefulness of the measure of Shooter Bias, it is difficult to imagine that it did not. To this, they footnote the following caveat:

Variance in the degree and direction of the stereotype endorsement points to one reason for our failure to replicate Correll et. al’s (2002) typically robust Shooter Bias effect. That is, our sample appears to have held stereotypes linking Blacks and weapons/aggression/danger to a lesser extent than did Correll and colleagues’ participants. In Correll et al. (2002, 2003), participants one SD below the mean on the stereotype measure reported an anti-Black stereotype, whereas similarly low scorers on our RWS IAT evidenced a stronger association between Whites and weapons. Further, the adaptation of the Shooter Task reported here may have been less sensitive than the procedure developed by Correll and colleagues. In the service of shortening and simplifying the task, we used fewer trials, eliminated time pressure and rewards for speed and accuracy, and presented only one background per trial.[5]

Glaser and Knowles claimed that the interaction of the (RWS) with the Shooter Task results proved “significant”; however, if the Shooter Bias failed to materialize (in the standard Correll et al. way) with study participants, it is difficult to see how the (RWS) was measuring anything except itself, generally speaking. This is further complicated by the fact that the interaction between the Shooter Bias and the (RWS) revealed “a mild reverse stereotype associating Whites with weapons (d = -0.15) and a strong stereotype associating Blacks with weapons (d = 0.83), respectively.”[6]
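For readers tracking the effect sizes quoted above: Cohen’s d expresses a difference in standard-deviation units, with |d| near 0.2 conventionally read as small and near 0.8 as large, which is what makes d = 0.83 a strong association and d = -0.15 a mild reversed one. The two-sample sketch below uses invented numbers purely to show the arithmetic; the study’s own d values are standardized association strengths derived from IAT scores and computed somewhat differently.

```python
import statistics

def cohens_d(group_a, group_b):
    """Two-sample Cohen's d: difference in means divided by the
    pooled standard deviation of the two groups."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * statistics.variance(group_a)
                  + (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2)
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_var ** 0.5

# Invented scores whose means differ by roughly 0.8 pooled SDs.
a = [1.8, 0.2, 1.0, 1.4, 0.6]
b = [1.3, -0.3, 0.5, 0.9, 0.1]
print(round(cohens_d(a, b), 2))  # prints 0.79, a large effect by convention
```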

Recall that Glaser and Knowles (2007) aimed to show that participants high in (IMCP) would be able to inhibit implicit anti-black stereotypes and thus inhibit automatic anti-black behaviors. Using (NAP) and (BOP) as proxies for implicit control, participants high in (NAP) and moderate in (BOP) – as those with moderate (BOP) will be motivated to avoid bias – should show the weakest association between (RWS) and Shooter Bias. Instead, the lowest levels of Shooter Bias were seen in “low NAP, high BOP, and low RWS” study participants: those who do not disapprove of prejudice, who would describe themselves as prejudiced, and who also showed the lowest levels of (RWS).[7]

They noted that neither “NAP nor BOP alone was significantly related to the Shooter Bias,” but “the influence of RWS on Shooter Bias remained significant.”[8] In fact, greater bias was actually found with higher (NAP) and (BOP) levels.[9] This bias seemed to map onto the initial Shooter Task results. It is most likely that (RWS) was the most important measure in this study for assessing implicit bias, not, as the study claimed, for assessing implicit motivation to control prejudice.

What Kind of Bias?

It is also not clear that the (RWS) was not capturing explicit bias rather than implicit bias in this study. By the time study participants were given the (RWS), automatic stereotype activation may have been inhibited simply by virtue of study participants’ involvement in the Shooter Task and the (IAT) assessments regarding race-related prejudice. That is, race-sensitivity had been brought to consciousness by the sequencing of the test process.

Although we cannot get into the heads of the study participants, this counter-explanation seems a compelling possibility: namely, that the sequential tasks involved in the study captured study participants’ ability to increase focus and conscious attention to the race-related (IAT) test. Additionally, it is possible that some study participants could both cue and follow their own conscious internal commands: “If I see a black face, I won’t judge!” Consider that this is exactly how implementation intentions work.

Consider that this is also how Armageddon chess and other speed strategy games work. In their (2008) follow-up study on (IMCP) and cognitive depletion, Park et al. retreat somewhat from the initial claims about the implicit nature of (IMCP):

We cannot state for certain that our measure of IMCP reflects a purely nonconscious construct, nor that differential speed to “shoot” Black armed men vs. White armed men in a computer simulation reflects purely automatic processes. Most likely, the underlying stereotypes, goals, and behavioral responses represent a blend of conscious and nonconscious influences…Based on the results of the present study and those of Glaser and Knowles (2008), it would be premature to conclude that IMCP is a purely and wholly automatic construct, meeting the “four horsemen” criteria (Bargh, 1990). Specifically, it is not yet clear whether high IMCP participants initiate control of prejudice without intention; whether implicit control of prejudice can itself be inhibited, if for some reason someone wanted to; nor whether IMCP-instigated control of spontaneous bias occurs without awareness.[10]

If the (IMCP) potentially measures low-level conscious attention, this makes the question of what implicit measurements actually measure in the context of sequential tasks all the more important. In the two final examples, Blair et al.’s (2001) study on the use of counterstereotype imagery and Moskowitz and Li’s (2011) study on the use of counterstereotype egalitarian goals, we are again confronted with the issue of sequencing. In the study by Moskowitz and Li, study participants were asked to write down an example of a time when “they failed to live up to the ideal specified by an egalitarian goal, and to do so by relaying an event relating to African American men.”[11]

They were then given a series of computerized LDTs (lexical decision tasks) and primes involving photographs of black and white faces and stereotypical and non-stereotypical attributes of black people (crime, lazy, stupid, nervous, indifferent, nosy). Over a series of four experiments, Moskowitz and Li found that when egalitarian goals were “accessible,” study participants were able to successfully generate stereotype inhibition. Blair et al. asked study participants to use counterstereotypical (CS) gender imagery over a series of five experiments, e.g., “Think of a strong, capable woman,” and then administered a series of implicit measures, including the (IAT).

Similar to Moskowitz and Li (2011), Blair et al. (2001) found that (CS) gender imagery was successful in reducing implicit gender stereotypes, leaving “little doubt that the CS mental imagery per se was responsible for diminishing implicit stereotypes.”[12] In both cases, the study participants were explicitly called upon to focus their attention on experiences and imagery pertaining to negative stereotypes before the implicit measures, i.e., tasks, were administered. Again, it is not clear that the implicit measures measured the supposed target.

In the case of Moskowitz and Li’s (2011) experiment, the study participants began by relating moments in their lives when they failed to live up to their goals. However, those goals can only be understood within a particular social and political framework in which holding negatively prejudicial beliefs about African-American men is often explicitly judged harshly, even if not implicitly so. Given this, we might assume that the study participants were compelled into a negative affective state. But does this matter? As suggested by the study by Monteith (1993), and the later study by Amodio et al. (2007), guilt can be a powerful tool.[13]

Questions of Guilt

If guilt was produced during the early stages of the experiment, it may have also participated in the inhibition of stereotype activation. Moskowitz and Li (2011) noted that “during targeted questioning in the debriefing, no participants expressed any conscious intent to inhibit stereotypes on the task, nor saw any of the tasks performed during the computerized portion of the experiment as related to the egalitarian goals they had undermined earlier in the session.”[14]

But guilt does not have to be conscious for it to produce effects. The guilt produced by recalling a moment of negative bias could be part and parcel of a larger feeling of moral failure. Moskowitz and Li needed to adequately disambiguate competing implicit motivations for stereotype inhibition before arriving at a definitive conclusion. This, I think, is a limitation of the study.

However, the same case could be made for (CS) imagery. Blair et al. (2001) noted that it is, in fact, possible that they too missed competing motivations and competing explanations for stereotype inhibition. In particular, they suggested that by emphasizing counterstereotyping the researchers “may have communicated the importance of avoiding stereotypes and increased their motivation to do so.”[15] Still, the researchers dismissed the possibility that this would lead to better (faster, more accurate) performance on the (IAT); but that is merely asserting that the (IAT) must measure exactly what the (IAT) claims that it does. Fast, accurate, and conscious measures are excluded from that claim. Complicated internal motivations are excluded from that claim.

But on what grounds? Consider Fiedler et al.’s (2006) argument that the (IAT) is susceptible to faking and strategic processing, or Brendl et al.’s (2001) argument that it is not possible to infer a single cause from (IAT) results, or Fazio and Olson’s (2003) claim that “the IAT has little to do with what is automatically activated in response to a given stimulus.”[16]

These studies call into question the claim that implicit measures like the (IAT) can measure implicit bias in the clear, problem-free manner that is often suggested in the literature. Implicit interventions into implicit bias that utilize the (IAT) are difficult to support for this reason. Implicit interventions that utilize sequential (IAT) tasks are also difficult to support for this reason. Of course, this is also a live debate, and the problems I have discussed here are far from the only ones that plague this type of research.[17]
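One of the problems raised in that debate (see the Greenwald, Banaji, and Nosek passage quoted in note 17) can be made concrete: with modest test-retest reliability, an individual’s observed score is a noisy estimate of his or her underlying score, so person-level classification misfires often even while large-sample averages remain stable. The toy simulation below uses entirely invented parameters and is meant only to illustrate that mechanism, not to reproduce any published analysis.

```python
import random

def simulate(reliability=0.55, true_mean=0.2, n_people=10000, group_size=500):
    """Toy model: each person has a 'true' score (variance 1.0) plus
    measurement noise sized so that var(true) / var(observed) equals
    the stated reliability. Returns the person-level misclassification
    rate (observed sign disagrees with true sign) and a group mean."""
    noise_sd = ((1 - reliability) / reliability) ** 0.5
    true = [random.gauss(true_mean, 1.0) for _ in range(n_people)]
    observed = [t + random.gauss(0.0, noise_sd) for t in true]
    misclassified = sum((t > 0) != (o > 0) for t, o in zip(true, observed)) / n_people
    group_mean = sum(observed[:group_size]) / group_size
    return misclassified, group_mean

random.seed(1)
rate, mean = simulate()
print(rate, round(mean, 2))  # many individual misclassifications; group mean near 0.2
```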

That said, when it comes to this research we are too often left wondering whether the measure itself is measuring the right thing. Are we capturing implicit bias or some other socially generated phenomenon? Do the measured changes we see in study results reflect the validity of the instrument or the cognitive maneuverings of study participants? These are all critical questions that need sussing out. In the meantime, the target conclusion, that implicit interventions will lead to reductions in real-world discrimination, moves further out of reach.[18] We find evidence of this conclusion in Forscher et al.’s (2018) meta-analysis of 492 implicit interventions:

We found little evidence that changes in implicit measures translated into changes in explicit measures and behavior, and we observed limitations in the evidence base for implicit malleability and change. These results produce a challenge for practitioners who seek to address problems that are presumed to be caused by automatically retrieved associations, as there was little evidence showing that change in implicit measures will result in changes for explicit measures or behavior…Our results suggest that current interventions that attempt to change implicit measures will not consistently change behavior in these domains. These results also produce a challenge for researchers who seek to understand the nature of human cognition because they raise new questions about the causal role of automatically retrieved associations…To better understand what the results mean, future research should innovate with more reliable and valid implicit, explicit, and behavioral tasks, intensive manipulations, longitudinal measurement of outcomes, heterogeneous samples, and diverse topics of study.[19]
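For readers unfamiliar with how results from hundreds of studies are aggregated, the core move of a meta-analysis is inverse-variance weighting: studies with more precise estimates count for more. The fixed-effect sketch below uses hypothetical effects and variances; Forscher et al.’s actual analysis is multilevel and far more sophisticated, so this shows only the basic aggregation step.

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted pooled effect and its standard error,
    the standard fixed-effect meta-analytic estimator."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical per-study effects of an intervention on an implicit
# measure, with their sampling variances.
effects = [0.30, 0.12, 0.25, -0.05]
variances = [0.02, 0.01, 0.03, 0.015]
pooled, se = fixed_effect_meta(effects, variances)
print(round(pooled, 3), round(se, 3))  # prints 0.128 0.063
```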

Finally, what I take to be behind Alcoff’s (2010) critical question at the beginning of this piece is a kind of skepticism about how individuals can successfully tackle implicit bias through either explicit or implicit practices without the support of the social spaces, communities, and institutions that give shape to our social lives. Implicit bias is related to the culture one is in and the stereotypes it produces. So instead of insisting on changing people to reduce stereotyping, what if we insisted on changing the culture?

As Alcoff notes: “We must be willing to explore more mechanisms for redress, such as extensive educational reform, more serious projects of affirmative action, and curricular mandates that would help to correct the identity prejudices built up out of faulty narratives of history.”[20] This is an important point. It is a point that philosophers who work on implicit bias would do well to take seriously.

Science may not give us the way out of racism, sexism, and gender discrimination. At the moment, it may only give us tools for seeing ourselves a bit more clearly. Further claims about implicit interventions appear as willful scientism. They reinforce the belief that science can cure all of our social and political ills. But this is magical thinking.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, p. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

[2] Glaser, Jack and Knowles, Eric D. (2007), p. 167.

[3] Glaser, Jack and Knowles, Eric D. (2007), p. 170.

[4] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[5] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[6] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[7] Glaser, Jack and Knowles, Eric D. (2007), p. 169. Of this “rogue” group, Glaser and Knowles note: “This group had, on average, a negative RWS (i.e., rather than just a low bias toward Blacks, they tended to associate Whites more than Blacks with weapons; see footnote 4). If these reversed stereotypes are also uninhibited, they should yield reversed Shooter Bias, as observed here” (169).

[8] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[9] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[10] Sang Hee Park, Jack Glaser, and Eric D. Knowles. (2008). “Implicit Motivation to Control Prejudice Moderates the Effect of Cognitive Depletion on Unintended Discrimination,” in Social Cognition, Vol. 26, No. 4, p. 416.

[11] Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

[12] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

[13] Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30

[14] Moskowitz, Gordon and Li, Peizhong (2011), p. 108.

[15] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001), p. 838.

[16] Fiedler, Klaus, Messner, Claude, Bluemke, Matthias. (2006). “Unresolved problems with the ‘I’, the ‘A’, and the ‘T’: A logical and Psychometric Critique of the Implicit Association Test (IAT),” in European Review of Social Psychology, 12, pp. 74-147. Brendl, C. M., Markman, A. B., & Messner, C. (2001). “How Do Indirect Measures of Evaluation Work? Evaluating the Inference of Prejudice in the Implicit Association Test,” in Journal of Personality and Social Psychology, 81(5), pp. 760-773. Fazio, R. H., and Olson, M. A. (2003). “Implicit Measures in Social Cognition Research: Their Meaning and Uses,” in Annual Review of Psychology 54, pp. 297-327.

[17] There is significant debate over the issue of whether the implicit bias that (IAT) tests measure translate into real-world discriminatory behavior. This is a complex and compelling issue. It is also an issue that could render moot the (IAT) as an implicit measure of anything full stop. Anthony G. Greenwald, Mahzarin R. Banaji, and Brian A. Nosek (2015) write: “IAT measures have two properties that render them problematic to use to classify persons as likely to engage in discrimination. Those two properties are modest test–retest reliability (for the IAT, typically between r = .5 and r = .6; cf., Nosek et al., 2007) and small to moderate predictive validity effect sizes. Therefore, attempts to diagnostically use such measures for individuals risk undesirably high rates of erroneous classifications. These problems of limited test-retest reliability and small effect sizes are maximal when the sample consists of a single person (i.e., for individual diagnostic use), but they diminish substantially as sample size increases. Therefore, limited reliability and small to moderate effect sizes are not problematic in diagnosing system-level discrimination, for which analyses often involve large samples” (557). However, Oswald et al. (2013) argue that “IAT scores correlated strongly with measures of brain activity but relatively weakly with all other criterion measures in the race domain and weakly with all criterion measures in the ethnicity domain. IATs, whether they were designed to tap into implicit prejudice or implicit stereotypes, were typically poor predictors of the types of behavior, judgments, or decisions that have been studied as instances of discrimination, regardless of how subtle, spontaneous, controlled, or deliberate they were. Explicit measures of bias were also, on average, weak predictors of criteria in the studies covered by this meta-analysis, but explicit measures performed no worse than, and sometimes better than, the IATs for predictions of policy preferences, interpersonal behavior, person perceptions, reaction times, and microbehavior. Only for brain activity were correlations higher for IATs than for explicit measures…but few studies examined prediction of brain activity using explicit measures. Any distinction between the IATs and explicit measures is a distinction that makes little difference, because both of these means of measuring attitudes resulted in poor prediction of racial and ethnic discrimination” (182-183). For further details about this debate, see: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192 and Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

[18] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

[19] Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

[20] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.
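A brief aside on the statistics at issue in note [17]: the following is a minimal simulation of my own, not code from Greenwald et al. (2015) or Oswald et al. (2013); the correlation of r = .2, the sample size, and all variable names are assumptions chosen purely for illustration. It sketches the shape of Greenwald et al.’s reply: a measure that correlates only weakly with behavior is a poor classifier of any given individual, yet the same weak correlation yields a stable difference at the level of large groups.

import numpy as np

rng = np.random.default_rng(0)
r, n = 0.2, 1_000_000  # assumed correlation and sample size, for illustration only

iat = rng.standard_normal(n)  # standardized, IAT-like scores (hypothetical)
behavior = r * iat + np.sqrt(1 - r**2) * rng.standard_normal(n)  # outcome correlated with iat at exactly r

# Individual-level use: predict "discriminates" (behavior > 0) from iat > 0.
accuracy = np.mean((iat > 0) == (behavior > 0))
print(f"individual classification accuracy: {accuracy:.2f}")  # ~0.56, barely above chance

# Aggregate use: mean behavioral difference between high- and low-scoring halves.
gap = behavior[iat > 0].mean() - behavior[iat <= 0].mean()
print(f"group mean difference: {gap:.2f} SD")  # ~0.32 SD, a stable population-level effect

Whether that aggregate signal vindicates the (IAT) as a measure of anything at the individual level is, of course, exactly what Oswald et al. dispute.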

Author Information: Kamili Posey, Kingsborough College, Kamili.Posey@kbcc.cuny.edu.

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-16.

Kamili Posey’s article will be posted over two instalments. The pdf of the article gives specific page references, and includes the entire essay. Shortlink: https://wp.me/p1Bfg0-41m

Image by Walt Stoneburner via Flickr / Creative Commons

 

If you consider the recent philosophical literature on implicit bias research, then you would be forgiven for thinking that the problem of successful interventions into implicit bias falls into the category of things that are resolved. If you consider the recent social psychological literature on interventions into implicit bias, then you would come away with a similar impression. The claim is that implicit bias is epistemically harmful because we profess to believe one thing while our implicit attitudes tell a different story.

Strategy Models and Discrepancy Models

Implicit bias is socially harmful because it maps onto our real-world discriminatory practices, e.g., workplace discrimination, health disparities, racist police shootings, and identity-prejudicial public policies. Consider the results of Greenwald et al.’s (1998) Implicit Association Test. Consider also the results of Correll et al.’s (2002) “Shooter Bias.” If cognitive interventions are possible, and specifically implicit cognitive interventions, then they can help knowers implicitly manage automatic stereotype activation. Do these interventions lead to real-world reductions of bias?

Linda Alcoff (2010) notes that it is difficult to see how implicit, nonvolitional biases (e.g., those at the root of social and epistemic ills like race-based police shootings) can be remedied by explicit epistemic practices.[1] I would follow this by noting that it is equally difficult to see how nonvolitional biases can be remedied by implicit epistemic practices as well.

Jennifer Saul (2017) responds to Alcoff’s (2010) query by pointing to social psychological experiments conducted by Margo Monteith (1993), Jack Glaser and Eric D. Knowles (2007), Gordon B. Moskowitz and Peizhong Li (2011), Saaid A. Mendoza et al. (2010), Irene V. Blair et al. (2001), and Kerry Kawakami et al. (2005).[2] These studies suggest that implicit self-regulation of implicit bias is possible. Saul notes that philosophers with objections like Alcoff’s, and presumably like mine, ought “not just to reflect upon the problem from the armchair – at the very least, one should use one’s laptop to explore the internet for effective interventions.”[3]

But I think this recrimination rings rather hollow. How entitled are we to extrapolate from social psychological studies in the manner that Saul advocates? How entitled are we to assume the epistemic superiority of scientific research on racism, sexism, etc. over the phenomenological reporting of marginalized knowers? Lastly, how entitled are we to claims about the real-world applicability of these study results?[4] My guess is that the devil is in the details. My guess is also that social psychologists have not found the silver bullet for remedying implicit bias. But let’s follow Saul’s suggestion and not just reflect from the armchair.

A caveat: the following analysis is not intended to be an exhaustive or thorough refutation of what is ultimately a large body of social psychological literature. Instead, it is intended to cast a bit of doubt on how these models are used by philosophers as successful remedies for implicit bias, and on the idea that remedies for racist, sexist, homophobic, and transphobic discrimination are merely a training session or reflective exercise away.

This type of thinking devalues the very real experiences of those who live through racism, sexism, homophobia, and transphobia. It devalues how pervasive these experiences are in American society and the myriad ways in which the effects of discrimination seep into the marrow of marginalized bodies and marginalized communities. Worse still, it implies that marginalized knowers who claim, “You don’t understand my experiences!” are compelled to contend with the hegemonic role of “Science” that continues to speak over their own voices and about their own lives.[5] But again, back to the studies.

Four Methods of Remedy

I break up the above studies into four intuitive model types: (1) strategy models, (2) discrepancy models, (3) (IAT) models, and (4) egalitarian goal models. (I am not a social scientist, so the operative word here is “intuitive.”) Let’s first consider Kawakami et al. (2005) and Mendoza et al. (2010) as examples of strategy models. Kawakami et al. used Devine and Monteith’s (1993) notion of a negative stereotype as a “bad habit” that a knower needs to “kick” to model strategies that aid in the inhibition of automatic stereotype activation, or the inhibition of “increased cognitive accessibility of characteristics associated with a particular group.”[6]

In a previous study, Kawakami et al. (2000) presented research participants with photographs of black individuals and white individuals, each with stereotypical and non-stereotypical traits listed underneath, and asked them to respond “No” to stereotypical traits and “Yes” to non-stereotypical traits.[7] The study found that “participants who were extensively trained to negate racial stereotypes initially also demonstrated stereotype activation, this effect was eliminated by the extensive training. Furthermore, Kawakami et al. found that practice effects of this type lasted up to 24 h following the training.”[8]

Kawakami et al. (2005) used this training model to ground an experiment aimed at strategies for reducing stereotype activation in the preference of men over women for leadership roles in managerial positions. Despite the training, they found that there was “no difference between Nonstereotypic Association Training and No Training conditions…participants were indeed attempting to choose the best candidate overall, in these conditions there was an overall pattern of discrimination against women relative to men in recommended hiring for a managerial position (Glick, 1991; Rudman & Glick, 1999)” [emphasis mine].[9]

Substantive conclusions are difficult to draw from a single study, but one critical point is that learning occurred in the training while improved stereotype inhibition did not. What, exactly, are we to make of this result? Kawakami et al. (2005) claimed that “similar levels of bias in both the Training and No Training conditions implicates the influence of correction processes that limit the effectiveness of training.”[10] That is, they attributed the lack of influence of corrective processes to a variety of contributing factors that limited the effectiveness of the strategy itself.

Notice, however, that this does not implicate the strategy as a failed one. Most notably, Kawakami et al. found that “when people have the time and opportunity to control their responses [they] may be strongly shaped by personal values and temporary motivations, strategies aimed at changing the automatic activation of stereotypes will not [necessarily] result in reduced discrimination.”[11]

This suggests that although the strategies failed to reduce stereotype activation, they may still be helpful in limited circumstances “when impressions are more deliberative.”[12] One wonders under what conditions such impressions can be more deliberative. More than that, how useful are such limited-condition strategies for dealing with everyday life and everyday automatic stereotype activation?

Mendoza et al. (2010) tested the effectiveness of “implementation intentions” as a strategy to reduce the activation or expression of implicit stereotypes using the Shooter Task.[13] They tested both “distraction-inhibiting” implementation intentions and “response-facilitating” implementation intentions. Distraction-inhibiting intentions are strategies “designed to engage inhibitory control,” such as inhibiting the perception of distracting or biasing information, while “response-facilitating” intentions are strategies designed to enhance goal attainment by focusing on specific goal-directed actions.[14]

In the first study, Mendoza et al. asked participants to repeat the on-screen phrase, “If I see a person, then I will ignore his race!” in their heads and then type the phrase into the computer. This resulted in study participants having a reduced number of errors in the Shooter Task. But let’s come back to whether and how we might be able to extrapolate from these results. The second study compared a simple-goal strategy with an implementation intention strategy.

Study participants in the simple-goal strategy group were asked to follow the strategy, “I will always shoot a person I see with a gun!” and “I will never shoot a person I see with an object!” Study participants in the implementation intention strategy group were asked to use a conditional, if-then, strategy instead: “If I see a person with an object, then I will not shoot!” Mendoza et al. found that a response-facilitating implementation intention “enhanced controlled processing but did not affect automatic stereotyping processing,” while a distraction-inhibiting implementation intention “was associated with an increase in controlled processing and a decrease in automatic stereotyping processes.”[15]

How to Change Both Action and Thought

Notice that if the goal is to reduce automatic stereotype activation through reflexive control, only a distraction-inhibiting strategy achieved the desired effect. Notice also how the successful use of a distraction-inhibiting strategy may require a type of “non-messy” social environment unachievable outside of a laboratory experiment.[16] Or, as Mendoza et al. (2010) rightly note: “The current findings suggest that the quick interventions typically used in psychological experiments may be more effective in modulating behavioral responses or the temporary accessibility of stereotypes than in undoing highly edified knowledge structures.”[17]

The hope, of course, is that distraction-inhibiting strategies can help dominant knowers reduce automatic stereotype activation and response-facilitating strategies can help dominant knowers internalize controlled processing such that negative bias and stereotyping can be (one day) reflexively controlled as well. But these are only hopes. The only thing that we can rightly conclude from these results is that if we ask a dominant knower to focus on an internal command, they will do so. The result is that the activation of negative bias fails to occur.

This does not mean that the knower has reduced their internalized negative biases and prejudices or that they can continue to act on the internal commands in the future (in fact, subsequent studies reveal the effects are short-lived[18]). As Mendoza et al. also note: “In psychometric terms, these strategies are designed to enhance accuracy without necessarily affecting bias. That is, a person may still have a tendency to associate Black people with violence and thus be more likely to shoot unarmed Blacks than to shoot unarmed Whites.”[19] Despite hope for these strategies, there is very little to support their real-world applicability.

Hunting for Intuitive Hypocrisies

I would extend a similar critique to Margo Monteith’s (1993) discrepancy model. Monteith’s often-cited study uses two experiments to investigate prejudice-related discrepancies in the behaviors of low-prejudice (LP) and high-prejudice (HP) individuals and their ability to engage in self-regulated prejudice reduction. In the first experiment, (LP) and (HP) heterosexual study participants were asked to evaluate two law school applications, one for an implied gay applicant and one for an implied heterosexual applicant. Study participants “were led to believe that they had evaluated a gay law school applicant negatively because of his sexual orientation;” they were tricked into a “discrepancy-activated condition,” or a condition that was at odds with their believed prejudicial state.[20]

All of the study participants were then told that the applications were identical and that those who had rejected the gay applicant had done so because of the applicant’s sexual orientation. It is important to note that the applicants’ qualifications were not, in fact, identical. The gay applicant’s application materials were made to look worse than the heterosexual applicant’s materials. This was done to compel the rejection of the applicant.

Study participants were then provided a follow-up questionnaire and essay allegedly written by a professor who wanted to know (a) “why people often have difficulty avoiding negative responses toward gay men,” and (b) “how people can eliminate their negative responses toward gay men.”[21] Researchers asked study participants to record their reactions to the faculty essay and write down as much as they could remember about what they read. They were then told about the deception in the experiment and why such deception was incorporated into the study.

Monteith (1993) found that “low and high prejudiced subjects alike experienced discomfort after violating their personal standards for responding to a gay man, but only low prejudiced subjects experienced negative self-directed affect.”[22] Low prejudiced (LP) “discrepancy-activated subjects” also spent more time reading the faculty essay and “showed superior recall for the portion of the essay concerning why prejudice-related discrepancies arise.”[23]

The “discrepancy experience” generated negative self-directed affect, or guilt, for (LP) study participants with the hope that the guilt would (a) “motivate discrepancy reduction (e.g., Rokeach, 1973)” and (b) “serve to establish strong cues for punishment (cf. Gray, 1982).”[24] The idea here is that the experiment results point to the existence of a self-regulatory mechanism that can replace automatic stereotype activation with “belief-based responses;” however, “it is important to note that the initiation of self-regulatory mechanisms is dependent on recognizing and interpreting one’s responses as discrepant from one’s personal beliefs.”[25]

The discrepancy between what one is shown to believe and what one professes to believe (whether real or manufactured, as in the experiment) is aimed at getting knowers to engage in heightened self-focus due to negative self-directed affect. The hope behind Monteith’s (1993) study is that such self-directed affect would lead to a kind of corrective belief-making process that is both less prejudicial and future-directed.

But if it’s guilt that’s doing the psychological work in these cases, then it’s not clear that knowers wouldn’t find other means of assuaging such feelings. Why wouldn’t it be the case that generating negative self-directed affect would point a knower toward anything they deem necessary to restore a more positive sense of self? To this, Monteith made the following concession:

Steele (1988; Steele & Liu, 1983) contended that restoration of one’s self-image after a discrepancy experience may not entail discrepancy reduction if other opportunities for self-affirmation are available. For example, Steele (1988) suggested that a smoker who wants to quit might spend more time with his or her children to resolve the threat to the self-concept engendered by the psychological inconsistency created by smoking. Similarly, Tesser and Cornell (1991) found that different behaviors appeared to feed into a general “self-evaluation reservoir.” It follows that prejudice-related discrepancy experiences may not facilitate the self-regulation of prejudiced responses if other means to restoring one’s self-regard are available [emphasis mine].[26]

Additionally, she noted that even if individuals are committed to reducing or “unlearning” automatic stereotyping, they “may become frustrated and disengage from the self-regulatory cycle, abandoning their goal to eliminate prejudice-like responses.”[27] Cognitive exhaustion, or cognitive depletion, can occur after intergroup exchanges as well. This may make it even less likely that a knower will continue to feel guilty, and to use that guilt to inhibit the activation of negative stereotypes, when they find themselves struggling cognitively. Conversely, there is also the issue of a kind of lab-based, or experiment-based, cognitive priming. I pick up this idea, along with the final two models of implicit interventions, in the next part.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, pp. 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

[2] Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

[3] Saul, Jennifer (2017), p. 466.

[4] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

[5] I owe this critical point in its entirety to the work of Lacey Davidson and her presentation, “When Testimony Isn’t Enough: Implicit Bias Research as Epistemic Injustice” at the Feminist Epistemologies, Methodologies, Metaphysics, and Science Studies (FEMMSS) conference in Corvallis, Oregon in 2018. Davidson notes that the work of philosophers of race and critical race theorists often takes a backseat to the projects of philosophers of social science who engage with the science of racialized attitudes as opposed to the narratives and/or testimonies of those with lived experiences of racism. Davidson describes this as a type of epistemic injustice against philosophers of race and critical race theorists. She also notes that philosophers of race and critical race theorists are often people of color while the philosophers of social science are often white. This dimension of analysis is important but unexplored. Davidson’s work highlights how epistemic injustice operates within the academy to perpetuate systems of racism and oppression under the guise of “good science.” Her argument was inspired by the work of Jeanine Weekes Schroer on the problematic nature of current research on stereotype threat and implicit bias in “Giving Them Something They Can Feel: On the Strategy of Scientizing the Phenomenology of Race and Racism,” Knowledge Cultures 3(1), 2015.

[6] Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69. See also: Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

[7] Kawakami et al. (2005), p. 69. See also: Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

[8] Kawakami et al. (2005), p. 69.

[9] Kawakami et al. (2005), p. 73.

[10] Kawakami et al. (2005), p. 73.

[11] Kawakami et al. (2005), p. 74.

[12] Kawakami et al. (2005), p. 74.

[13] The Shooter Task refers to a computer simulation experiment where images of black and white males appear on a screen holding a gun or a non-gun object. Study participants are given a short response time and tasked with pressing a button, or “shooting” armed images versus unarmed images. Psychological studies have revealed a “shooter bias” in the tendency to shoot black, unarmed males more often than unarmed white males. See: Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

[14] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

[15] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[16] A “messy environment” presents additional challenges to studies like the one discussed here. As Kees Keizer, Siegwart Lindenberg, and Linda Steg (2008) claim in “The Spreading of Disorder,” people are more likely to violate social rules when they see that others are violating the rules as well. I can only imagine that this is applicable to epistemic rules as well. I mention this here to suggest that the “cleanliness” of the social environment of social psychological studies such as the one by Mendoza, Gollwitzer, and Amodio (2010) presents an additional obstacle in extrapolating the resulting behaviors of research participants to the public-at-large. Short of mass hypnosis, how could the strategies used in these experiments, strategies that are predicated on the noninterference of other destabilizing factors, be meaningfully applied to everyday life? There is a tendency in the philosophical literature on implicit bias and stereotype threat to outright ignore the limited applicability of much of this research in order to make critical claims about interventions into racist, sexist, homophobic, and transphobic behaviors. Philosophers would do well to recognize the complexity of these issues and to be more cautious about the enthusiastic endorsement of experimental results.

[17] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[18] Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[19] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[20] Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

[21] Monteith (1993), p. 474.

[22] Monteith (1993), p. 475.

[23] Monteith (1993), p. 477.

[24] Monteith (1993), p. 477.

[25] Monteith (1993), p. 477.

[26] Monteith (1993), p. 482.

[27] Monteith (1993), p. 483.

Author Information: Jensen Alex, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson, University of North Florida, jonathan.matheson@gmail.com

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “Conscientiousness and Other Problems: A Reply to Zagzebski.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 10-13.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3Sr

We’d first like to thank Dr. Zagzebski for engaging with our review of Epistemic Authority. We want to extend the dialogue by offering brief comments on several issues that she raised.

Conscientiousness

In our review we brought up the case of a grieving father who simply could not believe that his son had died despite conclusive evidence to the contrary. This case struck us as a problem case for Zagzebski’s account of rationality. For Zagzebski, rationality is a matter of conscientiousness, and conscientiousness is a matter of using your faculties as best you can to get to truth, where the best guide for a belief’s truth is its surviving conscientious reflection. The problem raised by the grieving father is that his belief that his son is still alive will continuously survive his conscientious reflection (since he is psychologically incapable of believing otherwise), yet it is clearly an irrational belief. In her response, Zagzebski makes the following claims:

(A) “To say he has reasons to believe his son is dead is just to say that a conscientiously self-reflective person would treat what he hears, reads, sees as indicators of the truth of his son’s death. So I say that a reason just is what a conscientiously self-reflective person sees as indicating the truth of some belief.” (57)

and,

(B) “a conscientious judgment can never go against the balance of one’s reasons since one’s reasons for p just are what one conscientiously judges indicate the truth of p.” (57)

These claims about the case lead to a dilemma. Either conscientiousness is to be understood subjectively or objectively, and either way we see some issues. First, if we understand conscientiousness subjectively, then the father seems to pass the test. We can suppose that he is doing the best he can to believe truths, but the psychological stability of this one belief causes the dissonance to be resolved in atypical ways. So, on a subjective construal of conscientiousness, he is conscientious and his belief about his son has survived conscientious reflection.

We can stipulate that the father is doing the best he can with what he has, yet his belief is irrational. Zagzebski’s (B) above seems to fit a subjective understanding of conscientiousness and leads to such a verdict. This is also how we read her in Epistemic Authority more generally. Second, if we understand conscientiousness objectively, then it follows that the father is not being conscientious. There are objectively better ways to resolve his psychic dissonance even if they are not psychologically open to him.

So, the objective understanding of conscientiousness does not give the verdict that the grieving father is rational. Zagzebski’s (A) above fits with an objective understanding of conscientiousness. The problem with the objective understanding of conscientiousness is that it is much harder to get a grasp on what it is. Doing the best you can with what you have has a clear meaning on the subjective level and gives a nice responsibilist account of conscientiousness. However, when we abstract away from the subject’s best efforts and the subject’s faculties, how should we understand conscientiousness? Is it to believe in accordance with what an ideal epistemic agent would conscientiously believe?

To us, while the objective understanding of conscientiousness avoids the problem, it comes with new problems, chief among them the need for a fleshed-out account of conscientiousness, so understood. In addition, the objective construal of conscientiousness does not appear to be suited for how Zagzebski deploys the concept in other areas of the book. For instance, regarding her treatment of peer disagreement, Zagzebski claims that each party should resolve the dissonance in a way that favors what they trust most when thinking conscientiously about the matter. The conscientiousness in play here sounds quite subjective, since rational resolution is simply a matter of sticking with what one trusts the most (even if an ideal rational agent wouldn’t place their trust in the same states, and even when presented with evidence to the contrary).

Reasons

Zagzebski distinguishes between 1st and 3rd person reasons, in part, to include things like emotions as reasons. For Zagzebski,

“1st person or deliberative reasons are states of mind that indicate to me that some belief is true. 3rd person, or theoretical reasons, are not states of mind, but are propositions that are logically or probabilistically connected to the truth of some proposition. (What we call evidence is typically in this category)” (57)

We are troubled by the way that Zagzebski employs this distinction. First, it is not clear how these two kinds of reasons are related. Does a subject have a 1st person reason for every 3rd person reason? After all, not every proposition that is logically or probabilistically connected to the truth of a proposition is part of an individual’s evidence or is one of their reasons. So, are the 3rd person reasons that one possesses reasons that one has access to by way of a 1st person reason? How could a 3rd person reason be a reason that I have if not by way of some subjective connection?

The relation between these two kinds of reasons deserves further development since Zagzebski puts this distinction to a great deal of work in the book. The second issue results from Zagzebski’s claim that “1st person and 3rd person reasons do not aggregate.” (57) If 1st and 3rd person reasons do not aggregate, then they do not combine to give a verdict as to what one has all-things-considered reason to believe. This poses a significant problem in cases where one’s 1st and 3rd person reasons point in different directions.

Zagzebski’s focus is on one’s 1st person reasons, but what then of one’s 3rd person reasons? 3rd person reasons are still reasons, yet if they do not aggregate with 1st person reasons, and 1st person reasons are determining what one should believe, it’s hard to see what work is left for 3rd person reasons. This is quite striking since these are the very reasons epistemologists have focused on for centuries.

Zagzebski’s embrace of 1st person reasons is ostensibly a movement to integrate the concepts of rationality and truth with resolutely human faculties (e.g. emotion, belief, and sense-perception) that have largely been ignored by the Western philosophical canon. Her critical attitude toward Western hyper-intellectualism and the rationalist worldview is understandable and, in certain ways, admirable. Perhaps the movement to engage emotion, belief, and sense-perception as epistemic features can be preserved, but only in the broader context of an evidence-centered epistemology. Further research should channel this movement toward an examination of how non-traditional epistemic faculties, as 1st person reasons, may be mapped to 3rd person reasons in a way that is cognizant of self-trust in personal experience; that is, toward an account of aggregation that is grounded fundamentally in evidence.

Biases

In the final part of her response, Zagzebski claims that the insight regarding prejudice within communities can bolster several of her points. She refers specifically to her argument that epistemic self-trust commits us to epistemic trust in others (and its expansion to communities), as well as her argument about communal epistemic egoism and the Rational Recognition Principle. She emphasizes the importance of communities regarding others as trustworthy and rational, which would lead to the recognition of biases within them—something that would not happen if communities relied on epistemic egoism.

However, biases have staying power beyond egoism. Even those who are interested in widening and deepening their perspective through engaging with others can nevertheless have deep biases that affect how they integrate this information. Although Zagzebski may be right in emphasizing the importance of communities acting in this way, it seems too idealistic to imply that such honest engagement would result in the recognition and correction of biases. While such engagement might highlight important disagreements, Zagzebski’s analysis of disagreement, where it is rational to stick with what you trust most, will far too often be an open invitation to maintain (if not reinforce) one’s own biases and prejudice.

It is also important to note that the worry concerning biases and prejudice cannot be resolved by emphasizing a move to communities, given that communities are subject to the same biases and prejudices as the individuals that compose them. Individuals, in trusting their own communities, will only reinforce the biases and prejudice of their members. So, this move can make things worse, even if sometimes it can make things better. Zagzebski’s expansion of self-trust to communities and her Rational Recognition Principle commit communities only to recognize others as (prima facie) trustworthy and rational by means of recognizing their own epistemic faculties in those others.

However, doing this does not do much in terms of the disclosure of biases, given that communities are not committed to trusting the beliefs of those they recognize as rational and trustworthy. Under Zagzebski’s view, it is possible for a community to recognize another as rational and trustworthy without necessarily trusting its beliefs—all without the need to succumb to communal epistemic egoism. Communities are, then, able to treat disagreement in a way that resolves dissonance for them: that is, by trusting their own beliefs more than those of other communities.

This is so even when they recognize those communities as being as rational and trustworthy as themselves because, under Zagzebski’s view, communities are justified in maintaining their beliefs over those of others not for egoistic reasons but because, having withstood conscientious self-reflection, their own beliefs are the ones they trust most. Resolving dissonance from disagreement in this way is clearly more detrimental than it is beneficial, especially in the case of biased individuals and communities, since it would lead them to keep their biases.

Although, as Zagzebski claims, attention to cases of prejudice within communities may lend more importance to her argument about the extension of self-trust to the communal level, it does not do much in terms of disclosing biases insofar as dissonance from disagreement is resolved in the way she proposes. Her proposal leads not to the disclosure of biases, as she implies, but to their reinforcement, given that biases—though plausibly unrecognized—are what communities and individuals would trust more in these cases.

Contact details: jonathan.matheson@gmail.com

References

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “A Review of Linda Zagzebski’s Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 29-34.

Zagzebski, Linda T. Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief. Oxford University Press, 2015.

Zagzebski, Linda T. “Trust in Others and Self-Trust: Regarding Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 56-59.

Author Information: Linda T. Zagzebski, University of Oklahoma, lzagzebski@ou.edu

Zagzebski, Linda T. “Trust in Others and Self-Trust: Regarding Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 56-59.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3MA

Image credit: Oxford University Press

Many thanks to Jensen Alex, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson (2017) for your extensive review of Epistemic Authority (2015). I have never seen a work by four philosophers working together, and I appreciate the collaboration it must have taken for you to produce it. I learned from it and hope that I can be a help to you and the readers of SERRC.

What is Inside and What is Outside

I would like to begin by summarizing the view of the mind I am using, which I hope will clarify the central place of conscientious self-reflection in my book, and the way that connects with reasons. I am using a modern view of the mind in which the mind has a boundary.[1] There is a difference between what is inside and what is outside. The mind has faculties that naturally aim at making our mental states fit the outside world in characteristic ways. Perceptual faculties, epistemic faculties, and emotional faculties all do that. They may do so successfully or they may not. So perceptions can be veridical or non-veridical; beliefs can be true or false; emotions can fit or not fit their intentional objects. This view of the mind leads to a generalization of the problem of epistemic circularity: we have no way of telling that any conscious state fits an external object without referring to other conscious states whose fittingness we can also question—hence, the need for self-trust. But we do have a way to detect that something is wrong with our connection to the outside world; that we have a mistaken perceptual state, a false belief, an inappropriate or exaggerated emotion, a skewed value, etc., by the experience of dissonance among our states.

For instance, a belief state might clash with a memory or a perceptual state or a belief acquired by testimony, or the cognitive component of an emotional state. Some dissonance is resolved immediately and without reflection, as when I give up my seeming memory of turning off the sprinkler system when I hear the sprinklers come on, but often dissonance cannot be resolved without awareness of the conflicting states and reflection upon them. Since the mind cannot get beyond its own boundary, all we can do is (a) trust that our faculties are generally reliable in the way they connect us to the outside world, and (b) attempt to use them the best way we can to reach their objects. That is what I call “conscientiousness.” I define epistemic conscientiousness as using our faculties in the best way we can to get the truth (48). Ultimately, our only test that any conscious state fits its object is that it survives conscientious reflection upon our total set of conscious states, now and in the future.

The authors raise the objection that my account is not sufficiently truth-centered because there is more than one way to resolve dissonance. That is, of course, true. The issue for a particular person is finding the most conscientious way to resolve the conflict, a question that sometimes has a plain answer and sometimes does not. The authors give the example of a father who cannot bring himself to believe that his son was killed in war even though he has been given a substantial body of evidence of his son’s death. It is possible for the man to restore harmony in his psyche by abandoning any states that conflict with his belief that his son is alive. Why do we think it is not rational for him to do that? Because we are told that his own faculties are giving him overwhelming evidence that his son is dead, and presumably his faculties will continue to do so forever. His son will never return. If he is to continue believing his son is alive, he has to continuously deny what he is told by sources he has always trusted, which means he has to continuously fabricate reasons why the sources are no longer trustworthy and are compounding their mistakes, and why new sources are also mistaken. If some of his reasons are sensory, he may even have to deny the evidence of his senses. That means that he is not epistemically conscientious as I have defined it because he is not trying to make his belief about his son true. Instead, he is trying to maintain the belief come what may. But we are told that it is psychologically impossible for him to recognize that his son has died. If that is true, then it is psychologically impossible for him to be epistemically conscientious, and hence rational. I would not deny that such a thing can happen, but in that case there is nothing more to be said.

The Nature of Reasons

This leads to my view on the nature of reasons. Why do we say that the father has many reasons to believe his son is dead, in fact, so many that if he is rational, he will give up the belief that his son still lives? We say that because we know what conscientious people do when given detailed and repeated testimony by sources whose trustworthiness has survived all of their past conscientious reflection and with no contrary evidence. To say he has reasons to believe his son is dead is just to say that a conscientiously self-reflective person would treat what he hears, reads, sees as indicators of the truth of his son’s death. So I say that a reason just is what a conscientiously self-reflective person sees as indicating the truth of some belief.

Self-trust is more basic than reasons because we do not have any reason to think that what we call reasons do in fact indicate the truth without self-trust (Chap. 2, sec. 5). Self-trust is a condition for what we call a reason to be in fact an indicator of truth. That means that contrary to what the authors maintain, a conscientious judgment can never go against the balance of one’s reasons since one’s reasons for p just are what one conscientiously judges indicate the truth of p. There can, however, be cases in which it is not clear which way the balance of reasons goes, and I discuss some of those cases in Chapter 10 on disagreement. Particularly difficult to judge are the cases in which some of the reasons are emotions.

The fact that emotions can be reasons brings up the distinction between 1st person and 3rd person reasons, which I introduce in Chapter 3, and discuss again in chapters 5, 6, and 10. (The authors do not mention this distinction). What I call 1st person or deliberative reasons are states of mind that indicate to me that some belief is true. 3rd person, or theoretical reasons, are not states of mind, but are propositions that are logically or probabilistically connected to the truth of some proposition. (What we call evidence is typically in this category). 3rd person reasons can be laid out on the table for anybody to consider. I say that 1st person and 3rd person reasons do not aggregate. They cannot be put together to give a verdict on the balance of reasons in a particular case independent of the way they are treated by the person who is conscientiously reflecting. The distinction between the two kinds of reasons is important for more than one purpose in the book. I use the distinction to show that 1st person reasons broaden the range of reasons considerably, including states of emotion, belief, perception, intuition, and memory.

A conscientiously self-reflective person can treat any of these states as indicators of the truth of some proposition. We think that we access 3rd person reasons because of our trust in ourselves when we are conscientious. And we do access 3rd person reasons provided that we are in fact trustworthy. This distinction is important in cases of reasonable disagreement because two parties to a disagreement share some of their 3rd person reasons, but they will never share their 1st person reasons. The fact that each party has certain 1st person reasons is a 3rd person reason, but that fact will never have the same function in deliberation as 1st person reasons, and we would not want it to do so.

The authors raise some questions about the way we treat our reasons when they are pre-empted by the belief or testimony of an authority. What happens to the reasons that are pre-empted? Using pre-emption in Raz’s sense, I say that they do not disappear and they are not ignored. They continue to be reasons for many beliefs.  Pre-emption applies to the limited scope of the authority’s authority. When I judge that A is more likely to get the truth whether p than I am, then A’s testimony whether p replaces my independent reasons for and against p. But my reasons for and against p are still beliefs, and they operate as reasons for many beliefs outside the scope of cases in which I judge that A is an authority. Pre-emption also does not assume that I control whether or not I pre-empt. It is rational to pre-empt when I reasonably judge that A satisfies the justification thesis. If I am unable to pre-empt, then I am unable to be rational. In general, I think that we have quite a bit of control over the cases in which we pre-empt, but the theory does not require it. As I said about the case of the father whose son died in a war, I do not assume that we can always be rational.[2]

On Our Biases

The authors also bring up the interesting problem of biases in ourselves or in our communities. A prejudiced person often does not notice her prejudices even when she is reflecting as carefully as she can, and her trust in her community can make the situation worse since the community can easily support her prejudices and might even be the source of them. This is an important insight, and I think it can bolster several points I make in the book. For one thing, cases of bias or prejudice make it all the more important that we have trust in others whose experience widens and deepens our own and helps us to identify unrecognized false beliefs and distorted feelings, and it makes particularly vivid the connection between emotion and belief and the way critical reflection on our emotions can change beliefs for the better.

My argument in Chapter 3 that epistemic self-trust commits us to epistemic trust in others, and the parallel argument in Chapter 4 that emotional self-trust commits us to emotional trust in others would be improved by attention to these cases. The problem of prejudice in communities can also support my argument in Chapter 10, section 4 that what I call communal epistemic egoism is false. I argue that communities are rationally required to think of other communities the same way individuals are rationally required to think of other individuals. Just as self-trust commits me to trust in others, communal self-trust commits a community to trust in other communities. Since biases are most commonly revealed by responses outside the community, it is a serious problem if communities succumb to communal egoism.

In the last section of Chapter 10 I propose some principles of rationality that are intended to show some consequences of the falsehood of communal egoism. One is the Rational Recognition Principle: If a community’s belief is rational, its rationality is recognizable, in principle, by rational persons in other communities. Once we admit that rationality is a quality we have as human beings, not as members of a particular community, we are forced to recognize that the way we are seen from the outside is prima facie trustworthy, and although we may conscientiously reject it, we need reasons to do so. It is our own conscientiousness that requires us to reflect on ourselves with external eyes. A very wide range of trust in others is entailed by self-trust. That is one of the main theses of the book.

References

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “A Review of Linda Zagzebski’s Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 29-34.

Zagzebski, Linda T. Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief. Oxford University Press, 2015.

[1] It is not mandatory to think of the mind this way, although it is the most common view in the modern period. I am working on the difference between this approach and the more open view of the mind that dominated before the modern era in my project, The Two Greatest Ideas, Soochow Lectures, 2018.

[2] Christoph Jaeger offers extended objections to my view of pre-emption and I reply in Episteme, April 2016. That issue also includes an interesting paper by Elizabeth Fricker on my book and my reply. See European Journal for Philosophy of Religion, Dec. 2014, which contains twelve papers on Epistemic Authority and my replies, including several that give special attention to pre-emption.