Archives For folk philosophy of science

Author Information: Nuria Anaya-Reig, Universidad Rey Juan Carlos, nuria.anaya@urjc.es

Anaya-Reig, Nuria. “Teorías Implícitas del Investigador: Un Campo por Explorar Desde la Psicología de la Ciencia.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 36-41.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-434

Image by Joan via Flickr / Creative Commons

 

This article is a Spanish-language version of Nuria Anaya-Reig’s earlier contribution, written by the author herself:

Anaya-Reig, Nuria. “Implicit Theories Influencing Researchers: A Field for the Psychology of Science to Explore.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 25-30.

¿Qué concepciones tienen los investigadores sobre las características que debe reunir un estudiante para ser considerado un potencial buen científico? ¿En qué medida influyen esas creencias en la selección de candidatos? Estas son las preguntas fundamentales que laten en el trabajo de Caitlin Donahue Wylie (2018). Mediante un estudio cualitativo de tipo etnográfico, se entrevista a dos profesores de ingeniería en calidad de investigadores principales (IP) y a estudiantes de sendos grupos de doctorado, la mayoría graduados, como investigadores noveles. En total, la muestra es de 27 personas.

Los resultados apuntan a que, entre este tipo de investigadores, es común creer que el interés, la asertividad y el entusiasmo por lo que se estudia son indicadores de un futuro buen investigador. Además, los entrevistados consideran que el entusiasmo está relacionado con el deseo de aprender y la ética en el trabajo. Finalmente, se sugiere una posible exclusión no intencional en la selección de investigadores a causa de la aplicación involuntaria de sesgos por parte del IP, relativa a la preferencia de características propias de grupos mayoritarios (tales como etnia, religión o sexo), y se proponen algunas ideas para ayudar a minimizarlos.

Teorías Implícitas en los Sótanos de la Investigación

En esencia, el trabajo de Wylie (2018) muestra que el proceso de selección de nuevos investigadores por parte de científicos experimentados se basa en teorías implícitas. Quizás a simple vista puede parecer una aportación modesta, pero la médula del trabajo es sustanciosa y no carece de interés para la Psicología de la Ciencia, al menos por tres razones.

Para empezar, porque estudiar tales cuestiones constituye otra forma de aproximarse a la comprensión de la psique científica desde un ángulo distinto, ya que estudiar la psicología del científico es uno de los ámbitos de estudio centrales de esta subdisciplina (Feist 2006). En segundo término, porque, aunque la pregunta de investigación se ocupa de una cuestión bien conocida por la Psicología social y, en consecuencia, aunque los resultados del estudio sean bastante previsibles, no dejan de ser nuevos datos y, por tanto, valiosos, que enriquecen el conocimiento teórico sobre las ideas implícitas: es básico en ciencia, y propio del razonamiento científico, diferenciar teorías de pruebas (Feist 2006).

En último lugar, porque la Psicología de la Ciencia, en su vertiente aplicada, no puede ignorar el hecho de que las creencias implícitas de los científicos, si son erróneas, pueden tener su consiguiente reflejo negativo en la población de investigadores actual y futura (Wylie 2018).

Ya Santiago Ramón y Cajal, en su faceta como psicólogo de la ciencia (Anaya-Reig and Romo 2017), reflexionaba sobre este asunto hace más de un siglo. En el capítulo IX, “El investigador como maestro”, de su obra Reglas y consejos sobre investigación científica (1920) apuntaba:

¿Qué signos denuncian el talento creador y la vocación inquebrantable por la indagación científica?

Problema grave, capitalísimo, sobre el cual han discurrido altos pensadores e insignes pedagogos, sin llegar a normas definitivas. La dificultad sube de punto considerando que no basta encontrar entendimientos perspicaces y aptos para las pesquisas de laboratorio sino conquistarlos definitivamente para el culto de la verdad original.

Los futuros sabios, blanco de nuestros desvelos educadores, ¿se encuentran por ventura entre los discípulos más serios y aplicados, acaparadores de premios y triunfadores en oposiciones?

Algunas veces, sí, pero no siempre. Si la regla fuera infalible, fácil resultara la tarea del profesor, bastaríale dirigirse a los premios extraordinarios de la licenciatura y a los números primeros de las oposiciones a cátedras. Mas la realidad se complace a menudo en burlar previsiones y malograr esperanzas. (Ramón y Cajal 1920, 221-222)

A Vueltas con las Teorías Implícitas

Recordemos brevemente que las teorías ingenuas o implícitas son creencias estables y organizadas que las personas hemos elaborado intuitivamente, sin el rigor del método científico. La mayoría de las veces se accede a su contenido con mucha dificultad, ya que la gente desconoce que las tiene, de ahí su nombre. Este hecho no solo dificulta una modificación del pensamiento, sino que lleva a buscar datos que confirmen lo que se piensa, es decir, a cometer sesgos confirmatorios (Romo 1997).

Las personas vamos identificando y organizando las regularidades del entorno gracias al aprendizaje implícito o incidental, basado en el aprendizaje asociativo, pues necesitamos adaptarnos a las distintas situaciones a las que nos enfrentamos. Elaboramos teorías ingenuas que nos ayuden a comprender, anticipar y manejar de la mejor manera posible las variadas circunstancias que nos rodean. Vivimos rodeados de una cantidad de información tan abrumadora, que elaborar teorías implícitas, aprendiendo qué elementos tienden a presentarse juntos, constituye una forma muy eficaz de hacer el mundo mucho más predecible y controlable, lo que, naturalmente, incluye el comportamiento humano.

De hecho, el contenido de las teorías implícitas es fundamentalmente de naturaleza social (Wegner and Vallacher 1977), como muestra el hecho de que buena parte de ellas pueden agruparse dentro de las llamadas Teorías Implícitas de la Personalidad (TIP), categoría a la que, por cierto, bien pueden adscribirse las creencias de los investigadores que nos ocupan.

Las TIP se llaman así porque su contenido versa básicamente sobre cualidades personales o rasgos de personalidad y son, por definición, idiosincráticas, si bien suele existir cierta coincidencia entre los miembros de un mismo grupo social.

Entendidas de modo amplio, pueden definirse como aquellas creencias que cada persona tiene sobre el ser humano en general; por ejemplo, pensar que el hombre es bueno por naturaleza o todo lo contrario. En su acepción específica, las TIP se refieren a las creencias que tenemos sobre las características personales que suelen presentarse juntas en gente concreta. Por ejemplo, con frecuencia presuponemos que un escritor tiene que ser una persona culta, sensible y bohemia (Moya 1996).

Conviene notar también que las teorías implícitas se caracterizan frente a las científicas por ser incoherentes y específicas, por basarse en una causalidad lineal y simple, por componerse de ideas habitualmente poco interconectadas, por buscar solo la verificación y la utilidad. Sin embargo, no tienen por qué ser necesariamente erróneas ni inservibles (Pozo, Rey, Sanz and Limón 1992). Aunque las teorías implícitas tengan una capacidad explicativa limitada, sí tienen capacidad descriptiva y predictiva (Pozo Municio 1996).

Algunas Reflexiones Sobre el Tema

Científicos guiándose por intuiciones, ¿cómo es posible? Pero, ¿por qué no? ¿Por qué los investigadores habrían de comportarse de un modo distinto al de otras personas en los procesos de selección? Se comportan como lo hacemos todos habitualmente en nuestra vida cotidiana con respecto a los más variados asuntos. Otra manera de proceder resultaría para cualquiera no solo poco rentable, en términos cognitivos, sino costoso y agotador.

A fin de cuentas, los investigadores, por muy científicos que sean, no dejan de ser personas y, como tales, buscan intuitivamente respuestas a problemas que, si bien condicionan de modo determinante los resultados de su labor, no son el objeto en sí mismo de su trabajo.

Por otra parte, tampoco debe sorprender que diferentes investigadores, poco o muy experimentados, compartan idénticas creencias, especialmente si pertenecen al mismo ámbito, pues, según se ha apuntado, aunque las teorías implícitas se manifiestan en opiniones o expectativas personales, parte de su contenido tácito es compartido por numerosas personas (Runco 2011).

Todo esto lleva, a su vez, a hacer algunas otras observaciones sobre el trabajo de Wylie (2018). En primer lugar, tratándose de teorías implícitas, más que sugerir que los investigadores pueden estar guiando su selección por un sesgo perceptivo, habría que afirmarlo. Como se ha apuntado, las teorías implícitas operan con sesgos confirmatorios que, de hecho, van robusteciendo sus contenidos.

Otra cuestión es preguntarse con qué guarda relación dicho sesgo: Wylie (2018) sugiere que está relacionado con una posible preferencia por las características propias de los grupos mayoritarios a los que pertenecen los IP, basándose en algunos estudios que han mostrado que en ciencia e ingeniería predominan hombres, de raza blanca y de clase media, lo que puede contribuir a que los estudiantes que no se ajustan a estos estándares sean mal recibidos, o incluso a que ellos mismos abandonen por no sentirse cómodos.

Sin duda, esa es una posible interpretación; pero otra es que el sesgo confirmatorio que muestran estos ingenieros podría deberse a que han observado esos rasgos en las personas que han llegado a ser buenas en su disciplina, en lugar de estar relacionado con su preferencia por interactuar con personas que se parecen física o culturalmente a ellos.

Es oportuno señalar aquí nuevamente que las teorías implícitas no tienen por qué ser necesariamente erróneas, ni inservibles (Pozo, Rey, Sanz and Limón 1992). Es lo que ocurre con parte de las creencias que muestra este grupo de investigadores: ¿acaso los científicos, en especial los mejores, no son apasionados de su trabajo?, ¿no dedican muchas horas y mucho esfuerzo a sacarlo adelante?, ¿no son asertivos? La investigación ha establecido firmemente (Romo 2008) que todos los científicos creativos muestran sin excepción altas dosis de motivación intrínseca por la labor que realizan.

Del mismo modo, desde Hayes (1981) sabemos que se precisa una media de 10 años para dominar una disciplina y lograr algo extraordinario. También se ha observado que muestran una gran autoconfianza y que son especialmente arrogantes y hostiles. Es más, se sabe que los científicos, en comparación con los no científicos, no solo son más asertivos, sino más dominantes, más seguros de sí mismos, más autónomos e incluso más hostiles (Feist 2006). Varios trabajos, por ejemplo, el de Feist y Gorman (1998), han concluido que existen diferencias en los rasgos de personalidad entre científicos y no científicos.

Pero, por otro lado, esto tampoco significa que las concepciones implícitas de la gente sean necesariamente acertadas. De hecho, muchas veces son erróneas. Un buen ejemplo de ello es la creencia que guía a los investigadores principales estudiados por Wylie para seleccionar a los graduados en relación con sus calificaciones académicas. Aunque dicen que las notas son un indicador insuficiente, a continuación matizan su afirmación: “They believe students’ demonstrated willingness to learn is more important, though they also want students who are ‘bright’ and achieve some ‘academic success.’” (2018, 4).

Sin embargo, la evidencia empírica muestra que ni las calificaciones altas ni las puntuaciones altas en pruebas de aptitud predicen necesariamente el éxito en carreras científicas (Feist 2006), que el genio creativo tampoco está necesariamente asociado con el rendimiento escolar extraordinario y, lo que es más, que numerosos genios han sido estudiantes mediocres (Simonton 2006).

Conclusión

La Psicología de la Ciencia va acumulando datos para orientar en la selección de posibles buenos investigadores a los científicos interesados: véanse, por ejemplo, Feist (2006) o Anaya-Reig (2018). Pero, ciertamente, a nivel práctico, estos conocimientos serán poco útiles si aquellos que más partido pueden sacarles siguen anclados a creencias que pueden ser erróneas.

Por tanto, resulta de interés seguir explorando las teorías implícitas de los investigadores en sus diferentes disciplinas. Su explicitación es imprescindible como paso inicial, tanto para la Psicología de la Ciencia si pretende que ese conocimiento cierto acumulado tenga repercusiones reales en los laboratorios y otros centros de investigación, como para aquellos científicos que deseen adquirir un conocimiento riguroso sobre las cualidades propias del buen investigador.

Todo ello teniendo muy presente que la naturaleza implícita de las creencias personales dificulta el proceso, porque, como se ha señalado, supone que el sujeto entrevistado desconoce a menudo que las posee (Pozo, Rey, Sanz and Limón 1992), y que su modificación requiere, además, un cambio de naturaleza conceptual o representacional (Pozo, Scheuer, Mateos Sanz and Pérez Echeverría 2006).

Por último, tal vez no sea razonable promover entre todos los universitarios de manera general ciertas habilidades, sin tener en consideración si reúnen determinados atributos. Por obvio que sea, hay que recordar que los recursos educativos, como los de cualquier tipo, son necesariamente limitados. Si, además, sabemos que solo un 2% de las personas se dedican a la ciencia (Feist 2006), quizás valga más la pena poner el esfuerzo en mejorar la capacidad de identificar con tino a aquellos que potencialmente son válidos. Otra cosa sería como tratar de entrenar para cantar ópera a una persona que no tiene cualidades vocales en absoluto.

Contact details: nuria.anaya@urjc.es

References

Anaya-Reig, N. 2018. “Cajal: Key Psychological Factors in the Self-Construction of a Genius.” Social Epistemology. doi: 10.1080/02691728.2018.1522555.

Anaya-Reig, N., and M. Romo. 2017. “Cajal, Psychologist of Science.” The Spanish Journal of Psychology 20: e69. doi: 10.1017/sjp.2017.71.

Feist, G. J. 2006. The Psychology of Science and the Origins of the Scientific Mind. New Haven, CT: Yale University Press.

Feist, G. J., and M. E. Gorman. 1998. “The Psychology of Science: Review and Integration of a Nascent Discipline.” Review of General Psychology 2 (1): 3–47. doi: 10.1037/1089-2680.2.1.3.

Hayes, J. R. 1981. The Complete Problem Solver. Philadelphia, PA: Franklin Institute Press.

Moya, M. 1996. “Percepción social y personas.” In Psicología social, 93-119. Madrid, Spain: McGraw-Hill.

Pozo Municio, J. I. 1996. Aprendices y maestros. La nueva cultura del aprendizaje. Madrid, Spain: Alianza.

Pozo, J. I., M. P. Rey, A. Sanz, and M. Limón. 1992. “Las ideas de los alumnos sobre la ciencia como teorías implícitas.” Infancia y Aprendizaje 57: 3-22.

Pozo, J. I., N. Scheuer, M. M. Mateos Sanz, and M. P. Pérez Echeverría. 2006. “Las teorías implícitas sobre el aprendizaje y la enseñanza.” In Nuevas formas de pensar la enseñanza y el aprendizaje: las concepciones de profesores y alumnos, 95-134. Barcelona, Spain: Graó.

Ramón y Cajal, S. 1920. Reglas y consejos sobre investigación científica. (Los tónicos de la voluntad). 5th ed. Madrid, Spain: Nicolás Moya.

Ramón y Cajal, S. 1999. Advice for a Young Investigator, translated by N. Swanson and L. W. Swanson. Cambridge, MA: The MIT Press.

Romo, M. 1997. Psicología de la creatividad. Barcelona, Spain: Paidós.

Romo, M. 2008. Epistemología y Psicología. Madrid, Spain: Pirámide.

Runco, M. 2011. “Implicit Theories.” In Encyclopedia of Creativity, edited by M. Runco and S. R. Pritzker, 644-646. 2nd ed. Elsevier.

Simonton, D. K. 2006. “Creative Genius, Knowledge, and Reason: The Lives and Works of Eminent Creators.” In Creativity and Reason in Cognitive Development, edited by J. C. Kaufman and J. Baer, 43-59. New York, NY: Cambridge University Press.

Wegner, D. M., and R. R. Vallacher. 1977. Implicit Psychology: An Introduction to Social Cognition. New York, NY: Oxford University Press.

Wylie, C. D. 2018. “‘I Just Love Research’: Beliefs About What Makes Researchers Successful.” Social Epistemology 32 (4): 262-271. doi: 10.1080/02691728.2018.1458349.

Author Information: Nuria Anaya-Reig, Rey Juan Carlos University, nuria.anaya@urjc.es.

Anaya-Reig, Nuria. “Implicit Theories Influencing Researchers: A Field for the Psychology of Science to Explore.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 25-30.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42K

From the 2014 White House Science Fair.
Image by NASA HQ Photo via Flickr / Creative Commons

 

This essay is in reply to:

Wylie, C. D. 2018. “‘I Just Love Research’: Beliefs About What Makes Researchers Successful.” Social Epistemology 32 (4): 262-271. doi: 10.1080/02691728.2018.1458349.

What traits in a student do researchers believe characterize a good future scientist? To what degree do these beliefs influence the selection of candidates? These are the fundamental questions that resonate in the work of Caitlin Donahue Wylie (2018). In a qualitative ethnographic study, Wylie interviewed two engineering professors in their capacity as principal investigators (PIs), along with students from their respective doctoral groups, most of them graduate students, as novice researchers. The total sample consisted of 27 people.

Results indicate that, among this class of researchers, interest, assertiveness, and enthusiasm for one’s own field of study are commonly regarded as key signs of a good future researcher. Moreover, the interviewees believe enthusiasm to be related to a desire to learn and a strong work ethic. Lastly, the study suggests that unintentional exclusion may occur during candidate selection owing to involuntary biases on the part of the PIs, reflecting a preference for features of the majority groups to which they belong (such as ethnicity, religion and gender), and it proposes some ideas to help minimize such biases.

Implicit Theories Undergirding Research

Essentially, the work of Wylie (2018) demonstrates that experienced scientists base their selection process for new researchers on implicit theories. While this may at first appear to be a rather modest contribution, the core of Wylie’s research is substantial and of great relevance to the psychology of science for at least three reasons.

First, studying such matters offers a different angle from which to investigate and attempt to understand the scientific psyche: studying the psychology of scientists is one of the central areas of research in this subdiscipline (Feist 2006). Second, although the research question addresses a well-known issue in social psychology and the results of the study are thus quite predictable, those results nevertheless constitute new data and are therefore valuable in their own right. Indeed, they enrich theoretical knowledge about implicit ideas: differentiating theories from evidence is basic to science and characteristic of scientific reasoning (Feist 2006).

Finally, the psychology of science, in its applied dimension, cannot turn a blind eye to the fact that if scientists’ implicit beliefs are mistaken, those beliefs may have negative repercussions for the population of current and future researchers (Wylie 2018).

In his role as psychologist of science (Anaya-Reig and Romo 2017), Ramón y Cajal mused upon this issue over a century ago. In “The Investigator as Teacher,” chapter IX of his work Reglas y consejos sobre investigación científica (1920), he noted:

¿Qué signos denuncian el talento creador y la vocación inquebrantable por la indagación científica?

[What signs identify creative talent and an irrevocable calling for scientific research?]

Problema grave, capitalísimo, sobre el cual han discurrido altos pensadores e insignes pedagogos, sin llegar a normas definitivas. La dificultad sube de punto considerando que no basta encontrar entendimientos perspicaces y aptos para las pesquisas de laboratorio sino conquistarlos definitivamente para el culto de la verdad original.

[This serious and fundamentally important question has been discussed at length by deep thinkers and noted teachers, without coming to any real conclusions. The problem is even more difficult when taking into account the fact that it is not enough to find capable and clear-sighted minds for laboratory research; they must also be genuine converts to the worship of original data.]

Los futuros sabios, blanco de nuestros desvelos educadores, ¿se encuentran por ventura entre los discípulos más serios y aplicados, acaparadores de premios y triunfadores en oposiciones?

[Are future scientists—the goal of our educational vigilance—found by chance among the most serious students who work diligently, those who win prizes and competitions?]

Algunas veces, sí, pero no siempre. Si la regla fuera infalible, fácil resultara la tarea del profesor, bastaríale dirigirse a los premios extraordinarios de la licenciatura y a los números primeros de las oposiciones a cátedras. Mas la realidad se complace a menudo en burlar previsiones y malograr esperanzas. (Ramón y Cajal 1920, 221-222)

[Sometimes, but not always. If the rule were infallible, the teacher’s work would be easy. He could simply focus his efforts on the outstanding prizewinners among the degree candidates, and on those at the top of the list in professional competitions. But reality often takes pleasure in laughing at predictions and in blasting hopes. (Ramón y Cajal 1999, 141)]

Returning to Implicit Theories

Let us briefly recall that naïve or implicit theories are stable and organized beliefs that people have formed intuitively, without the rigor of the scientific method; their content can be accessed only with great difficulty, given that people are unaware that they have them, hence their name. This not only makes such theories difficult to modify but also leads those who hold them to search for facts that confirm what they already believe or, in other words, to fall prey to confirmation bias (Romo 1997).

People identify and organize regularities in their environment thanks to implicit or incidental learning, which is based on associative learning, because we need to adapt to the varying situations we face. We formulate naïve theories that help us comprehend, anticipate and deal with the disparate circumstances confronting us in the best way possible. Indeed, we are surrounded by such an overwhelming amount of information that formulating implicit theories, learning which things tend to appear together, is a very effective way of making the world more predictable and controllable.

Naturally, human behavior is no exception to this rule. In fact, the content of implicit theories is fundamentally of a social nature (Wegner and Vallacher 1977), as is revealed by the fact that a good portion of such theories take the form of so-called Implicit Personality Theories (IPT), a category to which the beliefs of the researchers under consideration here also belong.

IPTs get their name because their content basically concerns personal qualities or personality traits. They are, by definition, idiosyncratic, although there is usually some overlap among members of the same social group.

Understood broadly, IPTs can be defined as those beliefs that everyone has about human beings in general; for example, that man is by nature good, or just the opposite. Defined more precisely, IPTs refer to those beliefs that we have about the personal characteristics of specific types of people. For example, we frequently assume that a writer need be a cultured, sensitive and bohemian sort of person (Moya 1996).

It should be noted that implicit theories, in contrast to scientific ones, are also characterized by their specificity and incoherence, by resting on a simple, linear notion of causality, by consisting of ideas that are usually only loosely interconnected, and by seeking only verification and utility. Still, this does not mean that such theories are necessarily mistaken or useless (Pozo, Rey, Sanz and Limón 1992). Although implicit theories have limited explanatory power, they do have descriptive and predictive capacities (Pozo Municio 1996).

Some Reflections on the Subject

Scientists being led by their intuitions…what is going on? Then again, what is wrong with that? Why should researchers behave any differently from other people when engaged in selection processes? They behave as we all habitually do in our daily lives when it comes to all sorts of things. Any other way of proceeding would be not only unprofitable in cognitive terms but also costly and exhausting.

All things considered, researchers, however scientific they may be, are still people and as such intuitively seek answers to problems which, while decisively conditioning the results of their work, are not in themselves the object of that work.

Moreover, we should not be surprised either when different researchers, whether novice or seasoned, share identical beliefs, especially if they work within the same field, since, as noted above, although implicit theories reveal themselves in opinions or personal expectations, part of their tacit content is shared by many people (Runco 2011).

The above leads one, in turn, to make further observations about the work of Wylie (2018). In the first place, since implicit theories are at work, rather than merely suggesting that researchers’ selections may be guided by a perceptual bias, we should affirm that they are. As has been noted, implicit theories operate with confirmation biases which, in fact, progressively reinforce their content.

Another matter is what this bias relates to: Wylie (2018) suggests a possible preference for features characteristic of the majority groups to which the PIs belong, a conclusion based on several studies showing that white, middle-class men predominate in the fields of science and engineering, which may mean that students who do not fit those standards are received poorly, or even that they give up because of the discomfort they feel in such environments.

This is certainly one possible interpretation; another is that the confirmation bias exhibited by these researchers might arise because they have observed such traits in people who have achieved excellence in their field, rather than from a preference for interacting with people who resemble them physically or culturally.

It is worth noting here once again that implicit theories need not be mistaken or useless (Pozo, Rey, Sanz and Limón 1992). Indeed, this is true of some of the beliefs held by this group of researchers. Aren’t scientists, especially the best among them, passionate about their work? Do they not dedicate many hours to it and put a great deal of effort into carrying it out? Are they not assertive? Research has conclusively shown (Romo 2008) that all creative scientists, without exception, exhibit high levels of intrinsic motivation when it comes to the work that they do.

Similarly, since Hayes (1981) we have known that it takes an average of ten years to master a discipline and achieve something notable within it. It has also been observed that researchers exhibit high levels of self-confidence and tend to be arrogant and aggressive. Indeed, it is known that scientists, as compared to non-scientists, are not only more assertive but also more domineering, more self-assured, more self-reliant and even more hostile (Feist 2006). Several studies, like that of Feist and Gorman (1998) for example, have concluded that there are differences in personality traits between scientists and non-scientists.

On the other hand, this does not mean that people’s implicit ideas are necessarily correct. In fact, they are often mistaken. A good example of this is one belief that guided those researchers studied by Wylie as they selected graduates according to their academic credentials. Although they claimed that grades were an insufficient indicator, they then went on to qualify that claim: “They believe students’ demonstrated willingness to learn is more important, though they also want students who are ‘bright’ and achieve some ‘academic success.’” (2018, 4).

However, the empirical evidence shows that neither high grades nor high scores on aptitude tests are reliable predictors of a successful scientific career (Feist 2006). The evidence also suggests that creative genius is not necessarily associated with academic performance. Indeed, many geniuses were mediocre students (Simonton 2006).

Conclusion

The psychology of science continues to amass data to help orient the selection of potentially good researchers for those scientists interested in recruiting them: see, for example, Feist (2006) or Anaya-Reig (2018). At the practical level, however, this knowledge will be of little use if those who are best able to benefit from it continue to cling to beliefs that may be mistaken.

Therefore, it is of great interest to keep exploring the implicit theories held by researchers in different disciplines. Making them explicit is an essential first step, both for the psychology of science, if that discipline’s body of knowledge is to have practical repercussions in laboratories as well as other research centers, and for those scientists who wish to acquire rigorous knowledge about the qualities that make a good researcher, all while keeping in mind that the implicit nature of personal beliefs makes such a process difficult.

As noted above, interviewed subjects are often unaware that they hold such beliefs (Pozo, Rey, Sanz and Limón 1992). Moreover, modifying them requires a change of a conceptual or representational nature (Pozo, Scheuer, Mateos Sanz and Pérez Echeverría 2006).

Lastly, it may perhaps be unreasonable to promote certain skills among university students in general without considering the aptitudes necessary for acquiring them. Although it may be obvious, it should be remembered that educational resources, like those of all types, are necessarily limited. Since we know that only 2% of the population devotes itself to science (Feist 2006), it may very well be more worthwhile to work on improving our ability to target those students who have potential. Anything else would be like trying to train a person who has no vocal talent whatsoever to sing opera.

Contact details: nuria.anaya@urjc.es

References

Anaya-Reig, N. 2018. “Cajal: Key Psychological Factors in the Self-Construction of a Genius.” Social Epistemology. doi: 10.1080/02691728.2018.1522555.

Anaya-Reig, N., and M. Romo. 2017. “Cajal, Psychologist of Science.” The Spanish Journal of Psychology 20: e69. doi: 10.1017/sjp.2017.71.

Feist, G. J. 2006. The Psychology of Science and the Origins of the Scientific Mind. New Haven, CT: Yale University Press.

Feist, G. J., and M. E. Gorman. 1998. “The Psychology of Science: Review and Integration of a Nascent Discipline.” Review of General Psychology 2 (1): 3–47. doi: 10.1037/1089-2680.2.1.3.

Hayes, J. R. 1981. The Complete Problem Solver. Philadelphia, PA: Franklin Institute Press.

Moya, M. 1996. “Percepción social y personas.” In Psicología social, 93-119. Madrid, Spain: McGraw-Hill.

Pozo Municio, J. I. 1996. Aprendices y maestros. La nueva cultura del aprendizaje. Madrid, Spain: Alianza.

Pozo, J. I., M. P. Rey, A. Sanz, and M. Limón. 1992. “Las ideas de los alumnos sobre la ciencia como teorías implícitas.” Infancia y Aprendizaje 57: 3-22.

Pozo, J. I., N. Scheuer, M. M. Mateos Sanz, and M. P. Pérez Echeverría. 2006. “Las teorías implícitas sobre el aprendizaje y la enseñanza.” In Nuevas formas de pensar la enseñanza y el aprendizaje: las concepciones de profesores y alumnos, 95-134. Barcelona, Spain: Graó.

Ramón y Cajal, S. 1920. Reglas y consejos sobre investigación científica. (Los tónicos de la voluntad). 5th ed. Madrid, Spain: Nicolás Moya.

Ramón y Cajal, S. 1999. Advice for a Young Investigator, translated by N. Swanson and L. W. Swanson. Cambridge, MA: The MIT Press.

Romo, M. 1997. Psicología de la creatividad. Barcelona, Spain: Paidós.

Romo, M. 2008. Epistemología y Psicología. Madrid, Spain: Pirámide.

Runco, M. 2011. “Implicit Theories.” In Encyclopedia of Creativity, edited by M. Runco and S. R. Pritzker, 644-646. 2nd ed. Elsevier.

Simonton, D. K. 2006. “Creative Genius, Knowledge, and Reason: The Lives and Works of Eminent Creators.” In Creativity and Reason in Cognitive Development, edited by J. C. Kaufman and J. Baer, 43-59. New York, NY: Cambridge University Press.

Wegner, D. M., and R. R. Vallacher. 1977. Implicit Psychology: An Introduction to Social Cognition. New York, NY: Oxford University Press.

Wylie, C. D. 2018. “‘I Just Love Research’: Beliefs About What Makes Researchers Successful.” Social Epistemology 32 (4): 262-271. doi: 10.1080/02691728.2018.1458349.

Author Information: Stephen John, Cambridge University, sdj22@cam.ac.uk

John, Stephen. “Transparency, Well-Ordered Science, and Paternalism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 30-33.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Zf


Image by Sergio Santos and http://nursingschoolsnearme.com, via Flickr / Creative Commons

 

Should a physician tell you that you have cancer, even if she thinks this would cause you needless distress? Of course she should! How, though, should she convey that news? Imagine three stylised options. Dr Knowsbest is certain you should have your cancer operated on, so tells you the news in a way which vividly highlights the horrors of cancer, but downplays the risk of an operation.

Dr Neutral, by contrast, simply lists all of the facts about your cancer, your prognosis, your possible treatment options, their likely benefits and risks and so on. Finally, Dr Sensitive reports only those aspects of your condition and those risks of surgery which she judges that you, given your values and interests, would want to know about.

Many Methods to Reveal

We can, I hope, all agree that Dr Knowsbest’s communicative strategies and choices are ethically problematic, because she acts in a paternalistic manner. By contrast, Dr Neutral does not act paternalistically. In this regard, at least, Dr Neutral’s strategies are ethically preferable to Dr Knowsbest’s strategies. What about the choice between Knowsbest and Sensitive? In one sense, Dr Sensitive acts paternalistically, because she controls and structures the flow of information with the aim of improving your well-being.

However, there is an important difference between Dr Sensitive and Dr Knowsbest; the former aims solely to improve your epistemic well-being, such that you can better make a choice which aligns with your own values, whereas the latter aims to influence or override your judgment. Knowsbest’s “moral paternalism” is wrong for reasons which are absent in the case of Sensitive’s “epistemic paternalism” (Ahlstrom-Vij, 2013).

Therefore, plausibly, both the Neutral and Sensitive strategies are ethically preferable to Knowsbest’s. What, though, of the choice between these two communicative strategies? First, I am not certain that it is even possible to report all the facts in a neutral way (for more, see below). Second, even if it is possible, Dr Sensitive’s strategy seems preferable; her strategy, if successful, positively promotes – as opposed to merely failing to interfere with – your ability to make autonomous choices.

At least at an abstract, ideal level, then, we have good reason to want informants who do more than merely list facts, but who are sensitive to their audiences’ epistemic situation and abilities and their evaluative commitments; we want experts who “well-lead” us. In my recent paper in Social Epistemology, I argued that certain widely-endorsed norms for science communication are, at best, irrelevant, and, at worst, dangerous (John 2018). We should be against transparency, openness, sincerity and honesty.

It’s a Bit Provocative

One way of understanding that paper is as following from the abstract ideal of sensitive communication, combined with various broadly sociological facts (for example, about how audiences identify experts). I understand why my article put Moore in mind of a paradigm case of paternalism. However, reflection on the hypothetical example suggests we should also be against “anti-paternalism” as a norm for science communication; not because Knowsbest’s strategy is fine, but, rather, because the term “paternalism” tends to bundle together a wide range of practices, not all of which are ethically problematic, and some of which promote – rather than hinder – audiences’ autonomy.

Beyond the accusation of paternalism, Moore’s rich and provocative response focuses on my scepticism about transparency. While I argued that a “folk philosophy of science” can lead audiences to distrust experts who are, in fact, trustworthy, he uses the example of HIV-AIDS activism to point to the epistemic benefits of holding scientists to account, suggesting that “it is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science”. I agree entirely that such a dynamic is possible; indeed, his example shows it does happen!

However, conceding this possibility does not show that we must endorse a norm of transparency, because, ultimately, the costs may still be greater than the benefits. Much here depends on the mechanisms by which transparency and engagement are enacted. Moore suggests one model for such engagement, via the work of “trust proxies”, such as ACT-UP. As he acknowledges, however, although proxies may be better placed than lay people to identify when science is flawed, we now create a new problem for the non-expert: to adapt a distinction from Goldman’s work, we must decide which “putative proxies” are “true proxies” (Goldman, 2001).

Plausibly, this problem is even harder than Goldman’s problem of distinguishing the “true experts” among the “putative experts”; because in the latter case, we have some sense of the credentials and so on which signal experthood. Again, I am tempted to say, then, that it is unclear that transparency, openness or engagement will necessarily lead to better, rather than worse, socio-epistemic outcomes.

Knowledge From Observation and Practice

Does that mean my arguments against transparency are in the clear? No. First, many of the issues here turn on the empirical details; maybe careful institutional design can allow us to identify trustworthy trust-proxies, whose work promotes good science. Second, and more importantly, the abstract model of sensitive communication is an ideal. In practice, it is easy to fail to meet this ideal, in ways which undermine, rather than respect or promote, hearers’ autonomy.

For example, rather than tailor her communication to what her audiences do care about, Dr Sensitive might tailor what she says to what she thinks they ought to care about; as a result, she might leave out information which is relevant to their choices given their values, while including information which is irrelevant. An influential strain in recent philosophy of science suggests that non-epistemic value judgments do and must run deep in practices of justification; as such, even a bald report of what a study showed may, implicitly, encode or endorse value judgments which are not shared by the audience (Douglas, 2000).

Reporting claims when, and only when, they meet a certain confidence level may, for example, implicitly rely on assumptions about the relative disvalue of false positives and false negatives; in turn, it may be difficult to justify such assumptions without appeal to non-epistemic values (John, 2015). As such, even Dr Neutral may be unable to communicate in ways which are truly neutral with respect to her audience’s values. In short, it may be hard to hand over our epistemic autonomy to experts without also handing over our moral autonomy.

This problem means that trustworthy research requires more than true claims; researchers’ claims must also be, at least, neutral with respect to audiences’ values and, at best, aligned with them. Plausibly, greater engagement and transparency may help ensure such value alignment. One might understand the example of ACT-UP along these lines: activist engagement ensured that scientists did “good science” not only in a narrow, epistemic sense of “good” – more, or more accurate, data and hypotheses were generated – but in a broader sense of being “well-ordered”, producing knowledge that better reflected the concerns and interests of the broader community (Kitcher, 2003).

Whether engagement improves epistemic outcomes narrowly construed is a contingent matter, heavily dependent on the details of the case. By contrast, engagement may be necessary for science to be “well-ordered”. In turn, transparency may be necessary for such engagement. At least, that is the possibility I would push were I to criticise my own conclusions in line with Moore’s concerns.

A Final Sting

Unfortunately, there is a sting in the tail. Developing effective frameworks for engagement and contestation may require us to accept that scientific research is not, and cannot be, fully “value free”. To the extent that such an assumption is a commitment of our “folk philosophy of science”, developing the kind of rigorous engagement which Moore wants may do as much to undermine as to promote our trust in true experts. Moore is surely right that the dynamics of trust and distrust are even more complex than my paper suggested; unfortunately, they might be even more complex again than he suggests.

Contact details: sdj22@cam.ac.uk

References

Ahlstrom-Vij, K. (2013). Epistemic Paternalism: A Defence. Springer.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of science, 67(4), 559-579.

Goldman, A. (2001). “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63(1), 85–110.

John, S. (2015). Inductive risk and the contexts of communication. Synthese, 192(1), 79-96.

John, S. (2018). Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75-87.

Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8


A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

 

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications and, worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds) involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than monitoring by results alone, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.
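Prat’s point can be made concrete with a small simulation. The sketch below is a deliberately toy illustration, not Prat’s formal model: the two accuracy parameters are invented assumptions, and the functions simply encode the idea that an outcome-monitored expert acts on her own informative signal while a process-monitored expert takes whatever action the principal expects to see.

```python
import random

# Toy sketch (not Prat's model): compare an expert judged on outcomes,
# who acts on her private signal, with one judged on process, who takes
# the action the principal regards as appropriate.

random.seed(42)
TRIALS = 100_000
SIGNAL_ACCURACY = 0.8        # assumption: expert's signal picks the best action 80% of the time
CONVENTIONAL_ACCURACY = 0.5  # assumption: the "appropriate-looking" action is best half the time

def outcome_monitored() -> bool:
    # Judged on results alone, the expert follows her signal,
    # so she succeeds exactly when the signal is right.
    return random.random() < SIGNAL_ACCURACY

def process_monitored() -> bool:
    # Judged on process, the expert conforms to expectations,
    # so her informative signal never enters the decision.
    return random.random() < CONVENTIONAL_ACCURACY

print("outcome-monitored success rate:",
      sum(outcome_monitored() for _ in range(TRIALS)) / TRIALS)  # ~0.80
print("process-monitored success rate:",
      sum(process_monitored() for _ in range(TRIALS)) / TRIALS)  # ~0.50
```

On these illustrative assumptions, the process-monitored expert does markedly worse: her judgment has been displaced by the principal’s, which is exactly the mechanism Prat’s argument describes.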

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather “denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency” (Epstein 1996: 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

 

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were not doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort, with the skills and resources to do this sort of monitoring, developing competence but with interests more closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to more deeply engage with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review 111, no. 2 (2017): 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014): 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961): 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).