Archives For psychology of science

Author Information: Nuria Anaya-Reig, Universidad Rey Juan Carlos, nuria.anaya@urjc.es

Anaya-Reig, Nuria. “Teorías Implícitas del Investigador: Un Campo por Explorar Desde la Psicología de la Ciencia.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 36-41.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-434


This article is a Spanish-language version of Nuria Anaya-Reig’s earlier contribution, written by the author herself:

Anaya-Reig, Nuria. “Implicit Theories Influencing Researchers: A Field for the Psychology of Science to Explore.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 25-30.

¿Qué concepciones tienen los investigadores sobre las características que debe reunir un estudiante para ser considerado un potencial buen científico? ¿En qué medida influyen esas creencias en la selección de candidatos? Estas son las preguntas fundamentales que laten en el trabajo de Caitlin Donahue Wylie (2018). Mediante un estudio cualitativo de tipo etnográfico, se entrevista a dos profesores de ingeniería en calidad de investigadores principales (IP) y a estudiantes de sendos grupos de doctorado, la mayoría graduados, como investigadores noveles. En total, la muestra es de 27 personas.

Los resultados apuntan a que, entre este tipo de investigadores, es común creer que el interés, la asertividad y el entusiasmo por lo que se estudia son indicadores de un futuro buen investigador. Además, los entrevistados consideran que el entusiasmo está relacionado con el deseo de aprender y la ética en el trabajo. Finalmente, se sugiere una posible exclusión no intencional en la selección de investigadores a causa de la aplicación involuntaria de sesgos por parte del IP, relativa a la preferencia de características propias de grupos mayoritarios (tales como etnia, religión o sexo), y se proponen algunas ideas para ayudar a minimizarlos.

Teorías Implícitas en los Sótanos de la Investigación

En esencia, el trabajo de Wylie (2018) muestra que el proceso de selección de nuevos investigadores por parte de científicos experimentados se basa en teorías implícitas. Quizás a simple vista puede parecer una aportación modesta, pero la médula del trabajo es sustanciosa y no carece de interés para la Psicología de la Ciencia, al menos por tres razones.

Para empezar, porque estudiar tales cuestiones constituye otra forma de aproximarse a la comprensión de la psique científica desde un ángulo distinto, ya que estudiar la psicología del científico es uno de los ámbitos de estudio centrales de esta subdisciplina (Feist 2006). En segundo término, porque, aunque la pregunta de investigación se ocupa de una cuestión bien conocida por la Psicología social y, en consecuencia, aunque los resultados del estudio sean bastante previsibles, no dejan de ser nuevos datos y, por tanto, valiosos, que enriquecen el conocimiento teórico sobre las ideas implícitas: es básico en ciencia, y propio del razonamiento científico, diferenciar teorías de pruebas (Feist 2006).

En último lugar, porque la Psicología de la Ciencia, en su vertiente aplicada, no puede ignorar el hecho de que las creencias implícitas de los científicos, si son erróneas, pueden tener su consiguiente reflejo negativo en la población de investigadores actual y futura (Wylie 2018).

Ya Santiago Ramón y Cajal, en su faceta como psicólogo de la ciencia (Anaya-Reig and Romo 2017), reflexionaba sobre este asunto hace más de un siglo. En el capítulo IX, “El investigador como maestro”, de su obra Reglas y consejos sobre investigación científica (1920) apuntaba:

¿Qué signos denuncian el talento creador y la vocación inquebrantable por la indagación científica?

Problema grave, capitalísimo, sobre el cual han discurrido altos pensadores e insignes pedagogos, sin llegar a normas definitivas. La dificultad sube de punto considerando que no basta encontrar entendimientos perspicaces y aptos para las pesquisas de laboratorio sino conquistarlos definitivamente para el culto de la verdad original.

Los futuros sabios, blanco de nuestros desvelos educadores, ¿se encuentran por ventura entre los discípulos más serios y aplicados, acaparadores de premios y triunfadores en oposiciones?

Algunas veces, sí, pero no siempre. Si la regla fuera infalible, fácil resultara la tarea del profesor, bastaríale dirigirse a los premios extraordinarios de la licenciatura y a los números primeros de las oposiciones a cátedras. Mas la realidad se complace a menudo en burlar previsiones y malograr esperanzas. (Ramón y Cajal 1920, 221-222)

A Vueltas con las Teorías Implícitas

Recordemos brevemente que las teorías ingenuas o implícitas son creencias estables y organizadas que las personas hemos elaborado intuitivamente, sin el rigor del método científico. La mayoría de las veces se accede a su contenido con mucha dificultad, ya que la gente desconoce que las tiene, de ahí su nombre. Este hecho no solo dificulta una modificación del pensamiento, sino que lleva a buscar datos que confirmen lo que se piensa, es decir, a cometer sesgos confirmatorios (Romo 1997).

Las personas vamos identificando y organizando las regularidades del entorno gracias al aprendizaje implícito o incidental, basado en el aprendizaje asociativo, pues necesitamos adaptarnos a las distintas situaciones a las que nos enfrentamos. Elaboramos teorías ingenuas que nos ayuden a comprender, anticipar y manejar de la mejor manera posible las variadas circunstancias que nos rodean. Vivimos rodeados de una cantidad de información tan abrumadora, que elaborar teorías implícitas, aprendiendo qué elementos tienden a presentarse juntos, constituye una forma muy eficaz de hacer el mundo mucho más predecible y controlable, lo que, naturalmente, incluye el comportamiento humano.

De hecho, el contenido de las teorías implícitas es fundamentalmente de naturaleza social (Wegner and Vallacher 1977), como muestra el hecho de que buena parte de ellas pueden agruparse dentro de las llamadas Teorías Implícitas de la Personalidad (TIP), categoría a la que, por cierto, bien pueden adscribirse las creencias de los investigadores que nos ocupan.

Las TIP se llaman así porque su contenido versa básicamente sobre cualidades personales o rasgos de personalidad y son, por definición, idiosincráticas, si bien suele existir cierta coincidencia entre los miembros de un mismo grupo social.

Entendidas de modo amplio, pueden definirse como aquellas creencias que cada persona tiene sobre el ser humano en general; por ejemplo, pensar que el hombre es bueno por naturaleza o todo lo contrario. En su acepción específica, las TIP se refieren a las creencias que tenemos sobre las características personales que suelen presentarse juntas en gente concreta. Por ejemplo, con frecuencia presuponemos que un escritor tiene que ser una persona culta, sensible y bohemia (Moya 1996).

Conviene notar también que las teorías implícitas se caracterizan frente a las científicas por ser incoherentes y específicas, por basarse en una causalidad lineal y simple, por componerse de ideas habitualmente poco interconectadas, por buscar solo la verificación y la utilidad. Sin embargo, no tienen por qué ser necesariamente erróneas ni inservibles (Pozo, Rey, Sanz and Limón 1992). Aunque las teorías implícitas tengan una capacidad explicativa limitada, sí tienen capacidad descriptiva y predictiva (Pozo Municio 1996).

Algunas Reflexiones Sobre el Tema

Científicos guiándose por intuiciones, ¿cómo es posible? Pero, ¿por qué no? ¿Por qué los investigadores habrían de comportarse de un modo distinto al de otras personas en los procesos de selección? Se comportan como lo hacemos todos habitualmente en nuestra vida cotidiana con respecto a los más variados asuntos. Otra manera de proceder resultaría para cualquiera no solo poco rentable, en términos cognitivos, sino costoso y agotador.

A fin de cuentas, los investigadores, por muy científicos que sean, no dejan de ser personas y, como tales, buscan intuitivamente respuestas a problemas que, si bien condicionan de modo determinante los resultados de su labor, no son el objeto en sí mismo de su trabajo.

Por otra parte, tampoco debe sorprender que diferentes investigadores, poco o muy experimentados, compartan idénticas creencias, especialmente si pertenecen al mismo ámbito, pues, según se ha apuntado, aunque las teorías implícitas se manifiestan en opiniones o expectativas personales, parte de su contenido tácito es compartido por numerosas personas (Runco 2011).

Todo esto lleva, a su vez, a hacer algunas otras observaciones sobre el trabajo de Wylie (2018). En primer lugar, tratándose de teorías implícitas, más que sugerir que los investigadores pueden estar guiando su selección por un sesgo perceptivo, habría que afirmarlo. Como se ha apuntado, las teorías implícitas operan con sesgos confirmatorios que, de hecho, van robusteciendo sus contenidos.

Otra cuestión es preguntarse con qué guarda relación dicho sesgo: Wylie (2018) sugiere que está relacionado con una posible preferencia por las características propias de los grupos mayoritarios a los que pertenecen los IP basándose en algunos estudios que han mostrado que en ciencia e ingeniería predominan hombres, de raza blanca y de clase media, lo que puede contribuir a recibir mal a aquellos estudiantes que no se ajusten a estos estándares o que incluso ellos mismos abandonen por no sentirse cómodos.

Sin duda, esa es una posible interpretación; pero otra es que el sesgo confirmatorio que muestran estos ingenieros podría deberse a que han observado esos rasgos en las personas que han llegado a ser buenas en su disciplina, en lugar de estar relacionado con su preferencia por interactuar con personas que se parecen física o culturalmente a ellos.

Es oportuno señalar aquí nuevamente que las teorías implícitas no tienen por qué ser necesariamente erróneas, ni inservibles (Pozo, Rey, Sanz and Limón 1992). Es lo que ocurre con parte de las creencias que muestra este grupo de investigadores: ¿acaso los científicos, en especial los mejores, no son apasionados de su trabajo?, ¿no dedican muchas horas y mucho esfuerzo a sacarlo adelante?, ¿no son asertivos? La investigación ha establecido firmemente (Romo 2008) que todos los científicos creativos muestran sin excepción altas dosis de motivación intrínseca por la labor que realizan.

Del mismo modo, desde Hayes (1981) sabemos que se precisa una media de 10 años para dominar una disciplina y lograr algo extraordinario. También se ha observado que muestran una gran autoconfianza y que son especialmente arrogantes y hostiles. Es más, se sabe que los científicos, en comparación con los no científicos, no solo son más asertivos, sino más dominantes, más seguros de sí mismos, más autónomos e incluso más hostiles (Feist 2006). Varios trabajos, por ejemplo, el de Feist y Gorman (1998), han concluido que existen diferencias en los rasgos de personalidad entre científicos y no científicos.

Pero, por otro lado, esto tampoco significa que las concepciones implícitas de la gente sean necesariamente acertadas. De hecho, muchas veces son erróneas. Un buen ejemplo de ello es la creencia que guía a los investigadores principales estudiados por Wylie para seleccionar a los graduados en relación con sus calificaciones académicas. Aunque dicen que las notas son un indicador insuficiente, a continuación matizan su afirmación: “They believe students’ demonstrated willingness to learn is more important, though they also want students who are ‘bright’ and achieve some ‘academic success.’” (2018, 4).

Sin embargo, la evidencia empírica muestra que ni las puntuaciones altas en grados ni en pruebas de aptitud predicen necesariamente el éxito en carreras científicas (Feist 2006) y que el genio creativo no está tampoco necesariamente asociado con el rendimiento escolar extraordinario y, lo que es más, numerosos genios han sido estudiantes mediocres (Simonton 2006).

Conclusión

La Psicología de la Ciencia va acumulando datos para orientar en la selección de posibles buenos investigadores a los científicos interesados: véanse, por ejemplo, Feist (2006) o Anaya-Reig (2018). Pero, ciertamente, a nivel práctico, estos conocimientos serán poco útiles si aquellos que más partido pueden sacarles siguen anclados a creencias que pueden ser erróneas.

Por tanto, resulta de interés seguir explorando las teorías implícitas de los investigadores en sus diferentes disciplinas. Su explicitación es imprescindible como paso inicial, tanto para la Psicología de la Ciencia si pretende que ese conocimiento cierto acumulado tenga repercusiones reales en los laboratorios y otros centros de investigación, como para aquellos científicos que deseen adquirir un conocimiento riguroso sobre las cualidades propias del buen investigador.

Todo ello teniendo muy presente que la naturaleza implícita de las creencias personales dificulta el proceso, porque, como se ha señalado, supone que el sujeto entrevistado desconoce a menudo que las posee (Pozo, Rey, Sanz and Limón 1992), y que su modificación requiere, además, un cambio de naturaleza conceptual o representacional (Pozo, Scheuer, Mateos Sanz and Pérez Echeverría 2006).

Por último, tal vez no sea razonable promover entre todos los universitarios de manera general ciertas habilidades, sin tener en consideración si reúnen determinados atributos. Por obvio que sea, hay que recordar que los recursos educativos, como los de cualquier tipo, son necesariamente limitados. Si, además, sabemos que solo un 2% de las personas se dedican a la ciencia (Feist 2006), quizás valga más la pena poner el esfuerzo en mejorar la capacidad de identificar con tino a aquellos que potencialmente son válidos. Otra cosa sería como tratar de entrenar para cantar ópera a una persona que no tiene cualidades vocales en absoluto.

Contact details: nuria.anaya@urjc.es

References

Anaya-Reig, N. 2018. “Cajal: Key Psychological Factors in the Self-Construction of a Genius.” Social Epistemology. doi: 10.1080/02691728.2018.1522555.

Anaya-Reig, N., and M. Romo. 2017. “Cajal, Psychologist of Science.” The Spanish Journal of Psychology 20: e69. doi: 10.1017/sjp.2017.71.

Feist, G. J. 2006. The Psychology of Science and the Origins of the Scientific Mind. New Haven, CT: Yale University Press.

Feist, G. J., and M. E. Gorman. 1998. “The Psychology of Science: Review and Integration of a Nascent Discipline.” Review of General Psychology 2 (1): 3–47. doi: 10.1037/1089-2680.2.1.3.

Hayes, J. R. 1981. The Complete Problem Solver. Philadelphia, PA: Franklin Institute Press.

Moya, M. 1996. “Percepción social y personas.” In Psicología social, 93-119. Madrid, Spain: McGraw-Hill.

Pozo Municio, J. I. 1996. Aprendices y maestros. La nueva cultura del aprendizaje. Madrid, Spain: Alianza.

Pozo, J. I., M. P. Rey, A. Sanz, and M. Limón. 1992. “Las ideas de los alumnos sobre la ciencia como teorías implícitas.” Infancia y Aprendizaje 57: 3-22.

Pozo, J. I., N. Scheuer, M. M. Mateos Sanz, and M. P. Pérez Echeverría. 2006. “Las teorías implícitas sobre el aprendizaje y la enseñanza.” In Nuevas formas de pensar la enseñanza y el aprendizaje: las concepciones de profesores y alumnos, 95-134. Barcelona, Spain: Graó.

Ramón y Cajal, S. 1920. Reglas y consejos sobre investigación científica. (Los tónicos de la voluntad). 5th ed. Madrid, Spain: Nicolás Moya.

Ramón y Cajal, S. 1999. Advice for a Young Investigator, translated by N. Swanson and L. W. Swanson. Cambridge, MA: The MIT Press.

Romo, M. 1997. Psicología de la creatividad. Barcelona, Spain: Paidós.

Romo, M. 2008. Epistemología y Psicología. Madrid, Spain: Pirámide.

Runco, M. 2011. “Implicit theories.” In Encyclopaedia of Creativity, edited by M. Runco and S. R. Pritzker, 644-646. 2nd ed. Elsevier.

Simonton, D. K. 2006. “Creative Genius, Knowledge, and Reason: The Lives and Works of Eminent Creators.” In Creativity and Reason in Cognitive Development, edited by J. C. Kaufman and J. Baer, 43-59. New York, NY: Cambridge University Press.

Wegner, D. M., and R. R. Vallacher. 1977. Implicit Psychology: An Introduction to Social Cognition. New York, NY: Oxford University Press.

Wylie, C. D. 2018. “‘I Just Love Research’: Beliefs About What Makes Researchers Successful.” Social Epistemology 32 (4): 262-271, doi: 10.1080/02691728.2018.1458349.

Author Information: Nuria Anaya-Reig, Rey Juan Carlos University, nuria.anaya@urjc.es.

Anaya-Reig, Nuria. “Implicit Theories Influencing Researchers: A Field for the Psychology of Science to Explore.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 25-30.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42K

From the 2014 White House Science Fair.

This essay is in reply to:

Wylie, C. D. 2018. “‘I Just Love Research’: Beliefs About What Makes Researchers Successful.” Social Epistemology 32 (4): 262-271, doi: 10.1080/02691728.2018.1458349.

What traits in a student do researchers believe characterize a good future scientist? To what degree do these beliefs influence the selection of candidates? These are the fundamental questions that resonate in the work of Caitlin Donahue Wylie (2018). As part of a qualitative ethnographic study, interviews were conducted with two engineering professors in their capacity as principal investigators (PIs), as well as with students from each of their doctoral groups, most of them graduate students working as novice researchers. The total sample consisted of 27 people.

Results indicate that, among this class of researchers, interest, assertiveness, and enthusiasm for one’s own field of study are commonly regarded as key signs of a good future researcher. Moreover, the interviewees believe enthusiasm to be related to a desire to learn and a strong work ethic. Lastly, the study suggests that unintentional exclusion may occur during candidate selection because of involuntary biases on the part of the PIs, reflecting a preference for features characteristic of majority groups (such as ethnicity, religion, and gender), and some ideas are proposed to help minimize such biases.

Implicit Theories Undergirding Research

Essentially, the work of Wylie (2018) demonstrates that experienced scientists base their selection process for new researchers on implicit theories. While this may at first appear to be a rather modest contribution, the core of Wylie’s research is substantial and of great relevance to the psychology of science for at least three reasons.

First, studying such matters offers a different angle from which to investigate and attempt to understand the scientific psyche: the psychology of the scientist is one of the central areas of research in this subdiscipline (Feist 2006). Second, although the research question addresses an issue well known to social psychology, and the results of the study are thus quite predictable, those results nevertheless constitute new data and are therefore valuable in their own right. Indeed, they enrich theoretical knowledge about implicit ideas given that, in science and scientific reasoning, it is essential to differentiate between theories and evidence (Feist 2006).

Finally, the psychology of science, in its applied dimension, cannot turn a blind eye to the fact that if scientists’ implicit beliefs are mistaken, those beliefs may have negative repercussions for the current and future population of researchers (Wylie 2018).

In his role as psychologist of science (Anaya-Reig and Romo 2017), Ramón y Cajal mused upon this issue over a century ago. In “The Investigator as Teacher,” chapter IX of his work Reglas y consejos sobre investigación científica (1920), he noted:

¿Qué signos denuncian el talento creador y la vocación inquebrantable por la indagación científica?

[What signs identify creative talent and an irrevocable calling for scientific research?]

Problema grave, capitalísimo, sobre el cual han discurrido altos pensadores e insignes pedagogos, sin llegar a normas definitivas. La dificultad sube de punto considerando que no basta encontrar entendimientos perspicaces y aptos para las pesquisas de laboratorio sino conquistarlos definitivamente para el culto de la verdad original.

[This serious and fundamentally important question has been discussed at length by deep thinkers and noted teachers, without coming to any real conclusions. The problem is even more difficult when taking into account the fact that it is not enough to find clear-sighted and capable minds for laboratory research; they must also be genuine converts to the worship of original data.]

Los futuros sabios, blanco de nuestros desvelos educadores, ¿se encuentran por ventura entre los discípulos más serios y aplicados, acaparadores de premios y triunfadores en oposiciones?

[Are future scientists—the goal of our educational vigilance—found by chance among the most serious students who work diligently, those who win prizes and competitions?]

Algunas veces, sí, pero no siempre. Si la regla fuera infalible, fácil resultara la tarea del profesor, bastaríale dirigirse a los premios extraordinarios de la licenciatura y a los números primeros de las oposiciones a cátedras. Mas la realidad se complace a menudo en burlar previsiones y malograr esperanzas. (Ramón y Cajal 1920, 221-222)

[Sometimes, but not always. If the rule were infallible, the teacher’s work would be easy. He could simply focus his efforts on the outstanding prizewinners among the degree candidates, and on those at the top of the list in professional competitions. But reality often takes pleasure in laughing at predictions and in blasting hopes. (Ramón y Cajal 1999, 141)]

Returning to Implicit Theories

Let us briefly recall that naïve or implicit theories are stable and organized beliefs that people have formed intuitively, without the rigor of the scientific method. Their content can usually be accessed only with great difficulty, given that people are unaware that they hold them, hence their name. This not only makes such beliefs difficult to modify but also leads those who hold them to search for facts that confirm what they already believe or, in other words, to fall prey to confirmation bias (Romo 1997).

People identify and organize regularities in their environment thanks to implicit or incidental learning, which is based on associative learning, because we need to adapt to the varying situations we face. We formulate naïve theories that help us comprehend, anticipate, and deal with the disparate circumstances confronting us in the best way possible. Indeed, we are surrounded by such an overwhelming amount of information that formulating implicit theories, by learning which things tend to appear together, is a very effective way of making the world more predictable and controllable.

Naturally, human behavior is no exception to this rule. In fact, the content of implicit theories is fundamentally of a social nature (Wegner and Vallacher 1977), as is revealed by the fact that a good portion of such theories take the form of so-called Implicit Personality Theories (IPT), a category to which the beliefs of the researchers under consideration here also belong.

IPTs are so called because their content basically concerns personal qualities or personality traits. They are, by definition, idiosyncratic, although there tends to be some degree of agreement among members of the same social group.

Understood broadly, IPTs can be defined as the beliefs each person holds about human beings in general; for example, that man is by nature good, or just the opposite. In their specific sense, IPTs refer to the beliefs we have about the personal characteristics that tend to appear together in particular kinds of people. For example, we frequently assume that a writer must be a cultured, sensitive, and bohemian sort of person (Moya 1996).

It should also be noted that implicit theories, in contrast to scientific ones, are incoherent and specific: they are based on simple, linear causality, are composed of ideas that are rarely interconnected, and seek only verification and utility. Still, this does not mean that such theories are necessarily mistaken or useless (Pozo, Rey, Sanz and Limón 1992). Although implicit theories have limited explanatory power, they do have descriptive and predictive capacities (Pozo Municio 1996).

Some Reflections on the Subject

Scientists being led by their intuitions…how can that be? Then again, why not? Why should researchers behave any differently from other people when engaged in selection processes? They behave as we all habitually do in our daily lives with respect to all sorts of matters. Any other way of proceeding would be not only unprofitable in cognitive terms but also costly and exhausting.

All things considered, researchers, however scientific they may be, are still people and, as such, intuitively seek answers to problems that, while decisively conditioning the results of their work, are not in themselves its object.

Moreover, we should not be surprised either when different researchers, whether novice or seasoned, share identical beliefs, especially if they work within the same field, since, as noted above, although implicit theories reveal themselves in opinions or personal expectations, part of their tacit content is shared by many people (Runco 2011).

The above leads one, in turn, to make some further observations about the work of Wylie (2018). In the first place, since implicit theories are at play, rather than merely suggesting that researchers may be guiding their selections by a perceptual bias, one should affirm that this is the case. As has been noted, implicit theories operate through confirmation biases which, in fact, progressively reinforce their content.

Another matter is what this bias is related to. Wylie (2018) suggests that it reflects a possible preference for features characteristic of the majority groups to which the PIs belong, a conclusion based on several studies showing that white, middle-class men predominate in science and engineering. This predominance may lead to a poor reception for students who do not fit those standards, and may even drive such students to give up because of the discomfort they feel in such environments.

This is certainly one possible interpretation; another is that the confirmation bias exhibited by these engineers might arise because they have observed such traits in people who have achieved excellence in their field, rather than from a preference for interacting with people who resemble them physically or culturally.

It is worth noting again here that implicit theories need not be mistaken or useless (Pozo, Rey, Sanz and Limón 1992). This is the case with some of the beliefs held by this group of researchers. Aren’t scientists, especially the best among them, passionate about their work? Do they not dedicate many hours to it and put a great deal of effort into carrying it out? Are they not assertive? Research has conclusively shown (Romo 2008) that all creative scientists, without exception, exhibit high levels of intrinsic motivation for the work that they do.

Similarly, since Hayes (1981) we have known that it takes an average of ten years to master a discipline and achieve something extraordinary within it. It has also been observed that scientists exhibit great self-confidence and are especially arrogant and hostile. Indeed, scientists, as compared to non-scientists, are not only more assertive but also more domineering, more self-assured, more autonomous, and even more hostile (Feist 2006). Several studies, such as that of Feist and Gorman (1998), have concluded that there are differences in personality traits between scientists and non-scientists.

On the other hand, this does not mean that people’s implicit ideas are necessarily correct. In fact, they are often mistaken. A good example of this is one belief that guided those researchers studied by Wylie as they selected graduates according to their academic credentials. Although they claimed that grades were an insufficient indicator, they then went on to qualify that claim: “They believe students’ demonstrated willingness to learn is more important, though they also want students who are ‘bright’ and achieve some ‘academic success.’” (2018, 4).

However, the empirical evidence shows that neither high grades nor high scores on aptitude tests are reliable predictors of a successful scientific career (Feist 2006). The evidence also suggests that creative genius is not necessarily associated with academic performance. Indeed, many geniuses were mediocre students (Simonton 2006).

Conclusion

The psychology of science continues to amass data to help orient the selection of potentially good researchers for those scientists interested in recruiting them: see, for example, Feist (2006) or Anaya-Reig (2018). At the practical level, however, this knowledge will be of little use if those best able to benefit from it continue to cling to beliefs that may be mistaken.

Therefore, it remains of great interest to keep exploring the implicit theories held by researchers in different disciplines. Making them explicit is an essential first step, both for the psychology of science, if that discipline’s accumulated knowledge is to have practical repercussions in laboratories and other research centers, and for those scientists who wish to acquire rigorous knowledge of the qualities that make a good researcher, all while keeping in mind that the implicit nature of personal beliefs makes the process difficult.

As noted above, interviewed subjects are often unaware that they hold such beliefs (Pozo, Rey, Sanz and Limón 1992). Moreover, modifying those beliefs requires a change of a conceptual or representational nature (Pozo, Scheuer, Mateos Sanz and Pérez Echeverría 2006).

Lastly, it may be unreasonable to promote certain skills among university students across the board without considering whether they possess certain attributes. Obvious as it may be, it should be remembered that educational resources, like resources of any kind, are necessarily limited. Since we know that only 2% of people devote themselves to science (Feist 2006), it may well be more worthwhile to invest effort in improving our ability to accurately identify those who have real potential. Anything else would be like trying to train a person with no vocal talent whatsoever to sing opera.

Contact details: nuria.anaya@urjc.es

References

Anaya-Reig, N. 2018. “Cajal: Key Psychological Factors in the Self-Construction of a Genius.” Social Epistemology. doi: 10.1080/02691728.2018.1522555.

Anaya-Reig, N., and M. Romo. 2017. “Cajal, Psychologist of Science.” The Spanish Journal of Psychology 20: e69. doi: 10.1017/sjp.2017.71.

Feist, G. J. 2006. The Psychology of Science and the Origins of the Scientific Mind. New Haven, CT: Yale University Press.

Feist, G. J., and M. E. Gorman. 1998. “The Psychology of Science: Review and Integration of a Nascent Discipline.” Review of General Psychology 2 (1): 3–47. doi: 10.1037/1089-2680.2.1.3.

Hayes, J. R. 1981. The Complete Problem Solver. Philadelphia, PA: Franklin Institute Press.

Moya, M. 1996. “Percepción social y personas.” In Psicología social, 93-119. Madrid, Spain: McGraw-Hill.

Pozo Municio, J. I. 1996. Aprendices y maestros. La nueva cultura del aprendizaje. Madrid, Spain: Alianza.

Pozo, J. I., M. P. Rey, A. Sanz, and M. Limón. 1992. “Las ideas de los alumnos sobre la ciencia como teorías implícitas.” Infancia y Aprendizaje 57: 3-22.

Pozo, J. I., N. Scheuer, M. M. Mateos Sanz, and M. P. Pérez Echeverría. 2006. “Las teorías implícitas sobre el aprendizaje y la enseñanza.” In Nuevas formas de pensar la enseñanza y el aprendizaje: las concepciones de profesores y alumnos, 95-134. Barcelona, Spain: Graó.

Ramón y Cajal, S. 1920. Reglas y consejos sobre investigación científica. (Los tónicos de la voluntad). 5th ed. Madrid, Spain: Nicolás Moya.

Ramón y Cajal, S. 1999. Advice for a Young Investigator, translated by N. Swanson and L. W. Swanson. Cambridge, MA: The MIT Press.

Romo, M. 1997. Psicología de la creatividad. Barcelona, Spain: Paidós.

Romo, M. 2008. Epistemología y Psicología. Madrid, Spain: Pirámide.

Runco, M. 2011. “Implicit theories.” In Encyclopaedia of Creativity, edited by M. Runco and S. R. Pritzker, 644-646. 2nd ed. Elsevier.

Simonton, D. K. 2006. “Creative Genius, Knowledge, and Reason: The Lives and Works of Eminent Creators.” In Creativity and Reason in Cognitive Development, edited by J. C. Kaufman and J. Baer, 43-59. New York, NY: Cambridge University Press.

Wegner, D. M., and R. R. Vallacher. 1977. Implicit Psychology: An Introduction to Social Cognition. New York, NY: Oxford University Press.

Wylie, C. D. 2018. “‘I Just Love Research’: Beliefs About What Makes Researchers Successful.” Social Epistemology 32 (4): 262-271. doi: 10.1080/02691728.2018.1458349.

Author information: Moti Mizrahi, Florida Institute of Technology, mmizrahi@fit.edu

Mizrahi, Moti. “More in Defense of Weak Scientism: Another Reply to Brown.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 7-25.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W1


In my (2017a), I defend a view I call Weak Scientism, which is the view that knowledge produced by scientific disciplines is better than knowledge produced by non-scientific disciplines.[1] Scientific knowledge can be said to be quantitatively better than non-scientific knowledge insofar as scientific disciplines produce more impactful knowledge–in the form of scholarly publications–than non-scientific disciplines (as measured by research output and research impact). Scientific knowledge can be said to be qualitatively better than non-scientific knowledge insofar as such knowledge is explanatorily, instrumentally, and predictively more successful than non-scientific knowledge.

Brown (2017a) raises several objections against my defense of Weak Scientism, and I have replied to his objections (Mizrahi 2017b), thereby showing again that Weak Scientism is a defensible view. Since then, Brown (2017b) has reiterated his objections in another reply on SERRC. Almost unchanged from his previous attack on Weak Scientism (Brown 2017a), Brown’s (2017b) objections are the following:

  1. Weak Scientism is not strong enough to count as scientism.
  2. Advocates of Strong Scientism should not endorse Weak Scientism.
  3. Weak Scientism does not show that philosophy is useless.
  4. My defense of Weak Scientism appeals to controversial philosophical assumptions.
  5. My defense of Weak Scientism is a philosophical argument.
  6. There is nothing wrong with persuasive definitions of scientism.

In what follows, I will respond to these objections, thereby showing once more that Weak Scientism is a defensible view. Since I have been asked to keep this as short as possible, however, I will try to focus on what I take to be new in Brown’s (2017b) latest attack on Weak Scientism.

Is Weak Scientism Strong Enough to Count as Scientism?

Brown (2017b) argues for (1) on the grounds that, on Weak Scientism, “philosophical knowledge may be nearly as valuable as scientific knowledge.” Brown (2017b, 4) goes on to characterize a view he labels “Scientism2,” which he admits is the same view as Strong Scientism, and says that “there is a huge logical gap between Strong Scientism (Scientism2) and Weak Scientism.”

As was the case the first time Brown raised this objection, it is not clear how it is supposed to show that Weak Scientism is not “really” a (weaker) version of scientism (Mizrahi 2017b, 10-11). Of course there is a logical gap between Strong Scientism and Weak Scientism; that is why I distinguish between these two epistemological views. If I am right, Strong Scientism is too strong to be a defensible version of scientism, whereas Weak Scientism is a defensible (weaker) version of scientism (Mizrahi 2017a, 353-354).

Of course Weak Scientism “leaves open the possibility that there is philosophical knowledge” (Brown 2017b, 5). If I am right, such philosophical knowledge would be inferior to scientific knowledge both quantitatively (in terms of research output and research impact) and qualitatively (in terms of explanatory, instrumental, and predictive success) (Mizrahi 2017a, 358).

Brown (2017b, 5) does try to offer a reason “for thinking it strange that Weak Scientism counts as a species of scientism” in his latest attack on Weak Scientism, which does not appear in his previous attack. He invites us to imagine a theist who believes that “modern science is the greatest new intellectual achievement since the fifteenth century” (emphasis in original). Brown then claims that this theist would be an advocate of Weak Scientism because Brown (2017b, 6) takes “modern science is the greatest new intellectual achievement since the fifteenth century” to be “(roughly) equivalent to Weak Scientism.” For Brown (2017b, 6), however, “it seems odd, to say the least, that [this theist] should count as an advocate (even roughly) of scientism.”

Unfortunately, Brown’s appeal to intuition is rather difficult to evaluate because his hypothetical case is under-described.[2] First, the key phrase, namely, “modern science is the greatest new intellectual achievement since the fifteenth century,” is vague in more ways than one. I have no idea what “greatest” is supposed to mean here. Greatest in what respects? What are the other “intellectual achievements” relative to which science is said to be “the greatest”?

Also, what does “intellectual achievement” mean here? There are multiple accounts and literary traditions in history and philosophy of science, science studies, and the like on what counts as “intellectual achievements” or progress in science (Mizrahi 2013b). Without a clear understanding of what these key phrases mean here, it is difficult to tell how Brown’s intuition about this hypothetical case is supposed to be a reason to think that Weak Scientism is not “really” a (weaker) version of scientism.

Toward the end of his discussion of (1), Brown says something that suggests he actually has an issue with the word ‘scientism’. Brown (2017b, 6) writes, “perhaps Mizrahi should coin a new word for the position with respect to scientific knowledge and non-scientific forms of academic knowledge he wants to talk about” (emphasis in original). It should be clear, of course, that it does not matter what label I use for the view that “Of all the knowledge we have, scientific knowledge is the best knowledge” (Mizrahi 2017a, 354; emphasis in original). What matters is the content of the view, not the label.

Whether Brown likes the label or not, Weak Scientism is a (weaker) version of scientism because it is the view that scientific ways of knowing are superior (in certain relevant respects) to non-scientific ways of knowing, whereas Strong Scientism is the view that scientific ways of knowing are the only ways of knowing. As I have pointed out in my previous reply to Brown, whether scientific ways of knowing are superior to non-scientific ways of knowing is essentially what the scientism debate is all about (Mizrahi 2017b, 13).

Before I conclude this discussion of (1), I would like to point out that Brown seems to have misunderstood Weak Scientism. He (2017b, 3) claims that “Weak Scientism is a normative and not a descriptive claim.” This is a mistake. As a thesis (Peels 2017, 11), Weak Scientism is a descriptive claim about scientific knowledge in comparison to non-scientific knowledge. This should be clear provided that we keep in mind what it means to say that scientific knowledge is better than non-scientific knowledge. As I have argued in my (2017a), to say that scientific knowledge is quantitatively better than non-scientific knowledge is to say that there is a lot more scientific knowledge than non-scientific knowledge (as measured by research output) and that the impact of scientific knowledge is greater than that of non-scientific knowledge (as measured by research impact).

To say that scientific knowledge is qualitatively better than non-scientific knowledge is to say that scientific knowledge is explanatorily, instrumentally, and predictively more successful than non-scientific knowledge. All these claims about the superiority of scientific knowledge to non-scientific knowledge are descriptive, not normative, claims. That is to say, Weak Scientism is the view that, as a matter of fact, knowledge produced by scientific fields of study is quantitatively (in terms of research output and research impact) and qualitatively (in terms of explanatory, instrumental, and predictive success) better than knowledge produced by non-scientific fields of study.

Of course, Weak Scientism does have some normative implications. For instance, if scientific knowledge is indeed better than non-scientific knowledge, then, other things being equal, we should give more evidential weight to scientific knowledge than to non-scientific knowledge. For example, suppose that I am considering whether to vaccinate my child or not. On the one hand, I have scientific knowledge in the form of results from clinical trials according to which MMR vaccines are generally safe and effective.

On the other hand, I have knowledge in the form of stories about children who were vaccinated and then began to display symptoms of autism. If Weak Scientism is true, and I want to make a decision based on the best available information, then I should give more evidential weight to the scientific knowledge about MMR vaccines than to the anecdotal knowledge about MMR vaccines simply because the former is scientific (i.e., knowledge obtained by means of the methods of science, such as clinical trials) and the latter is not.

Should Advocates of Strong Scientism Endorse Weak Scientism?

Brown (2017b, 7) argues for (2) on the grounds that “once the advocate of Strong Scientism sees that an advocate of Weak Scientism admits the possibility that there is real knowledge other than what is produced by the natural sciences […] the advocate of Strong Scientism, at least given their philosophical presuppositions, will reject Weak Scientism out of hand.” It is not clear which “philosophical presuppositions” Brown is talking about here. Brown quotes Rosenberg (2011, 20), who claims that physics tells us what reality is like, presumably as an example of a proponent of Strong Scientism who would not endorse Weak Scientism. But it is not clear why Brown thinks that Rosenberg would “reject Weak Scientism out of hand” (Brown 2017b, 7).

Like other proponents of scientism, Rosenberg should endorse Weak Scientism because, unlike Strong Scientism, Weak Scientism is a defensible view. Insofar as we should endorse the view that has the most evidence in its favor, Weak Scientism has more going for it than Strong Scientism does. For to show that Strong Scientism is true, one would have to show that no field of study other than scientific ones can produce knowledge. Of course, that is not easy to show. To show that Weak Scientism is true, one only needs to show that the knowledge produced in scientific fields of study is better (in certain relevant respects) than the knowledge produced in non-scientific fields.

That is precisely what I show in my (2017a). I argue that the knowledge produced in scientific fields is quantitatively better than the knowledge produced in non-scientific fields because there is a lot more scientific knowledge than non-scientific knowledge (as measured by research output) and the former has a greater impact than the latter (as measured by research impact). I also argue that the knowledge produced in scientific fields is qualitatively better than knowledge produced in non-scientific fields because it is more explanatorily, instrumentally, and predictively successful.

Contrary to what Brown (2017b, 7) seems to think, I do not have to show “that there is real knowledge other than scientific knowledge.” To defend Weak Scientism, all I have to show is that scientific knowledge is better (in certain relevant respects) than non-scientific knowledge. If anyone must argue for the claim that there is real knowledge other than scientific knowledge, it is Brown, for he wants to defend the value or usefulness of non-scientific knowledge, specifically, philosophical knowledge.

It is important to emphasize the point about the ways in which scientific knowledge is quantitatively and qualitatively better than non-scientific knowledge because it looks like Brown has confused the two. For he thinks that I justify my quantitative analysis of scholarly publications in scientific and non-scientific fields by “citing the precedent of epistemologists who often treat all items of knowledge as qualitatively the same” (Brown 2017b, 22; emphasis added).

Here Brown fails to carefully distinguish between my claim that scientific knowledge is quantitatively better than non-scientific knowledge and my claim that scientific knowledge is qualitatively better than non-scientific knowledge. For the purposes of a quantitative study of knowledge, information and data scientists can do precisely what epistemologists do and “abstract from various circumstances (by employing variables)” (Brown 2017b, 22) in order to determine which knowledge is quantitatively better.

How Is Weak Scientism Relevant to the Claim that Philosophy Is Useless?

Brown (2017b, 7-8) argues for (3) on the grounds that “Weak Scientism itself implies nothing about the degree to which philosophical knowledge is valuable or useful other than stating scientific knowledge is better than philosophical knowledge” (emphasis in original).

Strictly speaking, Brown is wrong about this because Weak Scientism does imply something about the degree to which scientific knowledge is better than philosophical knowledge. Recall that to say that scientific knowledge is quantitatively better than non-scientific knowledge is to say that scientific fields of study publish more research and that scientific research has greater impact than the research published in non-scientific fields of study.

Contrary to what Brown seems to think, we can say to what degree scientific research is superior to non-scientific research in terms of output and impact. That is precisely what bibliometric indicators like h-index and other metrics are for (Rousseau et al. 2018). Such bibliometric indicators allow us to say how many articles are published in a given field, how many of those published articles are cited, and how many times they are cited. For instance, according to Scimago Journal & Country Rank (2018), which contains data from the Scopus database, of the 3,815 Philosophy articles published in the United States in 2016-2017, approximately 14% are cited, and their h-index is approximately 160.

On the other hand, of the 24,378 Psychology articles published in the United States in 2016-2017, approximately 40% are cited, and their h-index is approximately 640. Contrary to what Brown seems to think, then, we can say to what degree research in Psychology is better than research in Philosophy in terms of research output (i.e., number of publications) and research impact (i.e., number of citations). We can use the same bibliometric indicators and metrics to compare research in other scientific and non-scientific fields of study.

As I have already said in my previous reply to Brown, “Weak Scientism does not entail that philosophy is useless” and “I have no interest in defending the charge that philosophy is useless” (Mizrahi 2017b, 11-12). So, I am not sure why Brown brings up (3) again. Since he insists, however, let me explain why philosophers who are concerned about the charge that philosophy is useless should engage with Weak Scientism as well.

Suppose that a foundation or agency is considering whether to give a substantial grant to one of two projects. The first project is that of a philosopher who will sit in her armchair and contemplate the nature of friendship.[3] The second project is that of a team of social scientists who will conduct a longitudinal study of the effects of friendship on human well-being (e.g., Yang et al. 2016).

If Weak Scientism is true, and the foundation or agency wants to fund the project that is likely to yield better results, then it should give the grant to the team of social scientists rather than to the armchair philosopher simply because the former’s project is scientific, whereas the latter’s is not. This is because the scientific project will more likely yield better knowledge than the non-scientific project will. In other words, unlike the project of the armchair philosopher, the scientific project will probably produce more research (i.e., more publications) that will have a greater impact (i.e., more citations) and the knowledge produced will be explanatorily, instrumentally, and predictively more successful than any knowledge that the philosopher’s project might produce.

This example should really hit home for Brown, since reading his latest attack on Weak Scientism gives one the impression that he thinks of philosophy as a personal, “self-improvement” kind of enterprise, rather than an academic discipline or field of study. For instance, he seems to be saying that philosophy is not in the business of producing “new knowledge” or making “discoveries” (Brown 2017b, 17).

Rather, Brown (2017b, 18) suggests that philosophy “is more about individual intellectual progress rather than collective intellectual progress.” Individual progress or self-improvement is great, of course, but I am not sure that it helps Brown’s case in defense of philosophy against what he sees as “the menace of scientism.” For this line of thinking simply adds fuel to the fire set by those who want to see philosophy burn. As I point out in my (2017a), scientists who dismiss philosophy do so because they find it academically useless.

For instance, Hawking and Mlodinow (2010, 5) write that ‘philosophy is dead’ because it ‘has not kept up with developments in science, particularly physics’ (emphasis added). Similarly, Weinberg (1994, 168) says that, as a working scientist, he ‘finds no help in professional philosophy’ (emphasis added). (Mizrahi 2017a, 356)

Likewise, Richard Feynman is rumored to have said that “philosophy of science is about as useful to scientists as ornithology is to birds” (Kitcher 1998, 32). It is clear, then, that what these scientists complain about is professional or academic philosophy. Accordingly, they would have no problem with anyone who wants to pursue philosophy for the sake of “individual intellectual progress.” But that is not the issue here. Rather, the issue is academic knowledge or research.

Does My Defense of Weak Scientism Appeal to Controversial Philosophical Assumptions?

Brown (2017b, 9) argues for (4) on the grounds that I assume that “we are supposed to privilege empirical (I read Mizrahi’s ‘empirical’ here as ‘experimental/scientific’) evidence over non-empirical evidence.” But that is question-begging, Brown claims, since he takes me to be assuming something like the following: “If the question of whether scientific knowledge is superior to [academic] non-scientific knowledge is a question that one can answer empirically, then, in order to pose a serious challenge to my [Mizrahi’s] defense of Weak Scientism, Brown must come up with more than mere ‘what ifs’” (Mizrahi 2017b, 10; quoted in Brown 2017b, 8).

This objection seems to involve a confusion about how defeasible reasoning and defeating evidence are supposed to work. Given that “a rebutting defeater is evidence which prevents E from justifying belief in H by supporting not-H in a more direct way” (Kelly 2016), claims about what is actual cannot be defeated by mere possibilities, since claims of the form “Possibly, p” do not prevent a piece of evidence from justifying belief in “Actually, p” by supporting “Actually, not-p” directly.

For example, the claim “Hillary Clinton could have been the 45th President of the United States” does not prevent my perceptual and testimonial evidence from justifying my belief in “Donald Trump is the 45th President of the United States,” since the former does not support “It is not the case that Donald Trump is the 45th President of the United States” in a direct way. In general, claims of the form “Possibly, p” are not rebutting defeaters against claims of the form “Actually, p.” Defeating evidence against claims of the form “Actually, p” must be about what is actual (or at least probable), not what is merely possible, in order to support “Actually, not-p” directly.

For this reason, although “the production of some sorts of non-scientific knowledge work may be harder than the production of scientific knowledge” (Brown 2017b, 19), Brown gives no reasons to think that it is actually or probably harder, which is why this possibility does nothing to undermine the claim that scientific knowledge is actually better than non-scientific knowledge. Just as it is possible that philosophical knowledge is harder to produce than scientific knowledge, it is also possible that scientific knowledge is harder to produce than philosophical knowledge. It is also possible that scientific and non-scientific knowledge are equally hard to produce.

Similarly, the possibility that “a little knowledge about the noblest things is more desirable than a lot of knowledge about less noble things” (Brown 2017b, 19), whatever “noble” is supposed to mean here, does not prevent my bibliometric evidence (in terms of research output and research impact) from justifying the belief that scientific knowledge is better than non-scientific knowledge. Just as it is possible that philosophical knowledge is “nobler” (whatever that means) than scientific knowledge, it is also possible that scientific knowledge is “nobler” than philosophical knowledge or that they are equally “noble” (Mizrahi 2017b, 9-10).

In fact, even if Brown (2017a, 47) is right that “philosophy is harder than science” and that “knowing something about human persons–particularly qua embodied rational being–is a nobler piece of knowledge than knowing something about any non-rational object” (Brown 2017b, 21), whatever “noble” is supposed to mean here, it would still be the case that scientific fields produce more knowledge (as measured by research output), and more impactful knowledge (as measured by research impact), than non-scientific disciplines.

So, I am not sure why Brown keeps insisting on mentioning these mere possibilities. He also seems to forget that the natural and social sciences study human persons as well. Even if knowledge about human persons is “nobler” (whatever that means), there is a lot of scientific knowledge about human persons coming from scientific fields, such as anthropology, biology, genetics, medical science, neuroscience, physiology, psychology, and sociology, to name just a few.

One of the alleged “controversial philosophical assumptions” that my defense of Weak Scientism rests on, and that Brown (2017a) complains about the most in his previous attack on Weak Scientism, is my characterization of philosophy as the scholarly work that professional philosophers do. In my previous reply, I argue that Brown is not in a position to complain that this is a “controversial philosophical assumption,” since he rejects my characterization of philosophy as the scholarly work that professional philosophers produce, but he does not tell us what counts as philosophical (Mizrahi 2017b, 13). Well, it turns out that Brown does not reject my characterization of philosophy after all. For, after he was challenged to say what counts as philosophical, he came up with the following “sufficient condition for pieces of writing and discourse that count as philosophy” (Brown 2017b, 11):

(P) Those articles published in philosophical journals and what academics with a Ph.D. in philosophy teach in courses at public universities with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science (Brown 2017b, 11; emphasis added).

Clearly, this is my characterization of philosophy in terms of the scholarly work that professional philosophers produce. Brown simply adds teaching to it. Since he admits that “scientists teach students too” (Brown 2017b, 18), however, it is not clear how adding teaching to my characterization of philosophy is supposed to support his attack on Weak Scientism. In fact, it may actually undermine his attack on Weak Scientism, since there is a lot more teaching going on in STEM fields than in non-STEM fields.

According to data from the National Center for Education Statistics (2017), in the 2015-16 academic year, post-secondary institutions in the United States conferred only 10,157 Bachelor’s degrees in philosophy and religious studies compared to 113,749 Bachelor’s degrees in biological and biomedical sciences, 106,850 Bachelor’s degrees in engineering, and 117,440 in psychology. In general, in the 2015-2016 academic year, 53.3% of the Bachelor’s degrees conferred by post-secondary institutions in the United States were degrees in STEM fields, whereas only 5.5% of conferred Bachelor’s degrees were in the humanities (Figure 1).

Figure 1. Bachelor’s degrees conferred by post-secondary institutions in the US, by field of study, 2015-2016 (Source: NCES)

 

Clearly, then, there is a lot more teaching going on in science than in philosophy (or even in the humanities in general), since a lot more students take science courses and graduate with degrees in scientific fields of study. So, even if Brown is right that we should include teaching in what counts as philosophy, it is still the case that scientific fields are quantitatively better than non-scientific fields.

Since Brown (2017b, 13) seems to agree that philosophy (at least in part) is the scholarly work that academic philosophers produce, it is peculiar that he complains, without argument, that “an understanding of philosophy and knowledge as operational is […] shallow insofar as philosophy and knowledge can’t fit into the narrow parameters of another empirical study.” Once Brown (2017b, 11) grants that “Those articles published in philosophical journals” count as philosophy, he thereby also grants that these journal articles can be studied empirically using the methods of bibliometrics, information science, or data science.

That is, Brown (2017b, 11) concedes that philosophy consists (at least in part) of “articles published in philosophical journals,” and so these articles can be compared to other articles published in science journals to determine research output, and they can also be compared to articles published in science journals in terms of citation counts to determine research impact. What exactly is “shallow” about that? Brown does not say.

A perhaps unintended consequence of Brown’s (P) is that the “great thinkers from the past” (Brown 2017b, 18), those that Brown (2017b, 13) likes to remind us “were not professional philosophers,” did not do philosophy, by Brown’s own lights. For “Socrates, Plato, Augustine, Descartes, Locke, and Hume” (Brown 2017b, 13) did not publish in philosophy journals, were not academics with a Ph.D. in philosophy, and did not teach at public universities courses “with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science” (Brown 2017b, 11).

Another peculiar thing about Brown’s (P) is the restriction of the philosophical to what is being taught in public universities. What about community colleges and private universities? Is Brown suggesting that philosophy courses taught at private universities do not count as philosophy courses? This is peculiar, especially in light of the fact that, at least according to The Philosophical Gourmet Report (Brogaard and Pynes 2018), the top ranked philosophy programs in the United States are mostly located in private universities, such as New York University and Princeton University.

Is My Defense of Weak Scientism a Scientific or a Philosophical Argument?

Brown argues for (5) on the grounds that my (2017a) is published in a philosophy journal, namely, Social Epistemology, and so it is a piece of philosophical knowledge by my lights, since I count as philosophy the research articles that are published in philosophy journals.

Brown would be correct about this if Social Epistemology were a philosophy journal. But it is not. Social Epistemology: A Journal of Knowledge, Culture and Policy is an interdisciplinary journal, as its “aim and scope” statement makes clear:

Social Epistemology provides a forum for philosophical and social scientific enquiry that incorporates the work of scholars from a variety of disciplines who share a concern with the production, assessment and validation of knowledge. The journal covers both empirical research into the origination and transmission of knowledge and normative considerations which arise as such research is implemented, serving as a guide for directing contemporary knowledge enterprises (Social Epistemology 2018).

The fact that Social Epistemology is an interdisciplinary journal, with contributions from “Philosophers, sociologists, psychologists, cultural historians, social studies of science researchers, [and] educators” (Social Epistemology 2018) would not surprise anyone who is familiar with the history of the journal. The founding editor of the journal is Steve Fuller, who was trained in an interdisciplinary field, namely, History and Philosophy of Science (HPS), and is currently the Auguste Comte Chair in Social Epistemology in the Department of Sociology at Warwick University. Brown (2017b, 15) would surely agree that sociology is not philosophy, given that, for him, “cataloguing what a certain group of people believes is sociology and not philosophy.” The current executive editor of the journal is James H. Collier, who is a professor of Science and Technology in Society at Virginia Tech, and who was trained in Science and Technology Studies (STS), which is an interdisciplinary field as well.

Brown asserts without argument that the methods of a scientific field of study, such as sociology, are different in kind from those of philosophy: “What I contend is that […] philosophical methods are different in kind from those of the experimental scientists [sciences?]” (Brown 2017b, 24). He then goes on to speculate about what it means to say that an explanation is testable (Brown 2017b, 25). What Brown comes up with is rather unclear to me. For instance, I have no idea what it means to evaluate an explanation by inductive generalization (Brown 2017b, 25).

Instead, Brown should have consulted any one of the logic and reasoning textbooks I keep referring to in my (2017a) and (2017b) to find out that it is generally accepted among philosophers that the good-making properties of explanations, philosophical and otherwise, include testability among other good-making properties (see, e.g., Sinnott-Armstrong and Fogelin 2010, 257). As far as testability is concerned, to test an explanation or hypothesis is to determine “whether predictions that follow from it are true” (Salmon 2013, 255). In other words, “To say that a hypothesis is testable is at least to say that some prediction made on the basis of that hypothesis may confirm or disconfirm it” (Copi et al. 2011, 515).

For this reason, Feser’s analogy according to which “to compare the epistemic values of science and philosophy and fault philosophy for not being good at making testable predications [sic] is like comparing metal detectors and gardening tools and concluding gardening tools are not as good as metal detectors because gardening tools do not allow us to successfully detect for metal” (Brown 2017b, 25), which Brown likes to refer to (Brown 2017a, 48), is inapt.

It is not an apt analogy because, unlike metal detectors and gardening tools, which serve different purposes, both science and philosophy are in the business of explaining things. Indeed, Brown admits that, like good scientific explanations, “good philosophical theories explain things” (emphasis in original). In other words, Brown admits that both scientific and philosophical theories are instruments of explanation (unlike gardening and metal-detecting instruments). To provide good explanations, then, both scientific and philosophical theories must be testable (Mizrahi 2017b, 19-20).

What Is Wrong with Persuasive Definitions of Scientism?

Brown (2017b, 31) argues for (6) on the grounds that “persuasive definitions are [not] always dialectically pernicious.” He offers an argument whose conclusion is “abortion is murder” as an example of an argument for a persuasive definition of abortion. He then outlines an argument for a persuasive definition of scientism according to which “Weak Scientism is a view that has its advocates putting too high a value on scientific knowledge” (Brown 2017b, 32).

The problem, however, is that Brown is confounding arguments for a definition with the definition itself. Having an argument for a persuasive definition does not change the fact that it is a persuasive definition. To illustrate this point, let me give an example that I think Brown will appreciate. Suppose I define theism as an irrational belief in the existence of God. That is, “theism” means “an irrational belief in the existence of God.” I can also provide an argument for this definition:

P1: If it is irrational to have paradoxical beliefs and God is a paradoxical being, then theism is an irrational belief in the existence of God.

P2: It is irrational to have paradoxical beliefs and God is a paradoxical being (e.g., the omnipotence paradox).[4]

Therefore,

C: Theism is an irrational belief in the existence of God.

But surely, theists will complain that my definition of theism is a “dialectically pernicious” persuasive definition. For it stacks the deck against theists. It states that theists are already making a mistake, by definition, simply by believing in the existence of God. Even though I have provided an argument for this persuasive definition of theism, my definition is still a persuasive definition of theism, and my argument is unlikely to convince anyone who doesn’t already think that theism is irrational. Indeed, Brown (2017b, 30) himself admits that much when he says “good luck with that project!” about trying to construct a sound argument for “abortion is murder.” I take this to mean that pro-choice advocates would find his argument for “abortion is murder” dialectically inert precisely because it defines abortion in a manner that transfers “emotive force” (Salmon 2013, 65), which they cannot accept.

Likewise, theists would find the argument above dialectically inert precisely because it defines theism in a manner that transfers “emotive force” (Salmon 2013, 65), which they cannot accept. In other words, Brown seems to agree that there are good dialectical reasons to avoid appealing to persuasive definitions. Therefore, like “abortion is murder,” “theism is an irrational belief in the existence of God,” and “‘Homosexual’ means ‘one who has an unnatural desire for those of the same sex’” (Salmon 2013, 65), “Weak Scientism is a view that has its advocates putting too high a value on scientific knowledge” (Brown 2017b, 32) is a “dialectically pernicious” persuasive definition (cf. Williams 2015, 14).

Like persuasive definitions in general, it “masquerades as an honest assignment of meaning to a term while condemning or blessing with approval the subject matter of the definiendum” (Hurley 2015, 101). As I have pointed out in my (2017a), the problem with such definitions is that they “are strategies consisting in presupposing an unaccepted definition, taking a new unknowable description of meaning as if it were commonly shared” (Macagno and Walton 2014, 205).

As for Brown’s argument for the persuasive definition of Weak Scientism, according to which it “is a view that has its advocates putting too high a value on scientific knowledge” (Brown 2017b, 32), a key premise in this argument is the claim that there is a piece of philosophical knowledge that is better than scientific knowledge. This is premise 36 in Brown’s argument:

Some philosophers qua philosophers know that (a) true friendship is a necessary condition for human flourishing and (b) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for true friendship and (c) (therefore) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for human flourishing (see, e.g., the arguments in Plato’s Gorgias) and knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge (see, e.g., St. Augustine’s Confessions, book five, chapters iii and iv) [assumption]

There is a lot to unpack here, but I will focus on what I take to be the points most relevant to the scientism debate. First, Brown assumes 36 without argument, but why think it is true? In particular, why think that (a), (b), and (c) count as philosophical knowledge? Brown says that philosophers know (a), (b), and (c) in virtue of being philosophers, but he does not tell us why that is the case.

After all, accounts of friendship, with lessons about the significance of friendship, predate philosophy (see, e.g., the friendship of Gilgamesh and Enkidu in The Epic of Gilgamesh). Did it really take Plato and Augustine to tell us about the significance of friendship? In fact, on Brown’s characterization of philosophy, namely, (P), (a), (b), and (c) do not count as philosophical knowledge at all, since Plato and Augustine did not publish in philosophy journals, were not academics with a Ph.D. in philosophy, and did not teach at public universities courses “with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science” (Brown 2017b, 11).

Second, some philosophers, like Epicurus, need (and think that others need) friends to flourish, whereas others, like Diogenes of Sinope, need no one. For Diogenes, friends will only interrupt his sunbathing (Arrian VII.2). My point is not simply that philosophers disagree about the value of friendship and human flourishing. Of course they disagree.[5]

Rather, my point is that, in order to establish general truths about human beings, such as “Human beings need friends to flourish,” one must employ the methods of science, such as randomization and sampling procedures, blinding protocols, methods of statistical analysis, and the like; otherwise, one would simply commit the fallacies of cherry-picking anecdotal evidence and hasty generalization (Salmon 2013, 149-151). After all, the claim “Some need friends to flourish” does not necessitate, or even make more probable, the truth of “Human beings need friends to flourish.”[6]

Third, why think that “knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge” (Brown 2017b, 32)? Better in what sense? Quantitatively? Qualitatively? Brown does not tell us. He simply declares it “self-evident” (Brown 2017b, 32). I take it that Brown would not want to argue that “knowledge concerning the necessary conditions of human flourishing” is better than scientific knowledge in the quantitative (i.e., in terms of research output and research impact) and qualitative (i.e., in terms of explanatory, instrumental, and predictive success) respects in which scientific knowledge is better than non-scientific knowledge, according to Weak Scientism.

If so, then in what sense exactly is “knowledge concerning the necessary conditions of human flourishing” (Brown 2017b, 32) supposed to be better than scientific knowledge? Brown (2017b, 32) simply assumes that it is, without argument and without telling us in what sense it is better.

Of course, philosophy does not have a monopoly on friendship and human flourishing as research topics. Psychologists and sociologists, among other scientists, work on friendship as well (see, e.g., Hojjat and Moyer 2017). To get an idea of how much research on friendship is done in scientific fields, such as psychology and sociology, and how much is done in philosophy, we can use a database like Web of Science.

Currently (03/29/2018), there are 12,334 records in Web of Science on the topic “friendship.” Only 76 of these records (0.61%) are from the Philosophy research area. Most of the records are from the Psychology (5,331 records) and Sociology (1,111) research areas (43.22% and 9%, respectively). As we can see from Figure 2, most of the research on friendship is done in scientific fields of study, such as psychology, sociology, and other social sciences.

Figure 2. Number of records on the topic “friendship” in Web of Science by research area (Source: Web of Science)
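The shares above are simple proportions of the total record count; a quick check in Python, using the figures reported in the text:

```python
# Web of Science record counts on the topic "friendship", as reported above.
total = 12334
by_area = {"Philosophy": 76, "Psychology": 5331, "Sociology": 1111}

# Each research area's share of all records, as a percentage (one decimal).
shares = {area: round(100 * n / total, 1) for area, n in by_area.items()}
print(shares)  # {'Philosophy': 0.6, 'Psychology': 43.2, 'Sociology': 9.0}
```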


In terms of research impact, too, scientific knowledge about friendship is superior to philosophical knowledge about friendship. According to Web of Science, the average citations per year for Psychology research articles on the topic of friendship is 2826.11 (h-index is 148 and the average citations per item is 28.1), and the average citations per year for Sociology research articles on the topic of friendship is 644.10 (h-index is 86 and the average citations per item is 30.15), whereas the average citations per year for Philosophy research articles on friendship is 15.02 (h-index is 13 and the average citations per item is 8.11).

Quantitatively, then, psychological and sociological knowledge on friendship is better than philosophical knowledge in terms of research output and research impact. Both Psychology and Sociology produce significantly more research on friendship than Philosophy does, and the research they produce has significantly more impact (as measured by citation counts) than philosophical research on the same topic.
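The h-index figures quoted above are computed from per-article citation counts: a body of work has index h when h of its articles have at least h citations each. A minimal sketch (the citation counts here are made up for illustration, not drawn from Web of Science):

```python
def h_index(citations):
    """Largest h such that at least h articles have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for five articles.
print(h_index([10, 8, 5, 4, 3]))  # 4: four articles have at least 4 citations
```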

Qualitatively, too, psychological and sociological knowledge about friendship is better than philosophical knowledge about friendship. For, instead of rather vague statements about how “true friendship is a necessary condition for human flourishing” (Brown 2017b, 32) that are based on mostly armchair speculation, psychological and sociological research on friendship provides detailed explanations and accurate predictions about the effects of friendship (or lack thereof) on human well-being.

For instance, numerous studies provide evidence for the effects of friendship, or the lack of it, on physical well-being (see, e.g., Yang et al. 2016) as well as mental well-being (see, e.g., Cacioppo and Patrick 2008). Further studies provide explanations for the biological and genetic bases of these effects (Cole et al. 2011). This knowledge, in turn, informs interventions designed to help people deal with loneliness and social isolation (see, e.g., Masi et al. 2011).[7]

To sum up, Brown (2017b, 32) has given no reasons to think that “knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge.” He does not even tell us what “better” is supposed to mean here. He also ignores the fact that scientific fields of study, such as psychology and sociology, produce plenty of knowledge about human flourishing, both physical and mental well-being. In fact, as we have seen, science produces a lot more knowledge about topics related to human well-being, such as friendship, than philosophy does. For this reason, Brown (2017b, 32) has failed to show that “there is non-scientific form of knowledge better than scientific knowledge.”

Conclusion

At this point, I think it is quite clear that Brown and I are talking past each other on a couple of levels. First, I follow scientists (e.g., Weinberg 1994, 166-190) and philosophers (e.g., Haack 2007, 17-18 and Peels 2016, 2462) on both sides of the scientism debate in treating philosophy as an academic discipline or field of study, whereas Brown (2017b, 18) insists on thinking about philosophy as a personal activity of “individual intellectual progress.” Second, I follow scientists (e.g., Hawking and Mlodinow 2010, 5) and philosophers (e.g., Kidd 2016, 12-13 and Rosenberg 2011, 307) on both sides of the scientism debate in thinking about knowledge as the scholarly work or research produced in scientific fields of study, such as the natural sciences, as opposed to non-scientific fields of study, such as the humanities, whereas Brown insists on thinking about philosophical knowledge as personal knowledge.

To anyone who wishes to defend philosophy’s place in research universities alongside academic disciplines, such as history, linguistics, and physics, armed with this conception of philosophy as a “self-improvement” activity, I would use Brown’s (2017b, 30) words to say, “good luck with that project!” A much more promising strategy, I propose, is for philosophy to embrace scientific ways of knowing and for philosophers to incorporate scientific methods into their research.[8]

Contact details: mmizrahi@fit.edu

References

Arrian. “The Final Phase.” In Alexander the Great: Selections from Arrian, Diodorus, Plutarch, and Quintus Curtius, edited by J. Romm, translated by P. Mensch and J. Romm, 149-172. Indianapolis, IN: Hackett Publishing Company, Inc., 2005.

Ashton, Z., and M. Mizrahi. “Intuition Talk is Not Methodologically Cheap: Empirically Testing the “Received Wisdom” about Armchair Philosophy.” Erkenntnis (2017): DOI 10.1007/s10670-017-9904-4.

Ashton, Z., and M. Mizrahi. “Show Me the Argument: Empirically Testing the Armchair Philosophy Picture.” Metaphilosophy 49, no. 1-2 (2018): 58-70.

Brogaard, B., and C. A. Pynes (eds.). “Overall Rankings.” The Philosophical Gourmet Report. Wiley Blackwell, 2018. Available at http://34.239.13.205/index.php/overall-rankings/.

Brown, C. M. “Some Objections to Moti Mizrahi’s ‘What’s So Bad about Scientism?’.” Social Epistemology Review and Reply Collective 6, no. 8 (2017a): 42-54.

Brown, C. M. “Defending Some Objections to Moti Mizrahi’s Arguments Scientism.” Social Epistemology Review and Reply Collective 7, no. 2 (2017b): 1-35.

Cacioppo, J. T., and W. Patrick. Loneliness: Human Nature and the Need for Social Connection. New York: W. W. Norton & Co., 2008.

Cole, S. W., L. C. Hawkley, J. M. G. Arevaldo, and J. T. Cacioppo. “Transcript Origin Analysis Identifies Antigen-Presenting Cells as Primary Targets of Socially Regulated Gene Expression in Leukocytes.” Proceedings of the National Academy of Sciences 108, no. 7 (2011): 3080-3085.

Copi, I. M., C. Cohen, and K. McMahon. Introduction to Logic. Fourteenth Edition. New York: Prentice Hall, 2011.

Haack, S. Defending Science–within Reason: Between Scientism and Cynicism. New York: Prometheus Books, 2007.

Hawking, S., and L. Mlodinow. The Grand Design. New York: Bantam Books, 2010.

Hojjat, M., and A. Moyer (eds.). The Psychology of Friendship. New York: Oxford University Press, 2017.

Hurley, P. J. A Concise Introduction to Logic. Twelfth Edition. Stamford, CT: Cengage Learning, 2015.

Kelly, T. “Evidence.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). https://plato.stanford.edu/archives/win2016/entries/evidence/.

Kidd, I. J. “How Should Feyerabend Have Defended Astrology? A Reply to Pigliucci.” Social Epistemology Review and Reply Collective 5 (2016): 11–17.

Kitcher, P. “A Plea for Science Studies.” In A House Built on Sand: Exposing Postmodernist Myths about Science, edited by N. Koertge, 32–55. New York: Oxford University Press, 1998.

Lewis, C. S. The Four Loves. New York: Harcourt Brace & Co., 1960.

Macagno, F., and D. Walton. Emotive Language in Argumentation. New York: Cambridge University Press, 2014.

Masi, C. M., H. Chen, and L. C. Hawkley. “A Meta-Analysis of Interventions to Reduce Loneliness.” Personality and Social Psychology Review 15, no. 3 (2011): 219-266.

Mizrahi, M. “Intuition Mongering.” The Reasoner 6, no. 11 (2012): 169-170.

Mizrahi, M. “More Intuition Mongering.” The Reasoner 7, no. 1 (2013a): 5-6.

Mizrahi, M. “What is Scientific Progress? Lessons from Scientific Practice.” Journal for General Philosophy of Science 44, no. 2 (2013b): 375-390.

Mizrahi, M. “New Puzzles about Divine Attributes.” European Journal for Philosophy of Religion 5, no. 2 (2013c): 147-157.

Mizrahi, M. “The Pessimistic Induction: A Bad Argument Gone Too Far.” Synthese 190, no. 15 (2013d): 3209-3226.

Mizrahi, M. “Does the Method of Cases Rest on a Mistake?” Review of Philosophy and Psychology 5, no. 2 (2014): 183-197.

Mizrahi, M. “On Appeals to Intuition: A Reply to Muñoz-Suárez.” The Reasoner 9, no. 2 (2015a): 12-13.

Mizrahi, M. “Don’t Believe the Hype: Why Should Philosophical Theories Yield to Intuitions?” Teorema: International Journal of Philosophy 34, no. 3 (2015b): 141-158.

Mizrahi, M. “Historical Inductions: New Cherries, Same Old Cherry-Picking.” International Studies in the Philosophy of Science 29, no. 2 (2015c): 129-148.

Mizrahi, M. “Three Arguments against the Expertise Defense.” Metaphilosophy 46, no. 1 (2015d): 52-64.

Mizrahi, M. “The History of Science as a Graveyard of Theories: A Philosophers’ Myth?” International Studies in the Philosophy of Science 30, no. 3 (2016): 263-278.

Mizrahi, M. “What’s So Bad about Scientism?” Social Epistemology 31, no. 4 (2017a): 351-367.

Mizrahi, M. “In Defense of Weak Scientism: A Reply to Brown.” Social Epistemology Review and Reply Collective 6, no. 11 (2017b): 9-22.

Mizrahi, M. “Introduction.” In The Kuhnian Image of Science: Time for a Decisive Transformation? Edited by M. Mizrahi, 1-22. London: Rowman & Littlefield, 2017c.

National Center for Education Statistics. “Bachelor’s degrees conferred by postsecondary institutions, by field of study: Selected years, 1970-71 through 2015-16.” Digest of Education Statistics (2017). https://nces.ed.gov/programs/digest/d17/tables/dt17_322.10.asp?current=yes.

Peels, R. “The Empirical Case Against Introspection.” Philosophical Studies 173, no. 9 (2016): 2461-2485.

Peels, R. “Ten Reasons to Embrace Scientism.” Studies in History and Philosophy of Science Part A 63 (2017): 11-21.

Rosenberg, A. The Atheist’s Guide to Reality: Enjoying Life Without Illusions. New York: W. W. Norton, 2011.

Rousseau, R., L. Egghe, and R. Guns. Becoming Metric-Wise: A Bibliometric Guide for Researchers. Cambridge, MA: Elsevier, 2018.

Salmon, M. H. Introduction to Logic and Critical Thinking. Sixth Edition. Boston, MA: Wadsworth, 2013.

Scimago Journal & Country Rank. “Subject Bubble Chart.” SJR: Scimago Journal & Country Rank. Accessed on April 3, 2018. http://www.scimagojr.com/mapgen.php?maptype=bc&country=US&y=citd.

Sinnott-Armstrong, W., and R. J. Fogelin. Understanding Arguments: An Introduction to Informal Logic. Eighth Edition. Belmont, CA: Wadsworth Cengage Learning, 2010.

Social Epistemology. “Aims and Scope.” Social Epistemology: A Journal of Knowledge, Culture and Policy (2018). https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=tsep20.

Weinberg, S. Dreams of a Final Theory: The Scientist’s Search for the Ultimate Laws of Nature. New York: Random House, 1994.

Williams, R. N. “Introduction.” In Scientism: The New Orthodoxy, edited by R. N. Williams and D. N. Robinson, 1-22. New York: Bloomsbury Academic, 2015.

Yang, C. Y., C. Boen, K. Gerken, T. Li, K. Schorpp, and K. M. Harris. “Social Relationships and Physiological Determinants of Longevity Across the Human Life Span.” Proceedings of the National Academy of Sciences 113, no. 3 (2016): 578-583.

[1] I thank Adam Riggio for inviting me to respond to Brown’s second attack on Weak Scientism.

[2] On why appeals to intuition are bad arguments, see Mizrahi (2012), (2013a), (2014), (2015a), (2015b), and (2015d).

[3] I use friendship as an example here because Brown (2017b, 31) uses it as an example of philosophical knowledge. I will say more about that in Section 6.

[4] For more on paradoxes involving the divine attributes, see Mizrahi (2013c).

[5] “Friendship is unnecessary, like philosophy, like art, like the universe itself (for God did not need to create)” (Lewis 1960, 71).

[6] On fallacious inductive reasoning in philosophy, see Mizrahi (2013d), (2015c), (2016), and (2017c).

[7] See also “The Friendship Bench” project: https://www.friendshipbenchzimbabwe.org/.

[8] For recent examples, see Ashton and Mizrahi (2017) and (2018).

Special issue of Social Epistemology on Psychology of Science and Technology (PDF)

Editors:
Greg Feist, San Jose State University, (gregfeist@gmail.com)
Michael E. Gorman, University of Virginia, (meg3cstar@gmail.com)

This special issue calls for papers from any discipline that focuses on the psychological dimensions of science and technology; book reviews, essays, commentaries, less formal research pieces, and replies to articles published in other journals are also welcome. The deadline for submissions is February 1 of each year; late manuscripts will automatically be considered for the following year and can, of course, be labeled “in press” if and when they go through review and are accepted. Accepted articles go online and receive DOI numbers well before they appear in print, about a year after submission.

Psychology of science can include:

  • Cognitive Science: the kind of thinking and problem-solving strategies that are used by scientists and engineers. Here work in history of science and technology can make a great contribution to the psychological understanding of how scientists think and work. Cognitive scientists also have a great deal to contribute here, including computational models of scientific processes that can be tested empirically.
  • Personality: what sorts of people go into science and engineering, and are there personality types that prefer this kind of work and do better at it?
  • Social psychology: the ways in which scientists and engineers cooperate and compete with each other, how collaborative teams form, what kinds of social norms emerge in laboratories, teams, and disciplines (“normal science”), how these norms are taught to newcomers, and what leads them to change.
  • Sociology: Is science a unique form of human activity, or does it resemble most other human activities in terms of the kinds of norms that are developed and the way controversies are resolved? What are the contents of scientific and technological expertise and what (if anything) distinguishes them from other forms of expertise? (Here the work of the Studies of Expertise and Experience group is especially relevant and welcome in these issues).
  • Anthropology: Here research focuses on immersion in science and engineering groups and communities, to get the perspective of insiders without ‘going native’.
  • Philosophy of science: What makes science and engineering different from other forms of inquiry? What epistemological issues do scientists face? Engineers?
  • Ethics: What constitutes ethical practice in science? In engineering?  Is it field-specific, or are there general norms (like Merton’s) that can cover a wide range of scientific and/or engineering disciplines?
  • Policy: The Science of Science and Innovation Policy community, the Center for Science Policy Outcomes, and the Woodrow Wilson Center all do excellent work on what policies and strategies are most likely to produce science and technology outcomes that will at least do no harm and at best improve the future of our species and planet.

More communities of expertise could be mentioned than those listed above. Because of the variety of disciplines that can contribute, we hope authors will remember to describe their methods and findings in ways that this broader readership can understand.

For more information on psychology of science, see:

    Feist, G. and Gorman, M.E.  Handbook of the Psychology of Science.  Springer, 2013
    Gorman, M.E. (Editor) (2010) Cognition in science and technology. Topics in Cognitive Science. 1 (4): 675-776; 2 (1): 15-100.
    Gorman, M. E. (2008). Scientific and technological expertise. Journal of Psychology of Science and Technology, 1(1), 23-31.
    Feist, G. 2006. The psychology of science and the origins of the scientific mind. New Haven: Yale University Press.
    Gorman, M. E., Tweney, R. D., Gooding, D. C., & Kincannon, A. (Eds.). (2005). Scientific and technological thinking. Mahwah, NJ: Lawrence Erlbaum Associates.
    Feist, G., and M. E. Gorman. 1998. The psychology of science: Review and integration of a nascent discipline. Review of General Psychology 2 (1): 3-47.
    R. Shadish & S. Fuller (Eds) (1994) Social psychology of science. New York: Guilford Press: 3-123.

    Tweney, R. D. (1998). Towards a cognitive psychology of science: Recent research and its implications. Current Directions in Psychological Science 7 (5) (October): 150-3.
    Gorman, M. E., Simulating Science: Heuristics and Mental Models in Technoscientific Thinking.  Bloomington: Indiana University Press, 1992.

Direct inquiries to either or both of the editors above. Submit manuscripts to: http://www.tandfonline.com/toc/tsep20/current

Author Information: Ryan D. Tweney, Bowling Green State University, tweney@bgsu.edu

Tweney, Ryan D. “Commentary on Anderson and Feist’s ‘Transformative Science’.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 23-26.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Dx


Traditionally, historic transformations in science were seen as the products of “great men”: Copernicus, Newton, Darwin, or, in the modern era, Einstein and Marie Curie. It was “genius” that propelled science to new levels of achievement and understanding. Such views have fallen out of favor as the collective efforts that go into scientific advances have come to be recognized, a change in perspective often attributed to Thomas Kuhn.

“Transformative Science” is a new phrase, now used even by funding agencies as one of the criteria for worthy projects. Barrett Anderson and Gregory Feist (2017), however, note how fuzzy the term has been and offer something like a definition. Transformative science, they suggest, is science that leads to a new branch on the “tree of knowledge.”

This is not a true definition, of course, since it is based upon a metaphor, one which is itself only fuzzily defined. Anderson and Feist note that the tree metaphor has been formalized in biology via cladistics. The present paper seeks to extend something similar to the domain of research evaluation. As with cladistics, if formal tools can be developed to measure aspects relevant to the growth of knowledge in science, then it may be that we will advance toward an understanding of transformative science. They thus propose a method for measuring the influence of a given, highly-cited, paper in a way potentially leading to the goal of identifying truly transformative results.

Plotting Generativity

Anderson and Feist’s exploratory study focused on a single year of publication (2002) in a single field (psychology), randomly selecting some 887 articles from among the top 10% of most highly cited articles. They then looked at the articles that had cited these 887, identifying those that were themselves among the most cited, and developed a “generativity score” for each of the original articles. In effect, among the 887 articles, they singled out those that had generated the highest numbers of highly cited articles. Each of the 887 was then examined and coded for funding source.
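A generativity score of this kind might be sketched as follows; the data structures, article IDs, and “highly cited” set are my own illustration, not Anderson and Feist’s actual coding scheme:

```python
def generativity(article, cited_by, highly_cited):
    """Number of articles citing `article` that are themselves highly cited.

    cited_by maps an article ID to the IDs of the articles that cite it;
    highly_cited is the set of IDs counted as most highly cited.
    """
    return sum(1 for citer in cited_by.get(article, []) if citer in highly_cited)

# Toy example: article "A" is cited by three papers, two of them highly cited.
cited_by = {"A": ["B", "C", "D"], "B": ["E"]}
highly_cited = {"B", "D", "E"}
print(generativity("A", cited_by, highly_cited))  # 2
```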

Descriptively, both generativity and times cited were heavily skewed (Figures 6, 140, and 8, 141), leading the authors to carry out a log transformation of each (Figures 7 and 9, 141) in an attempt to normalize the distributions. They claim that this was successful for the generativity scores, but not for the number of times cited. But note that the plots are severely misleading. Since there are 887 articles in the sample, and the number of points on each graph is far smaller, it must be the case that multiple articles are hidden within each of the plotted points. Is it the case that the vast majority of the articles are somewhere in the middle of each distribution? At the lower end? At the upper end? If so, the claim that generativity was successfully normalized is suspect. This is even apparent from the graph (Figure 7, 141), which, while roughly bell-shaped (as far as the outer “envelope” of points is concerned), clearly must have a large majority of points that share the same value. Since the mean and median of “G log 10” (see Table 4, 140) are reported as roughly equal at around 1.0, these shared points must be at the lower end of the scale (below an untransformed generativity score of 10). A better plot, with the individual points “jittered” to separate them, might then make the claim of approximate normality more convincing (Cleveland 1985).
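Jittering adds a small random offset to each point so that articles sharing a value no longer plot on top of one another; a minimal sketch in Python (the scores below are hypothetical, chosen to mimic the overplotting problem described above):

```python
import random

def jitter(values, scale=0.1, seed=0):
    """Add uniform noise in [-scale, scale] so overlapping points separate."""
    rng = random.Random(seed)
    return [v + rng.uniform(-scale, scale) for v in values]

# 803 articles, most sharing a handful of low generativity scores.
scores = [1] * 500 + [2] * 300 + [10, 25, 60]
jittered = jitter(scores)

# Every jittered value stays within `scale` of its original.
print(all(abs(j - v) <= 0.1 for j, v in zip(jittered, scores)))  # True
```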

Similar considerations applied to the times cited plots suggest a different distribution, though still far from normal, whether in raw scores or log transformed scores. Is it a Poisson distribution? Clearly not, since, in a Poisson, the mean and variance should be roughly equal. This is far from the case, whether raw scores or transformed scores are used.
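That mean-equals-variance property gives a quick dispersion check for any set of counts; a sketch with made-up numbers (not the actual citation data):

```python
from statistics import mean, pvariance

def dispersion_ratio(xs):
    """Variance-to-mean ratio: roughly 1 for Poisson-distributed counts."""
    return pvariance(xs) / mean(xs)

poisson_like = [2, 3, 1, 4, 2, 3, 0, 5]  # mean and variance nearly equal
skewed = [1, 1, 1, 1, 1, 1, 1, 200]      # variance far exceeds the mean

print(round(dispersion_ratio(poisson_like), 2))  # 0.9
print(round(dispersion_ratio(skewed), 1))        # 167.4 - nothing like a Poisson
```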

The nature of the distribution matters here because Pearson r was used to determine the relationship between generativity and times cited. But Pearson’s statistic is only appropriate for determining the linear relationship between two bivariate normal variables. Anderson and Feist report the correlations as r = 0.87 for G and TC and 0.69 for G log10 and TC log10. This strikes me as meaningless, especially if there are large numbers of low generativity points masked by the lack of jittering (as suggested above). In the similarly unjittered scatterplots (Figures 10 and 11, 142), which are, superficially, more-or-less bivariate linear, the points at the lower end look to be unrelated. This suggests that a small number of points at the upper end are pulling the regression line upwards, a possibility that recalls “Anscombe’s Quartet” (Tufte 2001, 14), a set of four relationships that each show a Pearson correlation of +0.82 but which are wildly different (see Figure 1 below).
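Anscombe’s Quartet is easy to reproduce: the four data sets below (Anscombe’s original values) have wildly different shapes, yet each yields a Pearson correlation of about +0.82:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

# All four correlations round to 0.82, yet only the first set is plausibly linear.
print([round(pearson_r(xs, ys), 2) for xs, ys in quartet])  # [0.82, 0.82, 0.82, 0.82]
```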

Similar problems with non-normal distributions may affect analysis of the relationship between funding source, generativity, and times cited. In any case, these relationships are incredibly small—among the reported eta-squared values, the largest is only 0.014. Whether or not the result is significant is not the issue; a relationship between variables that accounts for only 1.4% of the variance is too small to be of practical significance. The best conclusion to draw from these data is that there is no relationship between funding source (or its absence) and either generativity or times cited.

Ways to Look at the Data

Anderson and Feist have, of course, given us an exploratory study, so statistical and graphic nitpicking is not the main point. Instead, the real value of the study has to lie in the directions it points and the issues it raises. What they refer to as the “structure” of citations is an important aspect of scientific literature and, indeed, one that has been overlooked. Their operational implementation of generativity is potentially important, and it suggests a number of new ways to look at their data. In particular (and in the spirit of seeking to move toward a true recognition of transformative science), more attention needs to be paid to the extreme outliers in their data. Thus, both generativity and times cited show two (or more?) points at extremely large values in Figures 6 (140) and 8 (141). Are these the same two papers (assuming there are only two), as suggested by the scatterplot in Figure 10 (142)? And what are they, and where did they appear? What can be said about their content, the content of the citing articles, and the purposes for which they were cited? If they are methodological contributions, rather than articles that report a new phenomenon, we might draw different lessons from their structural extremity.

Many other questions could be raised using the existing data set. Is there a relationship between generativity and the lag in citations? That is, are highly generative articles more likely to show citations increasing over time, as one would expect if the influence of a generative article is to generate more research (which takes time and sometimes funding) rather than simply nods to something interesting? Or, similarly, what does the “decay” curve of citations look like? One might find large differences in “half-life,” even among relatively low-generativity articles; perhaps truly generative articles have a longer half-life than articles that are highly cited and otherwise seemingly generative. There is a great deal more to be learned here.
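One way to make “half-life” operational, under a definition I am assuming here rather than one taken from the article, is to record yearly citation counts after publication and take the first year by which at least half of all citations have accrued:

```python
def citation_half_life(yearly_counts):
    """Index (years since publication) of the first year by which at
    least half of the article's total citations have accumulated."""
    total = sum(yearly_counts)
    running = 0
    for year, count in enumerate(yearly_counts):
        running += count
        if running * 2 >= total:
            return year
    return None  # only reached if yearly_counts is empty

# A "nod" pattern: citations spike early and decay quickly.
nod = [10, 6, 3, 1, 1, 0]
# A "generative" pattern: citations keep growing as follow-up work appears.
generative = [1, 2, 4, 8, 10, 12]
```

Under this definition the “nod” series reaches its half-life within a year or two, while the “generative” series reaches it much later, which is exactly the kind of contrast the decay-curve question is after.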

Since this is an exploratory study, it would also make sense to use exploratory data analysis (Tukey 1970) to search for structural patterns in the data set. For example, one could plot the relation between generativity and times cited by dividing the generativity data into deciles and looking at the distribution of times cited for each decile; if the middle ranges of generativity had approximately bell-shaped distributions of times cited, then Pearson correlation coefficients might be appropriate for quantifying the middle range of the relationship.
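That decile approach is straightforward to sketch. Assuming the data come as (generativity, times cited) pairs, one can sort on generativity, cut the sorted list into ten equal-sized bins, and summarize the times-cited distribution within each bin:

```python
from statistics import median

def times_cited_by_decile(pairs):
    """Sort (generativity, times_cited) pairs by generativity, split into
    ten equal-size bins, and return the median times-cited per bin."""
    ordered = sorted(pairs)  # sorts on generativity first
    n = len(ordered)
    medians = []
    for d in range(10):
        chunk = ordered[d * n // 10:(d + 1) * n // 10]
        medians.append(median(tc for _, tc in chunk))
    return medians

# Toy data: 100 hypothetical articles whose citation counts roughly
# track generativity, so the per-decile medians should rise steadily.
pairs = [(g, 2 * g + (g % 3)) for g in range(100)]
decile_medians = times_cited_by_decile(pairs)
```

Plotting these per-decile distributions (e.g., as boxplots) would show at a glance whether the middle of the relationship is well-behaved enough for Pearson’s r.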

Finally, since the goal is to obtain information about the structure of citations (rather than simply their number), aggregate statistics like means, correlation coefficients, and the like seem to rather miss the point. For example, is it the case that highly generative articles have chains of subsequent citations that branch off when new articles citing them become themselves highly cited? If so, and if non-generative articles show the simple “fan-like” pattern (citations without further branching), then tracing these chains would give a direct look at the structure of the network of citations.
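A simplified way to operationalize this contrast, and it is my own sketch rather than a method from the article, is to represent the citation network as a mapping from each article to the articles that cite it, and count how many of an article’s direct citers are themselves cited. A pure “fan” scores zero; a branching chain scores higher:

```python
def branching_score(cited_by, article):
    """Number of an article's direct citers that are themselves cited
    at least once -- zero for a pure 'fan-like' citation pattern."""
    return sum(1 for citer in cited_by.get(article, [])
               if cited_by.get(citer))

# A fan: A is cited by B, C, and D, none of which attract citations.
fan = {"A": ["B", "C", "D"]}
# A branching chain: A is cited by B and C, and B is itself cited.
tree = {"A": ["B", "C"], "B": ["E", "F"]}

fan_score = branching_score(fan, "A")    # 0: no branching
tree_score = branching_score(tree, "A")  # 1: citer B branches further
```

Extending the count recursively through the network would yield exactly the kind of structural measure that aggregate citation counts cannot provide.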

At the end of the article, Anderson and Feist make a number of suggestions for further research, all of which suggest gathering more data. These are welcome suggestions and should indeed be pursued, even though, as they acknowledge, truly transformative science must ultimately await the judgment of history. In the meantime, I hope that this intriguing contribution can be strengthened, expanded, and subjected to further exploratory analysis.

References

Anderson, Barrett R., and Gregory J. Feist. “Transformative Science: A New Index and the Impact of Non-Funding, Private Funding, and Public Funding.” Social Epistemology 31, no. 2 (2017): 130-151.

Cleveland, William S. The Elements of Graphing Data. Monterey, CA: Wadsworth, 1985.

Tufte, Edward R. The Visual Display of Quantitative Information (2nd ed.). Cheshire, CT: Graphics Press, 2001.

Tukey, John W. Exploratory Data Analysis. New York: Addison-Wesley, 1970.

Figure 1: Anscombe’s Quartet