
Author Information: Brian Martin, University of Wollongong,

Martin, Brian. “Bad Social Science.” Social Epistemology Review and Reply Collective 8, no. 3 (2019): 6-16.

The pdf of the article gives specific page references. Shortlink:



People untrained in social science frameworks and methods often make assumptions, observations or conclusions about the social world.[1] For example, they might say, “President Trump is a psychopath,” thereby making a judgement about Trump’s mental state. The point here is not whether this judgement is right or wrong, but whether it is based on a careful study of Trump’s thoughts and behaviour drawing on relevant expertise.

In most cases, the claim “President Trump is a psychopath” is bad psychology, in the sense that it is a conclusion reached without the application of skills in psychological diagnosis expected among professional psychologists and psychiatrists.[2] Even a non-psychologist can recognise cruder forms of bad psychology: they lack the application of standard tools in the field, such as comparison of criteria for psychopathy with Trump’s thought and behaviour.

“Bad social science” here refers to claims about society and social relationships that fall very far short of what social scientists consider good scholarship. This might be due to using false or misleading evidence, making faulty arguments, drawing unsupported conclusions or various other severe methodological, empirical or theoretical deficiencies.

In all sorts of public commentary and private conversations, examples of bad social science are legion. Instances are so common that it may seem pointless to take note of problems with ill-informed claims. However, there is value in a more systematic examination of different sorts of everyday bad social science. Such an examination can point to what is important in doing good social science and to weaknesses in assumptions, evidence and argumentation. It can also provide insights into how to defend and promote high-quality social analysis.

Here, I illustrate several facets of bad social science found in a specific public scientific controversy: the Australian vaccination debate. It is a public debate in which many partisans make claims about social dynamics, so there is ample material for analysis. In addition, because the debate is highly polarised, involves strong emotions and is extremely rancorous, it is to be expected that many deviations from calm, rational, polite discourse would be on display.

Another reason for selecting this topic is that I have been studying the debate for quite a number of years, and indeed have been drawn into the debate as a “captive of controversy.”[3] Several of the types of bad social science are found on both sides of the debate. Here, I focus mainly on pro-vaccination campaigners for reasons that will become clear.

In the following sections, I address several facets of bad social science: ad hominem attacks, not defining terms, use of limited and dubious evidence, misrepresentation, lack of reference to alternative viewpoints, lack of quality control, and drawing of unjustified conclusions. In each case, I provide examples from the Australian public vaccination debate, drawing on my experience. In a sense, selecting these topics represents an informal application of grounded theory: each of the shortcomings became evident to me through encountering numerous instances. After this, I note that there is a greater risk of deficient argumentation when defending orthodoxy.

With this background, I outline how studying bad social science can be of benefit in three ways: as a pointer to particular areas in which it is important to maintain high standards, as a toolkit for responding to attacks on social science, and as a reminder of the need to improve public understanding of social science approaches.

Ad Hominem

In the Australian vaccination debate, many partisans make adverse comments about opponents as a means of discrediting them. Social scientists recognise that ad hominem argumentation, namely attacking the person rather than dealing with what they say, is illegitimate for the purposes of making a case.

In the mid 1990s, Meryl Dorey founded the Australian Vaccination Network (AVN), which became the leading citizens’ group critical of government vaccination policy.[4] In 2009, a pro-vaccination citizens’ group called Stop the Australian Vaccination Network (SAVN) was set up with the stated aim of discrediting and shutting down the AVN.[5] SAVNers referred to Dorey with a wide range of epithets, for example “cunt.”[6]

What is interesting here is that some ad hominem attacks contain an implicit social analysis. One of them is “liar.” SAVNer Ken McLeod accused Dorey of being a liar, giving various examples.[7] However, some of these examples show only that Dorey persisted in making claims that SAVNers believed had been refuted.[8] This does not necessarily constitute lying, if lying is defined, as it often is by researchers in the area, as consciously intending to deceive.[9] To the extent that McLeod failed to relate his claims to research in the field, his application of the label “liar” constitutes bad social science.

Another term applied to vaccine critics is “babykiller.” In the Australian context, this word contains an implied social analysis, based on these premises: public questioning of vaccination policy causes some parents not to have their children vaccinated, leading to reduced vaccination rates and thence to more children dying of infectious diseases.

“Babykiller” also contains a moral judgement, namely that public critics of vaccination are culpable for the deaths of children from vaccination-preventable diseases. Few of those applying the term “babykiller” provide evidence to back up the implicit social analysis and judgement, so the label in these instances represents bad social science.

There are numerous other examples of ad hominem in the vaccination debate, on both sides. Some of them might be said to be primarily abuse, such as “cunt.” Others, though, contain an associated or implied social analysis, so to judge its quality it is necessary to assess whether the analysis conforms to conventions within social science.

Undefined Terms

In social science, it is normal to define key concepts, either by explicit definitions or descriptive accounts. The point is to provide clarity when the concept is used.

One of the terms used by vaccination supporters in the Australian debate is “anti-vaxxer.” Despite the ubiquity of this term in social and mass media, I have never seen it defined. This is significant because of the considerable ambiguity involved. “Anti-vaxxer” might refer to parents who refuse all vaccines for their children and themselves, parents who have their children receive some but not all recommended vaccines, parents who express reservations about vaccination, and/or campaigners who criticise vaccination policy.

The way “anti-vaxxer” is applied in practice tends to conflate these different meanings, with the implication that any criticism of vaccination puts you in the camp of those who refuse all vaccines. The label “anti-vaxxer” has been applied to me even though I do not have a strong view about vaccination.[10]

Because of the lack of a definition or clear meaning, the term “anti-vaxxer” is a form of ad hominem and also represents bad social science. Tellingly, few social scientists studying the vaccination issue use the term descriptively.

In their publications, social scientists may not define all the terms they use because their meanings are commonly accepted in the field. Nearly always, though, some researchers pay close attention to any widely used concept.[11] When such a concept remains ill-defined, this may be a sign of bad social science — especially when it is used as a pejorative label.

Limited and Dubious Evidence

Social scientists normally seek to provide strong evidence for their claims and restrict their claims to what the evidence can support. In public debates, this caution is often disregarded.

After SAVN was formed in 2009, one of its initial claims was that the AVN believed in a global conspiracy to implant mind-control chips via vaccinations. The key piece of evidence SAVNers provided to support this claim was that Meryl Dorey had given a link to the website of David Icke, who was known to have some weird beliefs, such as that the world is ruled by shape-shifting reptilian humanoids.

The weakness of this evidence should be apparent. Just because Icke has some weird beliefs does not mean every document on his website involves adherence to weird beliefs, and just because Dorey provided a link to a document does not prove she believes in everything in the document, much less subscribes to the beliefs of the owner of the website. Furthermore, Dorey denied believing in a mind-control global conspiracy.

Finally, even if Dorey had believed in this conspiracy, this does not mean other members of the AVN, or the AVN as an organisation, believed in the conspiracy. Although the evidence was exceedingly weak, several SAVNers, after I confronted them on the matter, initially refused to back down from their claims.[12]


Misrepresentation

When studying an issue, scholars assume that evidence, sources and other material should be represented fairly. For example, a quotation from an author should fairly present the author’s views, and not be used out of context to show something different from what the author intended.

Quite a few campaigners in the Australian vaccination debate use a different approach, which might be called “gotcha”. Quotes are used to expose writers as incompetent, misguided or deluded. Views of authors are misrepresented as a means of discrediting and dismissing them.

Judy Wilyman did her PhD under my supervision and was the subject of attack for years before she graduated. On 13 January 2016, just two days after her thesis was posted online, it was the subject of a front-page story in the daily newspaper The Australian. The journalist, despite having been directed to a convenient summary of the thesis, did not mention any of its key ideas, instead claiming that it involved a conspiracy theory. Quotes from the thesis, taken out of context, were paraded as evidence of inadequacy.

This journalistic misrepresentation of Judy’s thesis was remarkably influential. It led to a cascade of hostile commentary, with hundreds of online comments on the numerous stories in The Australian, an online petition signed by thousands of people, and calls by scientists for Judy’s PhD to be revoked. In all the furore, not a single critic of her thesis posted a fair-minded summary of its contents.[13]

Alternative Viewpoints?

In high-quality social science, it is common to defend a viewpoint, but it is also considered appropriate to examine other perspectives. Indeed, when presenting a critique, it is usual to begin with a summary of the work to be criticised.

In the Australian vaccination debate, partisans do not even attempt to present the opposing side’s viewpoint. I have never seen any campaigner provide a summary of the evidence and arguments supporting the opposition’s viewpoint. Vaccination critics present evidence and arguments that cast doubt on the government’s vaccination policy, and never try to summarise the evidence and arguments supporting it. Likewise, backers of the government’s policy never try to summarise the case against it.

There are also some intermediate viewpoints, divergent from the entrenched positions in the public debate. For example, there are some commentators who support some vaccines but not all the government-recommended ones, or who support single vaccines rather than multiple vaccines. These non-standard positions are hardly ever discussed in public by pro-vaccination campaigners.[14] More commonly, they are implicitly subsumed by the label “anti-vaxxer.”

To find summaries of arguments and evidence on both sides, it is necessary to turn to work by social scientists, and then only the few of them studying the debate without arguing for one side or the other.[15]

Quality Control

When making a claim, it makes sense to check it. Social scientists commonly do this by checking sources and/or by relying on peer review. For contemporary issues, it’s often possible to check with the person who made the claim.

In the Australian vaccination debate, there seems to be little attempt to check claims, especially when they are derogatory claims about opponents. I can speak from personal experience. Quite a number of SAVNers have made comments about my work, for example in blogs. On not a single occasion has any one of them checked with me in advance of publication.

After SAVN was formed and I started writing about free speech in the Australian vaccination debate, I sent drafts of some of my papers to SAVNers for comment. Rather than using this opportunity to send me corrections and comments, the response was to attack me, including by making complaints to my university.[16] Interestingly, the only SAVNer to have been helpful in commenting on drafts is another academic.

Another example concerns Andrew Wakefield, a gastroenterologist who was lead author of a paper in The Lancet suggesting that a possible link between the MMR triple vaccine (measles, mumps and rubella) and autism should be investigated. The paper led to a storm of media attention.

Australian pro-vaccination campaigns, and quite a few media reports, refer to Wakefield’s alleged wrongdoings, treating them as discrediting any criticism of vaccination. Incorrect statements about Wakefield are commonplace, for example that he lost his medical licence due to scientific fraud. It is a simple matter to check the facts, but apparently few do this. Even fewer take the trouble to look into the claims and counterclaims about Wakefield and qualify their statements accordingly.[17]

Drawing Conclusions

Social scientists are trained to be cautious in drawing conclusions, ensuring that they do not go beyond what can be justified from data and arguments. In addition, it is standard to include a discussion of limitations. This sort of caution is often absent in public debates.

SAVNers have claimed great success in their campaign against the AVN, giving evidence that, for example, their efforts have prevented AVN talks from being held and reduced media coverage of vaccine critics. However, although AVN operations have undoubtedly been hampered, this does not necessarily show that vaccination rates have increased or, more importantly, that public health has benefited.[18]

Defending Orthodoxy

Many social scientists undertake research in controversial areas. Some support the dominant views, some support an unorthodox position and quite a few try not to take a stand. There is no inherent problem in supporting the orthodox position, but doing so brings greater risks to the quality of research.

Many SAVNers assume that vaccination is a scientific issue and that only people with scientific credentials, for example degrees or publications in virology or epidemiology, have any credibility. This was apparent in an article by philosopher Patrick Stokes entitled “No, you’re not entitled to your opinion” that received high praise from SAVNers.[19] It was also apparent in the attack on Judy Wilyman, whose PhD was criticised because it was not in a scientific field, and because she analysed scientific claims without being a scientist. The claim that only scientists can validly criticise vaccination is easily countered.[20] The problem for SAVNers is that they are less likely to question assumptions precisely because they support the dominant viewpoint.

There is a fascinating aspect to campaigners supporting orthodoxy: they themselves frequently make claims about vaccination although they are not scientists with relevant qualifications. They do not apply their own strictures about necessary expertise to themselves. This can be explained as deriving from “honour by association,” a process parallel to guilt by association but less noticed because it is so common. In honour by association, a person gains or assumes greater credibility by being associated with a prestigious person, group or view.

Someone without special expertise who asserts a claim that supports orthodoxy implicitly takes on the mantle of the experts on the side of orthodoxy. It is only those who challenge orthodoxy who are expected to have relevant credentials. There is nothing inherently wrong with supporting the orthodox view, but it does mean there is less pressure to examine assumptions.

My initial example of bad social science was calling Donald Trump a psychopath. Suppose you said Trump has narcissistic personality disorder. This might not seem to be bad social science because it accords with the views of many psychologists. However, agreeing with orthodoxy, without accompanying deployment of expertise, does not constitute good social science any more than disagreeing with orthodoxy.


It is all too easy to identify examples of bad social science in popular commentary. They are commonplace in political campaigning and in everyday conversations.

Being attuned to common violations of good practice has three potential benefits: as a useful reminder to maintain high standards; as a toolkit for responding to attacks on social science; and as a guide to encouraging greater public awareness of social scientific thinking and methods.

Bad Social Science as a Reminder to Maintain High Standards

Most of the kinds of bad social science prevalent in the Australian vaccination debate seldom receive extended attention in the social science literature. For example, the widely used and cited textbook Social Research Methods does not even mention ad hominem, presumably because avoiding it is so basic that it need not be discussed.

The book does describe five common errors in everyday thinking that social scientists should avoid: overgeneralisation, selective observation, premature closure, the halo effect and false consensus.[21] Some of these overlap with the shortcomings I’ve observed in the Australian vaccination debate. For example, the halo effect, in which prestigious sources are given more credibility, has affinities with honour by association.

The textbook The Craft of Research likewise does not mention ad hominem. In a final brief section on the ethics of research, there are a couple of points that can be applied to the vaccination debate. For example, ethical researchers “do not caricature or distort opposing views.” Another recommendation is that “When you acknowledge your readers’ alternative views, including their strongest objections and reservations,” you move towards more reliable knowledge and honour readers’ dignity.[22] Compared with the careful exposition of research methods in this and other texts, the shortcomings in public debates are seemingly so basic and obvious as to not warrant extended discussion.

No doubt many social scientists could point to the work of others in the field — or even their own — as failing to meet the highest standards. Looking at examples of bad social science can provide a reminder of what to avoid. For example, being aware of ad hominem argumentation can help in avoiding subtle denigration of authors and instead focusing entirely on their evidence and arguments. Being reminded of confirmation bias can encourage exploration of a greater diversity of viewpoints.

Malcolm Wright and Scott Armstrong examined 50 articles that cited a method in survey-based research that Armstrong had developed years earlier. They discovered that only one of the 50 studies had reported the method correctly. They recommend that researchers send drafts of their work to authors of cited studies — especially those on which the research depends most heavily — to ensure accuracy.[23] This is not a common practice in any field of scholarship but is worth considering in the interests of improving quality.

Bad Social Science as a Toolkit for Responding to Attacks

Alan Sokal wrote an intentionally incoherent article that was published in 1996 in the cultural studies journal Social Text. Numerous commentators lauded Sokal for carrying out an audacious prank that revealed the truth about cultural studies, namely that it was bunk. These commentators had not carried out relevant studies themselves, nor were most of them familiar with the field of cultural studies, including its frameworks, objects of study, methods of analysis, conclusions and exemplary pieces of scholarship.

To the extent that these commentators were uninformed about cultural studies yet willing to praise Sokal for his hoax, they were involved in a sort of bad social science. Perhaps they supported Sokal’s hoax because it agreed with their preconceived ideas, though investigation would be needed to assess this hypothesis.

Most responses to the hoax took a defensive line, for example arguing that Sokal’s conclusions were not justified. Only a few argued that interpreting the hoax as showing the vacuity of cultural studies was itself poor social science.[24] Sokal himself said it was inappropriate to draw general conclusions about cultural studies from the hoax,[25] so ironically it would have been possible to respond to attackers by quoting Sokal.

When social scientists come under attack, it can be useful to examine the evidence and methods used or cited by the attackers, and to point out, as is often the case, that they fail to measure up to standards in the field.

Encouraging Greater Public Awareness of Social Science Thinking and Methods

It is easy to communicate with like-minded scholars and commiserate about the ignorance of those who misunderstand or wilfully misrepresent social science. More challenging is to pay close attention to the characteristic ways in which people make assumptions and reason about the social world and how these ways often fall far short of the standards expected in scholarly circles.

By identifying common forms of bad social science, it may be possible to better design interventions into public discourse to encourage more rigorous thinking about evidence and argument, especially to counter spurious and ill-founded claims by partisans in public debates.


Conclusion

Social scientists, in looking at research contributions, usually focus on what is high quality: the deepest insights, the tightest arguments, the most comprehensive data, the most sophisticated analysis and the most elegant writing. This makes sense: top quality contributions offer worthwhile models to learn from and emulate.

Nevertheless, there is also a role for learning from poor quality contributions. It is instructive to look at public debates involving social issues in which people make judgements about the same sorts of matters that are investigated by social scientists, everything from criminal justice to social mores. Contributions to public debates can starkly show flaws in reasoning and the use of evidence. These flaws provide a useful reminder of things to avoid.

Observation of the Australian vaccination debate reveals several types of bad social science, including ad hominem attacks, failing to define terms, relying on dubious sources, failing to provide context, and not checking claims. The risk of succumbing to these shortcomings seems to be magnified when the orthodox viewpoint is being supported, because it is assumed to be correct and there is less likelihood of being held accountable by opponents.

There is something additional that social scientists can learn by studying contributions to public debates that have serious empirical and theoretical shortcomings. There are likely to be characteristic failures that occur repeatedly. These offer supplementary guidance for what to avoid. They also provide insight into what sort of training, for aspiring social scientists, is useful for moving from unreflective arguments to careful research.

There is also a challenge that few scholars have tackled. Given the prevalence of bad social science in many public debates, is it possible to intervene in these debates in a way that fosters greater appreciation for what is involved in good quality scholarship, and encourages campaigners to aspire to make sounder contributions?

Contact details:


Blume, Stuart. Immunization: How Vaccines Became Controversial. London: Reaktion Books, 2017.

Booth, Wayne C., Gregory G. Colomb, Joseph M. Williams, Joseph Bizup, and William T. FitzGerald. The Craft of Research, fourth edition. Chicago: University of Chicago Press, 2016.

Collier, David, Fernando Daniel Hidalgo, and Andra Olivia Maciuceanu. “Essentially contested concepts: debates and applications.” Journal of Political Ideologies 11, no. 3 (October 2006): 211–246.

Ekman, Paul. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. New York: Norton, 1985.

Hilgartner, Stephen, “The Sokal affair in context,” Science, Technology, & Human Values, 22(4), Autumn 1997, pp. 506–522.

Lee, Bandy X. The Dangerous Case of Donald Trump: 27 Psychiatrists and Mental Health Experts Assess a President. New York: St. Martin’s Press, 2017.

Martin, Brian, and Florencia Peña Saint Martin. “El mobbing en la esfera pública: el fenómeno y sus características” [Public mobbing: a phenomenon and its features]. In Norma González González (ed.), Organización social del trabajo en la posmodernidad: salud mental, ambientes laborales y vida cotidiana. Guadalajara, Jalisco, México: Prometeo Editores, 2014, pp. 91–114.

Martin, Brian. “Debating vaccination: understanding the attack on the Australian Vaccination Network.” Living Wisdom, no. 8, 2011, pp. 14–40.

Martin, Brian. “On the suppression of vaccination dissent.” Science & Engineering Ethics. Vol. 21, No. 1, 2015, pp. 143–157.

Martin, Brian. “Evidence-based campaigning.” Archives of Public Health 76, no. 54 (2018),

Martin, Brian. Vaccination Panic in Australia. Sparsnäs, Sweden: Irene Publishing, 2018.

McLeod, Ken. “Meryl Dorey’s trouble with the truth, part 1: how Meryl Dorey lies, obfuscates, prevaricates, exaggerates, confabulates and confuses in promoting her anti-vaccination agenda.” 2010,

Neuman, W. Lawrence. Social Research Methods: Qualitative and Quantitative Approaches, seventh edition. Boston, MA: Pearson, 2011.

Scott, Pam, Evelleen Richards, and Brian Martin. “Captives of controversy: the myth of the neutral social researcher in contemporary scientific controversies.” Science, Technology, & Human Values 15, no. 4 (Fall 1990): 474–494.

Sokal, Alan D. “What the Social Text affair does and does not prove.” In Noretta Koertge (ed.), A House Built on Sand: Exposing Postmodernist Myths about Science. New York: Oxford University Press, 1998, pp. 9–22.

Stokes, Patrick. “No, you’re not entitled to your opinion,” The Conversation, 5 October 2012,

Wright, Malcolm, and J. Scott Armstrong. “The ombudsman: verification of citations: fawlty towers of knowledge?” Interfaces 38, no. 2 (March–April 2008): 125–132.

[1] Thanks to Meryl Dorey, Stephen Hilgartner, Larry Neuman, Alan Sokal and Malcolm Wright for valuable feedback on drafts.

[2] For informed commentary on these issues, see Bandy X. Lee, The Dangerous Case of Donald Trump: 27 Psychiatrists and Mental Health Experts Assess a President (New York: St. Martin’s Press, 2017).

[3] Pam Scott, Evelleen Richards and Brian Martin, “Captives of controversy: the myth of the neutral social researcher in contemporary scientific controversies,” Science, Technology, & Human Values, Vol. 15, No. 4, Fall 1990, pp. 474–494.

[4] The AVN, forced to change its name in 2014, became the Australian Vaccination-skeptics Network. In 2018 it voluntarily changed its name to the Australian Vaccination-risks Network.

[5] In 2014, SAVN changed its name to Stop the Australian (Anti-)Vaccination Network.

[6] Brian Martin and Florencia Peña Saint Martin. El mobbing en la esfera pública: el fenómeno y sus características [Public mobbing: a phenomenon and its features]. In Norma González González (Coordinadora), Organización social del trabajo en la posmodernidad: salud mental, ambientes laborales y vida cotidiana (Guadalajara, Jalisco, México: Prometeo Editores, 2014), pp. 91-114.

[7] Ken McLeod, “Meryl Dorey’s trouble with the truth, part 1: how Meryl Dorey lies, obfuscates, prevaricates, exaggerates, confabulates and confuses in promoting her anti-vaccination agenda,” 2010,

[8] Brian Martin, “Debating vaccination: understanding the attack on the Australian Vaccination Network,” Living Wisdom, no. 8, 2011, pp. 14–40, at pp. 28–30.

[9] E.g., Paul Ekman, Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage (New York: Norton, 1985).

[10] On Wikipedia I am categorised as an “anti-vaccination activist,” a term that is not defined on the entry listing those in the category. See Brian Martin, “Persistent bias on Wikipedia: methods and responses,” Social Science Computer Review, Vol. 36, No. 3, June 2018, pp. 379–388.

[11] See for example David Collier, Fernando Daniel Hidalgo and Andra Olivia Maciuceanu, “Essentially contested concepts: debates and applications,” Journal of Political Ideologies, 11(3), October 2006, pp. 211–246.

[12] Brian Martin. “Caught in the vaccination wars (part 3)”, 23 October 2012,

[13] The only possible exception to this statement is Michael Brull, “Anti-vaccination cranks versus academic freedom,” New Matilda, 7 February 2016, who reproduced my own summary of the key points in the thesis relevant to Australian government vaccination policy. For my responses to the attack, see, for example, “Defending university integrity,” International Journal for Educational Integrity, Vol. 13, No. 1, 2017, pp. 1–14.

[14] Brian Martin, Vaccination Panic in Australia (Sparsnäs, Sweden: Irene Publishing, 2018), pp. 15–24.

[15] E.g., Stuart Blume, Immunization: How Vaccines Became Controversial (London: Reaktion Books, 2017).

[16] Brian Martin. “Caught in the vaccination wars”, 28 April 2011,

[17] For my own commentary on Wakefield, see “On the suppression of vaccination dissent,” Science & Engineering Ethics, Vol. 21, No. 1, 2015, pp. 143–157.

[18] Brian Martin. Evidence-based campaigning. Archives of Public Health, Vol. 76, article 54, 2018,

[19] Patrick Stokes, “No, you’re not entitled to your opinion,” The Conversation, 5 October 2012,

[20] Martin, Vaccination Panic in Australia, 292–304.

[21] W. Lawrence Neuman, Social Research Methods: Qualitative and Quantitative Approaches, seventh edition (Boston, MA: Pearson, 2011), 3–5.

[22] Wayne C. Booth, Gregory G. Colomb, Joseph M. Williams, Joseph Bizup and William T. FitzGerald, The Craft of Research, fourth edition (Chicago: University of Chicago Press, 2016), 272–273.

[23] Malcolm Wright and J. Scott Armstrong, “The ombudsman: verification of citations: fawlty towers of knowledge?” Interfaces, 38 (2), March-April 2008, 125–132.

[24] For a detailed articulation of this approach, see Stephen Hilgartner, “The Sokal affair in context,” Science, Technology, & Human Values, 22(4), Autumn 1997, pp. 506–522. Hilgartner gives numerous citations to expansive interpretations of the significance of the hoax.

[25] See for example Alan D. Sokal, “What the Social Text affair does and does not prove,” in Noretta Koertge (ed.), A House Built on Sand: Exposing Postmodernist Myths about Science (New York: Oxford University Press, 1998), pp. 9–22, at p. 11: “From the mere fact of publication of my parody, I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or the cultural studies of science — much less the sociology of science — is nonsense.”

Author Information: Nuria Anaya-Reig, Universidad Rey Juan Carlos,

Anaya-Reig, Nuria. “Teorías Implícitas del Investigador: Un Campo por Explorar Desde la Psicología de la Ciencia.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 36-41.




This article is a Spanish-language version of Nuria Anaya-Reig’s earlier contribution, written by the author herself:

Anaya-Reig, Nuria. “Implicit Theories Influencing Researchers: A Field for the Psychology of Science to Explore.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 25-30.

What conceptions do researchers hold about the characteristics a student must have to be considered a potentially good scientist? To what extent do these beliefs influence the selection of candidates? These are the fundamental questions underlying the work of Caitlin Donahue Wylie (2018). In a qualitative, ethnographic study, two engineering professors were interviewed as principal investigators (PIs), along with students from their respective doctoral groups, most of them graduate students, as novice researchers. In total, the sample comprised 27 people.

The results suggest that, among this type of researcher, it is common to believe that interest, assertiveness and enthusiasm for what one studies are indicators of a future good researcher. The interviewees also consider enthusiasm to be related to the desire to learn and to work ethic. Finally, the study suggests a possible unintentional exclusion in the selection of researchers, caused by the PIs' involuntary application of biases favouring characteristics typical of majority groups (such as ethnicity, religion or sex), and proposes some ideas to help minimise them.

Implicit Theories in the Basements of Research

In essence, Wylie's (2018) work shows that the selection of new researchers by experienced scientists rests on implicit theories. At first glance this may seem a modest contribution, but the core of the work is substantial and of real interest to the psychology of science, for at least three reasons.

To begin with, because studying such questions is another way of approaching an understanding of the scientific mind from a different angle, since the psychology of the scientist is one of the central areas of study of this subdiscipline (Feist 2006). Second, because, although the research question deals with an issue well known to social psychology and the study's results are therefore fairly predictable, they are nonetheless new and thus valuable data that enrich theoretical knowledge about implicit ideas: distinguishing theories from evidence is basic in science and characteristic of scientific reasoning (Feist 2006).

Finally, because the psychology of science, in its applied dimension, cannot ignore the fact that scientists' implicit beliefs, if mistaken, can have correspondingly negative effects on the current and future population of researchers (Wylie 2018).

Santiago Ramón y Cajal, in his role as a psychologist of science (Anaya-Reig and Romo 2017), was already reflecting on this issue more than a century ago. In chapter IX, "The investigator as teacher" ("El investigador como maestro"), of his Reglas y consejos sobre investigación científica (1920), he noted:

What signs reveal creative talent and an unshakeable vocation for scientific inquiry?

A grave, supremely important problem, on which great thinkers and eminent educators have reflected without arriving at definitive rules. The difficulty grows when we consider that it is not enough to find minds that are perceptive and apt for laboratory inquiry; they must also be won over definitively to the cult of original truth.

Are the future savants, the object of our educational labours, perhaps to be found among the most serious and diligent pupils, the hoarders of prizes and winners of competitive examinations?

Sometimes, yes, but not always. If the rule were infallible, the teacher's task would be easy: it would suffice to recruit the holders of extraordinary degree prizes and the top-ranked candidates in competitions for professorial chairs. But reality often delights in mocking predictions and dashing hopes. (Ramón y Cajal 1920, 221-222)

Back to Implicit Theories

Let us briefly recall that naive or implicit theories are stable, organised beliefs that people have built up intuitively, without the rigour of the scientific method. Most of the time their content is accessed only with great difficulty, since people do not know they hold them, hence their name. This not only hinders any change of thinking but also leads people to seek data that confirm what they already think, that is, to commit confirmation biases (Romo 1997).

People identify and organise the regularities of their environment through implicit or incidental learning, grounded in associative learning, because we need to adapt to the different situations we face. We build naive theories that help us understand, anticipate and manage as best we can the varied circumstances around us. We live surrounded by such an overwhelming amount of information that building implicit theories, by learning which elements tend to occur together, is a very efficient way of making the world far more predictable and controllable, which naturally includes human behaviour.

In fact, the content of implicit theories is fundamentally social in nature (Wegner and Vallacher 1977), as shown by the fact that a good portion of them can be grouped under the so-called implicit personality theories (IPTs), a category to which, incidentally, the beliefs of the researchers at issue here can readily be assigned.

IPTs are so called because their content deals basically with personal qualities or personality traits, and they are by definition idiosyncratic, although there tends to be some overlap among members of the same social group.

Understood broadly, they can be defined as the beliefs each person holds about human beings in general; for example, thinking that man is good by nature, or quite the opposite. In its specific sense, IPTs refer to the beliefs we hold about the personal characteristics that tend to occur together in particular people. For example, we often presume that a writer must be cultured, sensitive and bohemian (Moya 1996).

It is also worth noting that implicit theories, in contrast to scientific ones, are incoherent and specific, rest on simple linear causality, consist of ideas that are usually poorly interconnected, and seek only verification and utility. They need not, however, be wrong or useless (Pozo, Rey, Sanz and Limón 1992). Although implicit theories have limited explanatory power, they do have descriptive and predictive power (Pozo Municio 1996).

Some Reflections on the Topic

Scientists guided by intuitions: how is that possible? But why not? Why should researchers behave any differently from other people in selection processes? They behave as we all routinely do in everyday life with respect to the most varied matters. Proceeding otherwise would be, for anyone, not only cognitively unprofitable but costly and exhausting.

After all, researchers, however scientific they may be, are still people and, as such, intuitively seek answers to problems which, although they decisively condition the results of their work, are not themselves the object of that work.

Nor, on the other hand, should it be surprising that different researchers, whether novice or highly experienced, share identical beliefs, especially if they belong to the same field, for, as noted, although implicit theories manifest themselves in personal opinions or expectations, part of their tacit content is shared by many people (Runco 2011).

All of this leads, in turn, to some further observations on Wylie's (2018) work. First, since we are dealing with implicit theories, rather than suggesting that researchers may be guiding their selection by a perceptual bias, we should assert it. As noted, implicit theories operate with confirmation biases which, in fact, progressively strengthen their content.

Another question is what that bias relates to. Wylie (2018) suggests it is linked to a possible preference for characteristics typical of the majority groups to which the PIs belong, drawing on studies showing that science and engineering are dominated by white, middle-class men, which may contribute to a poor reception of students who do not fit these standards, or even to those students dropping out because they do not feel comfortable.

That is certainly one possible interpretation; but another is that the confirmation bias these engineers display may stem from their having observed those traits in the people who have become good at their discipline, rather than from a preference for interacting with people who resemble them physically or culturally.

It is worth pointing out again that implicit theories need not be wrong or useless (Pozo, Rey, Sanz and Limón 1992). That is the case with some of the beliefs this group of researchers displays: are scientists, especially the best ones, not passionate about their work? Do they not devote many hours and much effort to carrying it forward? Are they not assertive? Research has firmly established (Romo 2008) that all creative scientists, without exception, show high levels of intrinsic motivation for the work they do.

Likewise, since Hayes (1981) we have known that it takes an average of ten years to master a discipline and achieve something extraordinary. It has also been observed that creative scientists show great self-confidence and are especially arrogant and hostile. Indeed, scientists, compared with non-scientists, are known to be not only more assertive but more dominant, more self-assured, more autonomous and even more hostile (Feist 2006). Several studies, for example Feist and Gorman (1998), have concluded that there are personality-trait differences between scientists and non-scientists.

But, on the other hand, this does not mean that people's implicit conceptions are necessarily right either. In fact, they are often wrong. A good example is the belief guiding the principal investigators studied by Wylie when selecting graduates on the basis of their academic grades. Although they say grades are an insufficient indicator, they then qualify their statement: "They believe students' demonstrated willingness to learn is more important, though they also want students who are 'bright' and achieve some 'academic success'" (2018, 4).

Yet the empirical evidence shows that neither high grades nor high aptitude-test scores necessarily predict success in scientific careers (Feist 2006), that creative genius is likewise not necessarily associated with outstanding school performance and, what is more, that numerous geniuses were mediocre students (Simonton 2006).


The psychology of science is accumulating data to guide scientists interested in selecting potentially good researchers: see, for example, Feist (2006) or Anaya-Reig (2018). But, certainly, at a practical level, this knowledge will be of little use if those who could profit most from it remain anchored to beliefs that may be wrong.

It is therefore of interest to continue exploring researchers' implicit theories across disciplines. Making them explicit is an essential first step, both for the psychology of science, if it wants this accumulated knowledge to have real repercussions in laboratories and other research centres, and for those scientists who wish to acquire rigorous knowledge about the qualities of a good researcher.

All this while bearing firmly in mind that the implicit nature of personal beliefs makes the process difficult, because, as noted, the interviewed subject often does not know he or she holds them (Pozo, Rey, Sanz and Limón 1992), and because changing them also requires a change of a conceptual or representational nature (Pozo, Scheuer, Mateos Sanz and Pérez Echeverría 2006).

Finally, it may not be reasonable to promote certain skills among university students in general without considering whether they possess certain attributes. Obvious as it may be, we must remember that educational resources, like resources of any kind, are necessarily limited. If, moreover, we know that only 2% of people devote themselves to science (Feist 2006), it may be more worthwhile to invest the effort in improving our ability to identify accurately those with real potential. Anything else would be like trying to train someone with no vocal talent whatsoever to sing opera.



Anaya-Reig, N. 2018. “Cajal: Key Psychological Factors in the Self-Construction of a Genius.” Social Epistemology. doi: 10.1080/02691728.2018.1522555.

Anaya-Reig, N., and M. Romo. 2017. “Cajal, Psychologist of Science.” The Spanish Journal of Psychology 20: e69. doi: 10.1017/sjp.2017.71.

Feist, G. J. 2006. The Psychology of Science and the Origins of the Scientific Mind. New Haven, CT: Yale University Press.

Feist, G. J., and M. E. Gorman. 1998. “The Psychology of Science: Review and Integration of a Nascent Discipline.” Review of General Psychology 2 (1): 3–47. doi: 10.1037/1089-2680.2.1.3.

Hayes, J. R. 1981. The Complete Problem Solver. Philadelphia, PA: Franklin Institute Press.

Moya, M. 1996. “Percepción social y personas.” In Psicología social, 93-119. Madrid, Spain: McGraw-Hill.

Pozo Municio, J. I. 1996. Aprendices y maestros. La nueva cultura del aprendizaje. Madrid, Spain: Alianza.

Pozo, J. I., M. P. Rey, A. Sanz, and M. Limón. 1992. “Las ideas de los alumnos sobre la ciencia como teorías implícitas.” Infancia y Aprendizaje 57: 3-22.

Pozo, J. I., N. Scheuer, M. M. Mateos Sanz, and M. P. Pérez Echeverría. 2006. “Las teorías implícitas sobre el aprendizaje y la enseñanza.” In Nuevas formas de pensar la enseñanza y el aprendizaje: las concepciones de profesores y alumnos, 95-134. Barcelona, Spain: Graó.

Ramón y Cajal, S. 1920. Reglas y consejos sobre investigación científica. (Los tónicos de la voluntad). 5th ed. Madrid, Spain: Nicolás Moya.

Ramón y Cajal, S. 1999. Advice for a Young Investigator, translated by N. Swanson and L. W. Swanson. Cambridge, MA: The MIT Press.

Romo, M. 1997. Psicología de la creatividad. Barcelona, Spain: Paidós.

Romo, M. 2008. Epistemología y Psicología. Madrid, Spain: Pirámide.

Runco, M. 2011. “Implicit theories.” In Encyclopaedia of Creativity, edited by M. Runco and S. R. Pritzker, 644-646. 2nd ed. Elsevier.

Simonton, D. K. 2006. "Creative Genius, Knowledge, and Reason: The Lives and Works of Eminent Creators." In Creativity and Reason in Cognitive Development, edited by J. C. Kaufman and J. Baer, 43-59. New York, NY: Cambridge University Press.

Wegner, D. M., and R. R. Vallacher. 1977. Implicit Psychology: An Introduction to Social Cognition. New York, NY: Oxford University Press.

Wylie, C. D. 2018. "'I Just Love Research': Beliefs About What Makes Researchers Successful." Social Epistemology 32 (4): 262-271. doi: 10.1080/02691728.2018.1458349.

Author Information: Moti Mizrahi, Florida Institute of Technology,

Mizrahi, Moti. “Why Scientific Knowledge Is Still the Best.” Social Epistemology Review and Reply Collective 7, no. 9 (2018): 18-32.





It is common knowledge among scholars and researchers that the norms of academic research dictate that one must enter an academic conversation by properly acknowledging, citing, and engaging with the work done by other scholars and researchers in the field, thereby showing that a larger conversation is taking place.[1] See, for example, Graff and Birkenstein (2018, 1-18) on “entering the conversation.” Properly “entering the conversation” is especially important when one aims to criticize the work done by other scholars and researchers in the field.

In my previous reply to Bernard Wills’ attack on Weak Scientism (Wills 2018a), I point out that Wills fails in his job as a scholar who aims to criticize work done by other scholars and researchers in the field (Mizrahi 2018b, 41), since Wills does not cite or engage with the paper in which I defend Weak Scientism originally (Mizrahi 2017a), the very thesis he seeks to attack. Moreover, he does not cite or engage with the papers in my exchange with Christopher Brown (Mizrahi 2017b; 2018a), not to mention other works in the literature on scientism.

In his latest attack, even though he claims to be a practitioner of “close reading” (Wills 2018b, 34), it appears that Wills still has not bothered to read the paper in which I defend the thesis he seeks to attack (Mizrahi 2017a), or any of the papers in my exchange with Brown (Mizrahi 2017b; 2018a), as evidenced by the fact that he does not cite them at all. To me, these are not only signs of lazy scholarship but also an indication that Wills has no interest in engaging with my arguments for Weak Scientism in good faith. For these reasons, this will be my second and final response to Wills. I have neither the time nor the patience to debate lazy scholars who argue in bad faith.

On the Quantitative Superiority of Scientific Knowledge

In response to my empirical data on the superiority of scientific knowledge over non-scientific knowledge in terms of research output and research impact (Mizrahi 2017a, 357-359; Mizrahi 2018a, 20-22; Mizrahi 2018b, 42-44), Wills (2018b, 34) claims that he has “no firm opinion at all as to whether the totality of the sciences have produced more ‘stuff’ than the totality of the humanities between 1997 and 2017 and the reason is that I simply don’t care.”

I would like to make a few points in reply. First, the sciences produce more published research, not just “stuff.” Wills’ use of the non-count noun ‘stuff’ is misleading because it suggests that research output cannot be counted or measured. However, research output (as well as research impact) can be counted and measured, which is why we can use this measure to determine that scientific research (or knowledge) is better than non-scientific research (or knowledge).

Second, my defense of Weak Scientism consists of a quantitative argument and a qualitative argument, thereby showing that scientific knowledge is superior to non-scientific knowledge both quantitatively and qualitatively, which are the two ways in which one thing can be said to be better than another (Mizrahi 2017a, 354). If Wills really does not care about the quantitative argument for Weak Scientism, as he claims, then why is he attacking my defense of Weak Scientism at all?

After all, showing that “scientific knowledge is [quantitatively] better – in terms of research output (i.e. more publications) and research impact (i.e. more citations) – than non-scientific knowledge” is an integral part of my defense of Weak Scientism (Mizrahi 2017a, 358). To know that, however, Wills would have to read the paper in which I make these arguments for Weak Scientism (Mizrahi 2017a). In his (2018a) and (2018b), I see no evidence that Wills has read, let alone read closely, that paper.

Third, for someone who says that he “simply [doesn’t] care” about quantity (Wills 2018b, 34), Wills sure talks about it a lot. For example, Wills claims that a “German professor once told [him] that in the first half of the 20th Century there were 40,000 monographs on Franz Kafka alone!” (Wills 2018a, 18) and that “Shakespeare scholars have all of us beat” (Wills 2018a, 18). Wills’ unsupported claims about quantity turn out to be false, of course, as I show in my previous reply (Mizrahi 2018b, 42-44). Readers will notice that Wills does not even try to defend those claims in his (2018b).

Fourth, whether Wills cares about quantity or has opinions on the matter is completely beside the point. With all due respect, Wills’ opinions about research output in academic disciplines are worthless, especially when we have data on research output in scientific and non-scientific disciplines. The data show that scientific disciplines produce more research than non-scientific disciplines and that scientific research has a greater impact than non-scientific research (Mizrahi 2017a, 357-359; Mizrahi 2018a, 20-22; Mizrahi 2018b, 42-44).

Wills (2018b, 35) thinks that the following is a problem for Weak Scientism: “what if it were true that Shakespeare scholars produced more papers than physicists?” (original emphasis) Lacking in good arguments, as in his previous attack on Weak Scientism, Wills resorts to making baseless accusations and insults, calling me “an odd man” for thinking that literature would be better than physics in his hypothetical scenario (Wills 2018b, 35). But this is not a problem for Weak Scientism at all and there is nothing “odd” about it.

What Wills fails to understand is that Weak Scientism is not supposed to be a necessary truth. That is, Weak Scientism does not state that scientific knowledge must be quantitatively and qualitatively better than non-scientific knowledge. Rather, Weak Scientism is a contingent fact about the state of academic research. As a matter of fact, scientific disciplines produce better research than non-scientific disciplines do.

Moreover, the data we have (Mizrahi 2017a, 357-359; Mizrahi 2018a, 20-22; Mizrahi 2018b, 42-44) give us no reason to think that these trends in research output and research impact are likely to change any time soon. Of course, if Wills had read my original defense of Weak Scientism (Mizrahi 2017a), and my replies to Brown, he would have known that I have discussed all of this already (Mizrahi 2017b, 9-10; 2018a, 9-13).

Likewise, contrary to what Wills (2018b, 36, footnote 2) seems to think, there is nothing odd about arguing for a thesis according to which academic research produced by scientific disciplines is superior to academic research produced by non-scientific disciplines, “while leaving open the question whether non-scientific knowledge outside the academy may be superior to science” (original emphasis). If Wills were familiar with the literature on scientism, he would have been aware of the common distinction between “internal scientism” and “external scientism.”

See, for example, Stenmark’s (1997, 16-18) distinction between “academic-internal scientism” and “academic-external scientism” as well as Peels (2018, 28-56) on the difference between “academic scientism” and “universal scientism.” Again, a serious scholar would have made sure that he or she is thoroughly familiar with the relevant literature before attacking a research paper that aims to make a contribution to that literature (Graff and Birkenstein 2018, 1-18).

Wills also seems to be unaware of the fact that my quantitative argument for Weak Scientism consists of two parts: (a) showing that scientific research output is greater than non-scientific research output, and (b) showing that the research impact of scientific research is greater than that of non-scientific research (Mizrahi 2017a, 356-358). The latter is measured not just by publications but also by citations. Wills does not address this point about research impact in his attacks on Weak Scientism. Since he seems to be proud of his publication record, for he tells me I should search for his published papers on Google (Wills 2018b, 35), let me illustrate this point about research impact by comparing Wills' publication record to that of a colleague from a science department at his university.

According to Google Scholar, since completing his doctorate in Religious Studies at McMaster University in 2003, Wills has published ten research articles (excluding book reviews). One of his research articles was cited three times, and three of his research articles were cited one time each. That is six citations in total.

On the other hand, his colleague from the Physics program at Memorial University, Dr. Svetlana Barkanova, has published 23 research articles between 2003 and 2018, and those articles were cited 53 times. Clearly, over the same period, a physicist at Wills' university has produced more research than he has (130% more research), and her research has had a greater impact than his (783% more impact). As I have argued in my (2017a), this is generally the case when research produced by scientific disciplines is compared to research produced by non-scientific disciplines (Table 1).
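The percentage figures above can be checked with a line of arithmetic: "X% more" here means the relative increase, (new − old) / old, expressed as a percentage. A minimal sketch (the helper name `percent_more` is illustrative; the counts are those given in the paragraph above):

```python
def percent_more(new, old):
    # Relative increase of `new` over `old`, as a percentage.
    return (new - old) / old * 100

# 23 vs. 10 research articles
print(round(percent_more(23, 10)))  # → 130
# 53 vs. 6 citations
print(round(percent_more(53, 6)))   # → 783
```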

Table 1. H Index by subject area, 1999-2018 (Source: Scimago Journal & Country Rank)

Subject area    H Index
Physics         927
Psychology      682
Philosophy      161
Literature       67
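The H Index reported in Table 1 is the standard bibliometric measure combining output and impact: the largest number h such that at least h publications have each received at least h citations. A minimal sketch of the computation (the citation list in the example is made up for illustration):

```python
def h_index(citations):
    # h-index: the largest h such that at least h papers
    # have at least h citations each.
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations: four papers
# have at least four citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```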

Reflecting on One’s Own Knowledge

In his first attack on Weak Scientism, Wills (2018a, 23) claims that one “can produce a potential infinity of knowledge simply by reflecting recursively on the fact of [one’s] own existence.” In response, I pointed out that Wills (2018a, 23) himself admits that this reflexive procedure applies to “ANY fact” (original capitalization), which means that it makes no difference in terms of the quantity of knowledge produced in scientific versus non-scientific disciplines.

As I have come to expect from him, Wills (2018b, 35) resorts to name-calling again rather than giving good arguments, calling my response "sophism." Yet he misses the basic logical point, even though he admits again that extending one's knowledge by reflexive self-reflection "can be done with any proposition at all" (Wills 2018b, 35). Of course, if "it can be done with any proposition at all" (Wills 2018b, 35; emphasis added), then it can be done with scientific propositions as well, for the set of all propositions includes scientific propositions.

To illustrate, suppose that a scientist knows that p and a non-scientist knows that q. Quantitatively, the amount of scientific and non-scientific knowledge is equal in this instance (1 = 1). Now the scientist reflects on her own knowledge that p and comes to know that she knows that p, i.e., she knows that Kp. Similarly, the non-scientist reflects on her knowledge that q and comes to know that she knows that q, i.e., she knows that Kq. Notice that, quantitatively, nothing has changed, i.e., the amount of scientific versus non-scientific knowledge is still equal: two items of scientific knowledge (p and Kp) and two items of non-scientific knowledge (q and Kq).
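The counting argument above can be made concrete. A minimal sketch, with illustrative proposition labels, assuming each act of reflection on a known item x yields exactly one new item K(x) on each side:

```python
def reflect(known):
    # Reflecting on each known proposition x adds the item
    # K(x) ("I know that x") to the body of knowledge.
    return known | {f"K({x})" for x in known}

scientific = {"p"}       # one item of scientific knowledge
non_scientific = {"q"}   # one item of non-scientific knowledge

scientific = reflect(scientific)          # {"p", "K(p)"}
non_scientific = reflect(non_scientific)  # {"q", "K(q)"}

# Reflection applies to any proposition, so the quantities
# remain equal no matter how often it is iterated.
print(len(scientific), len(non_scientific))  # → 2 2
```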

Wills might be tempted to retort that p may be an item of scientific knowledge but Kp is not because it is not knowledge that is produced by scientific procedures. However, if Wills were to retort in this way, then it would be another indication of sloppy scholarship on his part. In my original paper (Mizrahi 2017a, 356), and in my replies to Brown (Mizrahi 2017b, 12-14; Mizrahi 2018a, 14-15), I discuss at great length my characterization of disciplinary knowledge as knowledge produced by practitioners in the field. I will not repeat those arguments here.

Baseless Accusations of Racism and Colonialism

After raising questions about whether I am merely rationalizing my “privilege” (Wills 2018a, 19), Wills now says that his baseless accusations of racism and colonialism are “not personal” (Wills 2018b, 35). His concern, Wills (2018b, 35) claims, is “systemic racism” (original emphasis). As a white man, Wills has the chutzpah to explain (or white-mansplain, if you will) to me, an immigrant from the Middle East, racism and colonialism.

My people were the victims of ethnic cleansing and genocide, lived under British colonial rule, and are still a persecuted minority group. Since some of my ancestors died fighting the British mandate, I do not appreciate the use of the term 'colonialism' to describe academic disputes that are trifling in comparison to the atrocities brought about by racism and colonialism.

Perhaps Wills should have used (or meant to use) the term ‘imperialism’, since it is sometimes used to describe the expansion of a scientific theory into new domains (Dupré 1994). This is another sign of Wills’ lack of familiarity with the literature on scientism. Be that as it may, Wills continues to assert without argument that my “defense of weak-scientism is ideologically loaded,” that it implies “the exclusion of various others such as women or indigenous peoples from the socially sanctioned circle of knowers,” and that I make “hegemonic claims for science from which [I] stand to benefit” (Wills 2018b, 36).

In response, I must admit that I have no idea what sort of “ideologies” Weak Scientism is supposed to be loaded with, since Wills does not say what those are. Wills (2018b, 36) asserts without argument that “the position [I] take on scientism has social, political and monetary implications,” but he does not specify those implications. Nor does he show how social and political implications (whatever those are) are supposed to follow from the epistemic thesis of Weak Scientism (Mizrahi 2017a, 353). I am also not sure why Wills thinks that Weak Scientism implies “the exclusion of various others such as women or indigenous peoples from the socially sanctioned circle of knowers” (Wills 2018b, 36), since he provides no arguments for these assertions.

Of course, Weak Scientism entails that there is non-scientific knowledge (Mizrahi 2018b, 41). If there is non-scientific knowledge, then there are non-scientific knowers. In that case, on Weak Scientism, non-scientists are not excluded from “the circle of knowers.” In other words, on Weak Scientism, the circle of knowers includes non-scientists, which can be women and people of color, of course (recall Dr. Svetlana Barkanova). Contrary to what Wills seems to think, then, Weak Scientism cannot possibly entail “the exclusion of various others such as women or indigenous peoples from the socially sanctioned circle of knowers” (Wills 2018b, 36).

In fact, if it is “the exclusion of various others” that Wills (2018b, 36) is genuinely concerned about, then he is undoubtedly aware of the fact that it is precisely white men like him who are guilty of systematically excluding “various others,” such as women (Paxton et al. 2012) and people of color (Botts et al. 2014), from the academic discipline of philosophy (American Philosophical Association 2014). As anyone who is familiar with the academic discipline of philosophy knows, “philosophy faces a serious diversity problem” (Van Norden 2017b, 5). As Amy Ferrer (2012), Executive Director of the American Philosophical Association (APA), put it on Brian Leiter’s blog, Leiter Reports:

philosophy is one of the least diverse humanities fields, and indeed one of the least diverse fields in all of academia, in terms of gender, race, and ethnicity. Philosophy has a reputation for not only a lack of diversity but also an often hostile climate for women and minorities (emphasis added).

In light of the lack of diversity in academic philosophy, some have gone as far as arguing that contemporary philosophy is racist and xenophobic; otherwise, argues Bryan Van Norden (2017a), it is difficult to explain “the fact that the rich philosophical traditions of China, India, Africa, and the Indigenous peoples of the Americas are completely ignored by almost all philosophy departments in both Europe and the English-speaking world.”

In fact, Wills’ attacks on Weak Scientism illustrate how white men like him attempt to keep philosophy white and “foreigner-free” (Cherry and Schwitzgebel 2016). They do so by citing and discussing the so-called “greats,” which are almost exclusively Western men. Citations are rather scarce in Wills’ replies, but when he cites, he only cites “the greats,” like Aristotle and Augustine (see Schwitzgebel et al. 2018 on the “Insularity of Anglophone Philosophy”).

As for his claim that I “stand to benefit” (Wills 2018b, 36) from my defense of Weak Scientism, I have no idea what Wills is talking about. I had no idea that History and Philosophy of Science (HPS) and Science and Technology Studies (STS) “can often assert hegemony over other discourses” (Wills 2018b, 36). I bet this will come as a surprise to other HPS and STS scholars and researchers. They will probably be shocked to learn that they have that kind of power over other academic disciplines.

More importantly, even if it were true that I “stand to benefit” (Wills 2018b, 36) from my defense of Weak Scientism, nothing about the merit of my defense of Weak Scientism would follow from that. That is, to argue that Weak Scientism must be false because I stand to benefit from it being true is to argue fallaciously. In particular, it is an informal fallacy of the circumstantial ad hominem type known as “poisoning the well,” which “alleges that the person has a hidden agenda or something to gain and is therefore not an honest or objective arguer” (Walton and Krabbe 1995, 111).

It is as fallacious as arguing that climate change is not real because climate scientists stand to benefit from climate research or that MMR vaccines are not safe (e.g., cause autism) because medical researchers stand to benefit from such vaccines (Offit 2008, 213-214). These are the sort of fallacious arguments that are typically made by those who are ignorant of the relevant science or are arguing in bad faith.

In fact, the same sort of fallacious reasoning can be used to attack any scholar or researcher in any field of inquiry whatsoever, including Wills. For instance, just as my standing to benefit from defending Weak Scientism is supposed to be a reason to believe that Weak Scientism is false, or Paul Offit’s standing to gain from MMR vaccines is supposed to be a reason to believe that such vaccines are not safe, Wills’ standing to benefit from his attacks on Weak Scientism (e.g., by protecting his position as a Humanities professor) would be a reason to believe that his attacks on Weak Scientism are flawed.

Indeed, the administrators at Wills’ university would have a reason to dismiss his argument for a pay raise on the grounds that he stands to benefit from it (Van Vleet 2011, 16). Of course, such reasoning is fallacious no matter who the target is. Either MMR vaccines are safe and effective or they are not, regardless of whether Offit stands to benefit from them. Climate change is real whether or not climate scientists stand to benefit from doing climate research. Likewise, Weak Scientism is true or false whether or not I stand to benefit from defending it.

Image by Maia Valenzuela via Flickr / Creative Commons


Revisiting the Joyce Scholar

Wills (2018b, 36) returns to his example of the Joyce scholar as an example of non-scientific knowledge “that come[s] from an academic context.” As I have already pointed out in my previous reply (Mizrahi 2018b, 41-42), it appears that Wills fails to grasp the difference between Strong Scientism and Weak Scientism. Only Strong Scientism rules out knowledge that is not scientific. On Weak Scientism, there is both scientific and non-scientific knowledge. Consequently, examples of non-scientific knowledge from academic disciplines other than scientific ones do not constitute evidence against Weak Scientism.

Relatedly, Wills claims to have demonstrated that I vacillate between Strong Scientism and Weak Scientism and cites page 22 of his previous attack (Wills 2018a, 22). Here is how Wills (2018a, 22) argues that I vacillate between Strong Scientism and Weak Scientism:

Perhaps it is the awareness of such difficulties that leads Mizhari [sic] to his stance of ‘Weak Scientism’. It is not a stance he himself entirely sticks to. Some of his statements imply the strong version of scientism as when he tells us the [sic] knowledge is “the scholarly work or research produced in scientific fields of study, such as the natural sciences, as opposed to non-scientific fields, such as the humanities” [Mizrahi 2018a, 22].

However, the full passage Wills cites as evidence of my vacillation between Strong Scientism and Weak Scientism is from the conclusion of my second reply to Brown (Mizrahi 2018a) and it reads as follows:

At this point, I think it is quite clear that Brown and I are talking past each other on a couple of levels. First, I follow scientists (e.g., Weinberg 1994, 166-190) and philosophers (e.g., Haack 2007, 17-18 and Peels 2016, 2462) on both sides of the scientism debate in treating philosophy as an academic discipline or field of study, whereas Brown (2017b, 18) insists on thinking about philosophy as a personal activity of “individual intellectual progress.” Second, I follow scientists (e.g., Hawking and Mlodinow 2010, 5) and philosophers (e.g., Kidd 2016, 12-13 and Rosenberg 2011, 307) on both sides of the scientism debate in thinking about knowledge as the scholarly work or research produced in scientific fields of study, such as the natural sciences, as opposed to non-scientific fields of study, such as the humanities, whereas Brown insists on thinking about philosophical knowledge as personal knowledge.

Clearly, in this passage, I am talking about how ‘knowledge’ is understood in the scientism debate, specifically, that knowledge is the published research or scholarship produced by practitioners in academic disciplines (see also Mizrahi 2017a, 353). I am not saying that non-scientific disciplines do not produce knowledge. How anyone can interpret this passage as evidence of vacillation between Strong Scientism and Weak Scientism is truly beyond me. To me, this amounts to “contextomy” (McGlone 2005), and thus further evidence of arguing in bad faith on Wills’ part.

Wills also misunderstands, as in his previous attack on Weak Scientism, the epistemic properties of unity, coherence, simplicity, and testability, and their role in the context of hypothesis testing and theory choice. For he seems to think that “a masterful exposition of Portrait of the Artist as Young Man will show the unity, coherence and simplicity of the work’s design to the extent that these are artistically desired features” (Wills 2018b, 36). Here Wills is equivocating on the meaning of the terms ‘unity’, ‘coherence’, and ‘simplicity’.

There is a difference between the epistemic and the artistic senses of these terms. For example, when it comes to novels, such as A Portrait of the Artist as a Young Man, ‘simplicity’ may refer to literary style and language. When it comes to explanations or theories, however, ‘simplicity’ refers to the number of entities posited or assumptions taken for granted (Mizrahi 2016). Clearly, those are two different senses of ‘simplicity’ and Wills is equivocating on the two. As far as Weak Scientism is concerned, it is the epistemic sense of these terms that is of interest to us. Perhaps Wills fails to realize that Weak Scientism is an epistemic thesis because he has not read my (2017a), where I sketch the arguments for this thesis, or at least has not read it carefully enough despite claiming to be a practitioner of “close reading” (Wills 2018b, 34).

When he says that the Joyce scholar “tests [what he says] against the text,” Wills (2018b, 37) reveals his misunderstanding of testability once again. On Wills’ description of the work done by the Joyce scholar, what the Joyce scholar is doing amounts to accommodation, not novel prediction. I have already discussed this point in my previous reply to Wills (Mizrahi 2018b, 47) and I referred him to a paper in which I explain the difference between accommodation and novel prediction (Mizrahi 2012). But it appears that Wills has no interest in reading the works I cite in my replies to his attacks. Perhaps a Stanford Encyclopedia of Philosophy entry on the difference between accommodation and prediction would be more accessible (Barnes 2018).

Wills finds it difficult to see how the work of the Joyce scholar can be improved by drawing on the methods of the sciences. As Wills (2018b, 37) writes, “What in this hermeneutic process would be improved by ‘scientific method’ as Mizrahi describes it? Where does the Joyce scholar need to draw testable consequences from a novel hypothesis and test it with an experiment?” (original emphasis)

Because he sees no way the work of the Joyce scholar can benefit from the application of scientific methodologies, Wills thinks it follows that I have no choice but to say that the work of the Joyce scholar does not count as knowledge. As Wills (2018b, 37) writes, “It seems to me that only option for Mizrahi here is to deny that the Joyce scholar knows anything (beyond the bare factual information) and this means, alas, that his position once again collapses into strong scientism.”

It should be clear, however, that this is a non sequitur. Even if it is true that scientific methodologies are of no use to the Joyce scholar, it does not follow that the work of the Joyce scholar does not count as knowledge. Again, Weak Scientism is the view that scientific knowledge is better than non-scientific knowledge. This means that scientists produce knowledge using scientific methods, whereas non-scientists produce knowledge using non-scientific methods; it is just that scientists produce better knowledge, since scientific methods are superior to non-scientific methods in terms of the production of knowledge. Non-scientists can use scientific methods to produce knowledge in their fields of inquiry. But even if they do not use scientific methods in their work, on Weak Scientism, the research they produce still counts as knowledge.

Moreover, it is not the case that scientific methodologies are of no use to literary scholars. Apparently, Wills is unaware of the interdisciplinary field known as the “Digital Humanities,” in which the methods of computer science and data science are applied to the study of history, literature, and philosophy. Becoming familiar with work in the Digital Humanities will help Wills understand what it means to use scientific methods in a literary context. Since I have already discussed all of this in my original paper (Mizrahi 2017a) and in my replies to Brown (Mizrahi 2017b; 2018a), I take this as another reason to think that Wills has not read those papers (or at least has not read them carefully enough).

To me, this is a sign that he is not interested in engaging with Weak Scientism in good faith, especially since my (2017a) and my replies to Brown are themselves instances of the use of methods from data science in HPS, and since I have cited two additional examples of work I have done with Zoe Ashton that illustrate how philosophy can be improved by the introduction of scientific methods (Ashton and Mizrahi 2018a and 2018b). Again, it appears that Wills did not bother to read (let alone read closely) the works I cite in my replies to his attacks.

Toward the end of his discussion of the Joyce scholar, Wills (2018b, 37) says that using scientific methods “may mean better knowledge in many cases.” If he accepts that using scientific methods “may mean better knowledge in many cases” (Wills 2018b, 37), then Wills thereby accepts Weak Scientism as well. For to say that using scientific methods “may mean better knowledge in many cases” (Wills 2018b, 37) is to say that scientific knowledge is generally better than non-scientific knowledge.

Of course, there are instances of bad science, just as there are instances of bad scholarship in any academic discipline. Generally speaking, however, research done by scientists using the methods of science will likely be better (i.e., quantitatively better in terms of research output and research impact as well as qualitatively better in terms of explanatory, predictive, and instrumental success) than research done by non-scientists using non-scientific methods. That is Weak Scientism and, perhaps unwittingly, Wills seems to have accepted it by granting that using scientific methods “may mean better knowledge in many cases” (Wills 2018b, 37).

Inference to the Best Explanation

In my (2017a), as well as in my replies to Brown (Mizrahi 2017b; 2018a) and to Wills (Mizrahi 2018b), I have argued that Inference to the Best Explanation (IBE) is used in both scientific and non-scientific disciplines. As McCain and Poston (2017, 1) put it:

Explanatory reasoning is quite common. Not only are rigorous inferences to the best explanation (IBE) used pervasively in the sciences, explanatory reasoning is virtually ubiquitous in everyday life. It is not a stretch to say that we implement explanatory reasoning in a way that is “so routine and automatic that it easily goes unnoticed” [Douven 2017].

Once this point is acknowledged, it becomes clear that, when judged by the criteria of good explanations, such as unity, coherence, simplicity, and testability, scientific IBEs are generally better than non-scientific IBEs (Mizrahi 2017a, 360; Mizrahi 2017b, 19-20; Mizrahi 2018a, 17; Mizrahi 2018b, 46-47).

In response, Wills tells the story of his daughter, who once attempted to reason abductively in class. Wills (2018b, 38) begins by saying “Let me go back to my daughter,” even though it is the first time he mentions her in his (2018b), and then goes on to say that she once explained “how Scriabin created [the Prometheus] chord” to the satisfaction of her classmates.

But how is this supposed to be evidence against Weak Scientism? In my (2017a), I discuss how IBE is used in non-scientific disciplines and I even give an example from literature (Mizrahi 2017a, 361). Apparently, Wills is unaware of that, which I take to be another indication that he has not read the paper that defends the thesis he seeks to criticize. Again, to quote Wills (2018b, 38) himself, “All disciplines use abduction,” so to give an example of IBE from a non-scientific discipline does nothing at all to undermine Weak Scientism. According to Weak Scientism, all academic disciplines produce knowledge, and many of them do so by using IBE, it’s just that scientific IBEs are better than non-scientific IBEs.

Wills asserts without argument that, in non-scientific disciplines, there is no need to test explanations even when IBE is used to produce knowledge. As Wills (2018b, 38) writes, “All disciplines use abduction, true, but they do not all arrive at the ‘best explanation’ by the same procedures.” For Wills (2018b, 38), his daughter did not need to test her hypothesis about “how Scriabin created [the Prometheus] chord.” Wills does not tell us what the hypothesis in question actually is, so it is hard to tell whether it is testable or not. To claim that it doesn’t need to be tested, however, even when the argument for it is supposed to be an IBE, would be to misuse or abuse IBE rather than use it.

That is, if one were to reason to the best explanation without judging competing explanations by the criteria of unity, coherence, simplicity, testability, and the like, then one would not be warranted in concluding that one’s explanation is the best among those considered. That is just how IBE works (Psillos 2007). To say that an explanation is the best is to say that, among the competing explanations considered, it is the one that explains the most, leaves out the least, is consistent with background knowledge, is the least complicated, and yields independently testable predictions (Mizrahi 2017a, 360-362).

Wills (2018b, 39) seems to grant that “unity, simplicity and coherence” are good-making properties of explanations, but not testability. But why not testability? Why must an explanation be simple in order to be a good explanation, but not testable? Wills does not say. Again (Mizrahi 2018b, 47), I would urge Wills to consult logic and reasoning textbooks that discuss IBE. In those books, he will find that, in addition to unity, coherence, and simplicity, testability is one of the “characteristics that are necessary conditions for any explanation to qualify as being a reasonable empirical explanation” (Govier 2010, 300).

In other words, IBE is itself the procedure by which knowledge is produced. This procedure consists of “an inference from observations and a comparison between competing hypotheses to the conclusion that one of those hypotheses best explains the observations” (Mizrahi 2018c). For example (Sinnott-Armstrong and Fogelin 2015, 196):

  1. Observation: Your lock is broken and your valuables are missing.
  2. Explanation: The hypothesis that your house has been burglarized, combined with previously accepted facts and principles, provides a suitably strong explanation of observation 1.
  3. Comparison: No other hypothesis provides an explanation nearly as good as that in 2.
  4. Conclusion: Your house was burglarized.

As we can see, the procedure itself requires that we compare competing hypotheses. As I have mentioned already, “common standards for assessing explanations” (Sinnott-Armstrong and Fogelin 2015, 195) include unity, coherence, simplicity, and testability. This means that, if the hypothesis one favors as the best explanation for observation 1 cannot be tested, then one would not be justified in concluding that it is the best explanation, and hence probably true. That is simply how IBE works (Psillos 2007).

Contrary to what Wills (2018b, 39) seems to think, those who reason abductively without comparing competing explanations by the criteria of unity, coherence, simplicity, and testability are not using IBE, they are misusing or abusing it (Mizrahi 2017a, 360-361). To reason abductively without testing your competing explanations is as fallacious as reasoning inductively without making sure that your sample is representative of the target population (Govier 2010, 258-262).

Image by Specious Reasons via Flickr / Creative Commons


The Defense Rests

Fallacious reasoning, unfortunately, is what I have come to expect from Wills after reading and replying to his attacks on Weak Scientism. But this is forgivable, of course, given that we all fall prey to mistakes in reasoning on occasion. Even misspelling my last name several times (Wills 2018a, 18, 22, 24) is forgivable, so I accept Wills’ (2018b, 39) apology. What is unforgivable, however, is lazy scholarship and arguing in bad faith. As I have argued above, Wills is guilty of both because, despite claiming to be a practitioner of “close reading” (Wills 2018b, 34), he has not read the paper in which I defend the thesis he seeks to attack (Mizrahi 2017a), or any of the papers in my exchange with Brown (Mizrahi 2017b; 2018a), as evidenced by the fact that he does not cite them at all (not to mention citing and engaging with other works on scientism).

This explains why Wills completely misunderstands Weak Scientism and the arguments for the quantitative superiority (in terms of research output and research impact) as well as qualitative superiority (in terms of explanatory, predictive, and instrumental success) of scientific knowledge over non-scientific knowledge. For these reasons, this is my second and final response to Wills. I have neither the time nor the patience to engage with lazy scholarship that was produced in bad faith.

References


American Philosophical Association. “Minorities in Philosophy.” Data and Information on the Field of Philosophy. Accessed on August 13, 2018.

Ashton, Zoe and Moti Mizrahi. “Intuition Talk is Not Methodologically Cheap: Empirically Testing the ‘Received Wisdom’ About Armchair Philosophy.” Erkenntnis 83, no. 3 (2018a): 595-612.

Ashton, Zoe and Moti Mizrahi. “Show Me the Argument: Empirically Testing the Armchair Philosophy Picture.” Metaphilosophy 49, no. 1-2 (2018b): 58-70.

Barnes, Eric Christian. “Prediction versus Accommodation.” In The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), edited by E. N. Zalta. Accessed on August 14, 2018.

Botts, Tina Fernandes, Liam Kofi Bright, Myisha Cherry, Guntur Mallarangeng, and Quayshawn Spencer. “What Is the State of Blacks in Philosophy?” Critical Philosophy of Race 2, no. 2 (2014): 224-242.

Cherry, Myisha and Eric Schwitzgebel. “Like the Oscars, #PhilosophySoWhite.” Los Angeles Times, March 04, 2016. Accessed on August 13, 2018.

Douven, Igor. “Abduction.” In The Stanford Encyclopedia of Philosophy, edited by E. N. Zalta (Summer 2017 Edition). Accessed on August 14, 2018.

Dupré, John. “Against Scientific Imperialism.” PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994, no. 2 (1994): 374-381.

Ferrer, Amy. “What Can We Do about Diversity?” Leiter Reports: A Philosophy Blog, December 04, 2012. Accessed on August 13, 2018.

Govier, Trudy. A Practical Study of Argument. Seventh Edition. Belmont, CA: Wadsworth, 2010.

Graff, Gerald and Cathy Birkenstein. They Say/I Say: The Moves that Matter in Academic Writing. Fourth Edition. New York: W. W. Norton & Co., 2018.

Haack, Susan. Defending Science–within Reason: Between Scientism and Cynicism. New York: Prometheus Books, 2007.

Hawking, Stephen and Leonard Mlodinow. The Grand Design. New York: Bantam Books, 2010.

Kidd, I. J. “How Should Feyerabend Have Defended Astrology? A Reply to Pigliucci.” Social Epistemology Review and Reply Collective 5, no. 6 (2016): 11-17.

McCain, Kevin and Ted Poston. “Best Explanations: An Introduction.” In Best Explanations: New Essays on Inference to the Best Explanation, edited by K. McCain and T. Poston, 1-6. Oxford: Oxford University Press, 2017.

McGlone, Matthew S. “Contextomy: The Art of Quoting out of Context.” Media, Culture & Society 27, no. 4 (2005): 511-522.

Mizrahi, Moti. “Why the Ultimate Argument for Scientific Realism Ultimately Fails.” Studies in the History and Philosophy of Science 43, no. 1 (2012): 132-138.

Mizrahi, Moti. “Why Simpler Arguments are Better.” Argumentation 30, no. 3 (2016): 247-261.

Mizrahi, Moti. “What’s So Bad about Scientism?” Social Epistemology 31, no. 4 (2017a): 351-367.

Mizrahi, Moti. “In Defense of Weak Scientism: A Reply to Brown.” Social Epistemology Review and Reply Collective 6, no. 11 (2017b): 9-22.

Mizrahi, Moti. “More in Defense of Weak Scientism: Another Reply to Brown.” Social Epistemology Review and Reply Collective 7, no. 4 (2018a): 7-25.

Mizrahi, Moti. “Weak Scientism Defended Once More.” Social Epistemology Review and Reply Collective 7, no. 6 (2018b): 41-50.

Mizrahi, Moti. “The ‘Positive Argument’ for Constructive Empiricism and Inference to the Best Explanation.” Journal for General Philosophy of Science (2018c).

Offit, Paul A. Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure. New York: Columbia University Press, 2008.

Paxton, Molly, Carrie Figdor, and Valerie Tiberius. “Quantifying the Gender Gap: An Empirical Study of the Underrepresentation of Women in Philosophy.” Hypatia 27, no. 4 (2012): 949-957.

Peels, Rik. “The Empirical Case Against Introspection.” Philosophical Studies 173, no. 9 (2016): 2461-2485.

Peels, Rik. “A Conceptual Map of Scientism.” In Scientism: Prospects and Problems, edited by J. De Ridder, R. Peels, and R. Van Woudenberg, 28-56. New York: Oxford University Press, 2018.

Psillos, Stathis. “The Fine Structure of Inference to the Best Explanation.” Philosophy and Phenomenological Research 74, no. 2 (2007): 441-448.

Rosenberg, Alexander. The Atheist’s Guide to Reality: Enjoying Life Without Illusions. New York: W. W. Norton, 2011.

Scimago Journal & Country Rank. “Subject Bubble Chart.” SJR: Scimago Journal & Country Rank. Accessed on August 13, 2018.

Schwitzgebel, Eric, Linus Ta-Lun Huang, Andrew Higgins, Ivan Gonzalez-Cabrera. “The Insularity of Anglophone Philosophy: Quantitative Analyses.” Philosophical Papers 47, no. 1 (2018): 21-48.

Sinnott-Armstrong, Walter and Robert Fogelin. Understanding Arguments. Ninth Edition. Stamford, CT: Cengage Learning, 2015.

Stenmark, Mikael. “What is Scientism?” Religious Studies 33, no. 1 (1997): 15-32.

Van Norden, Bryan. “Western Philosophy is Racist.” Aeon, October 31, 2017a. Accessed on August 12, 2018.

Van Norden, Bryan. Taking Back Philosophy: A Multicultural Manifesto. New York: Columbia University Press, 2017b.

Van Vleet, Jacob E. Informal Logical Fallacies: A Brief Guide. Lanham, MD: University Press of America, 2011.

Walton, Douglas N. and Erik C. W. Krabbe. Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. Albany: State University of New York Press, 1995.

Weinberg, Steven. Dreams of a Final Theory: The Scientist’s Search for the Ultimate Laws of Nature. New York: Random House, 1994.

Wills, Bernard. “Why Mizrahi Needs to Replace Weak Scientism With an Even Weaker Scientism.” Social Epistemology Review and Reply Collective 7, no. 5 (2018a): 18-24.

Wills, Bernard. “On the Limits of any Scientism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018b): 34-39.

[1] I would like to thank Adam Riggio for inviting me to respond to Bernard Wills’ second attack on Weak Scientism.
