
Author Information: Brian Martin, University of Wollongong, bmartin@uow.edu.au.

Martin, Brian. “Bad Social Science.” Social Epistemology Review and Reply Collective 8, no. 3 (2019): 6-16.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-47a


People untrained in social science frameworks and methods often make assumptions, observations or conclusions about the social world.[1] For example, they might say, “President Trump is a psychopath,” thereby making a judgement about Trump’s mental state. The point here is not whether this judgement is right or wrong, but whether it is based on a careful study of Trump’s thoughts and behaviour drawing on relevant expertise.

In most cases, the claim “President Trump is a psychopath” is bad psychology, in the sense that it is a conclusion reached without the application of skills in psychological diagnosis expected among professional psychologists and psychiatrists.[2] Even a non-psychologist can recognise cruder forms of bad psychology: such judgements lack the application of standard tools in the field, such as comparing diagnostic criteria for psychopathy with Trump’s thought and behaviour.

“Bad social science” here refers to claims about society and social relationships that fall very far short of what social scientists consider good scholarship. This might be due to using false or misleading evidence, making faulty arguments, drawing unsupported conclusions or various other severe methodological, empirical or theoretical deficiencies.

In all sorts of public commentary and private conversations, examples of bad social science are legion. Instances are so common that it may seem pointless to take note of problems with ill-informed claims. However, there is value in a more systematic examination of different sorts of everyday bad social science. Such an examination can point to what is important in doing good social science and to weaknesses in assumptions, evidence and argumentation. It can also provide insights into how to defend and promote high-quality social analysis.

Here, I illustrate several facets of bad social science found in a specific public scientific controversy: the Australian vaccination debate. It is a public debate in which many partisans make claims about social dynamics, so there is ample material for analysis. In addition, because the debate is highly polarised, involves strong emotions and is extremely rancorous, it is to be expected that many deviations from calm, rational, polite discourse would be on display.

Another reason for selecting this topic is that I have been studying the debate for quite a number of years, and indeed have been drawn into the debate as a “captive of controversy.”[3] Several of the types of bad social science are found on both sides of the debate. Here, I focus mainly on pro-vaccination campaigners for reasons that will become clear.

In the following sections, I address several facets of bad social science: ad hominem attacks, undefined terms, limited and dubious evidence, misrepresentation, lack of attention to alternative viewpoints, lack of quality control, and unjustified conclusions. In each case, I provide examples from the Australian public vaccination debate, drawing on my experience. In a sense, selecting these topics represents an informal application of grounded theory: each of the shortcomings became evident to me through encountering numerous instances. After this, I note that there is a greater risk of deficient argumentation when defending orthodoxy.

With this background, I outline how studying bad social science can be of benefit in three ways: as a pointer to particular areas in which it is important to maintain high standards, as a toolkit for responding to attacks on social science, and as a reminder of the need to improve public understanding of social science approaches.

Ad Hominem

In the Australian vaccination debate, many partisans make adverse comments about opponents as a means of discrediting them. Social scientists recognise that ad hominem argumentation, namely attacking the person rather than dealing with what they say, is illegitimate for the purposes of making a case.

In the mid 1990s, Meryl Dorey founded the Australian Vaccination Network (AVN), which became the leading citizens’ group critical of government vaccination policy.[4] In 2009, a pro-vaccination citizens’ group called Stop the Australian Vaccination Network (SAVN) was set up with the stated aim of discrediting and shutting down the AVN.[5] SAVNers referred to Dorey with a wide range of epithets, for example “cunt.”[6]

What is interesting here is that some ad hominem attacks contain an implicit social analysis. One of them is “liar.” SAVNer Ken McLeod accused Dorey of being a liar, giving various examples.[7] However, some of these examples show only that Dorey persisted in making claims that SAVNers believed had been refuted.[8] This does not necessarily constitute lying, if lying is defined, as it often is by researchers in the area, as consciously intending to deceive.[9] To the extent that McLeod failed to relate his claims to research in the field, his application of the label “liar” constitutes bad social science.

Another term applied to vaccine critics is “babykiller.” In the Australian context, this word contains an implied social analysis, based on these premises: public questioning of vaccination policy causes some parents not to have their children vaccinated, leading to reduced vaccination rates and thence to more children dying of infectious diseases.

“Babykiller” also contains a moral judgement, namely that public critics of vaccination are culpable for the deaths of children from vaccination-preventable diseases. Few of those applying the term “babykiller” provide evidence to back up the implicit social analysis and judgement, so the label in these instances represents bad social science.

There are numerous other examples of ad hominem in the vaccination debate, on both sides. Some of them might be said to be primarily abuse, such as “cunt.” Others, though, contain an associated or implied social analysis, so to judge its quality it is necessary to assess whether the analysis conforms to conventions within social science.

Undefined Terms

In social science, it is normal to define key concepts, either by explicit definitions or descriptive accounts. The point is to provide clarity when the concept is used.

One of the terms used by vaccination supporters in the Australian debate is “anti-vaxxer.” Despite the ubiquity of this term in social and mass media, I have never seen it defined. This is significant because of the considerable ambiguity involved. “Anti-vaxxer” might refer to parents who refuse all vaccines for their children and themselves, parents who have their children receive some but not all recommended vaccines, parents who express reservations about vaccination, and/or campaigners who criticise vaccination policy.

The way “anti-vaxxer” is applied in practice tends to conflate these different meanings, with the implication that any criticism of vaccination puts you in the camp of those who refuse all vaccines. The label “anti-vaxxer” has been applied to me even though I do not have a strong view about vaccination.[10]

Because of the lack of a definition or clear meaning, the term “anti-vaxxer” is a form of ad hominem and also represents bad social science. Tellingly, few social scientists studying the vaccination issue use the term descriptively.

In their publications, social scientists may not define every term they use, because many meanings are commonly accepted in the field. For any widely used concept, though, some researchers nearly always pay close attention to its definition.[11] When such a concept remains ill-defined, this may be a sign of bad social science, especially when it is used as a pejorative label.

Limited and Dubious Evidence

Social scientists normally seek to provide strong evidence for their claims and restrict their claims to what the evidence can support. In public debates, this caution is often disregarded.

After SAVN was formed in 2009, one of its initial claims was that the AVN believed in a global conspiracy to implant mind-control chips via vaccinations. The key piece of evidence SAVNers provided to support this claim was that Meryl Dorey had given a link to the website of David Icke, who was known to have some weird beliefs, such as that the world is ruled by shape-shifting reptilian humanoids.

The weakness of this evidence should be apparent. Just because Icke has some weird beliefs does not mean every document on his website involves adherence to weird beliefs, and just because Dorey provided a link to a document does not prove she believes in everything in the document, much less subscribes to the beliefs of the owner of the website. Furthermore, Dorey denied believing in a mind-control global conspiracy.

Finally, even if Dorey had believed in this conspiracy, this does not mean other members of the AVN, or the AVN as an organisation, believed in the conspiracy. Although the evidence was exceedingly weak, several SAVNers, after I confronted them on the matter, initially refused to back down from their claims.[12]

Misrepresentation

When studying an issue, scholars assume that evidence, sources and other material should be represented fairly. For example, a quotation from an author should fairly present the author’s views, and not be used out of context to show something different from what the author intended.

Quite a few campaigners in the Australian vaccination debate use a different approach, which might be called “gotcha”. Quotes are used to expose writers as incompetent, misguided or deluded. Views of authors are misrepresented as a means of discrediting and dismissing them.

Judy Wilyman did her PhD under my supervision and was the subject of attack for years before she graduated. On 13 January 2016, just two days after her thesis was posted online, it was the subject of a front-page story in the daily newspaper The Australian. The journalist, despite having been pointed to a convenient summary of the thesis, did not mention any of its key ideas, instead claiming that it involved a conspiracy theory. Quotes from the thesis, taken out of context, were paraded as evidence of inadequacy.

This journalistic misrepresentation of Judy’s thesis was remarkably influential. It led to a cascade of hostile commentary, with hundreds of online comments on the numerous stories in The Australian, an online petition signed by thousands of people, and calls by scientists for Judy’s PhD to be revoked. In all the furore, not a single critic of her thesis posted a fair-minded summary of its contents.[13]

Alternative Viewpoints?

In high-quality social science, it is common to defend a viewpoint, but it is also considered appropriate to examine other perspectives. Indeed, when presenting a critique, it is usual to begin with a summary of the work to be criticised.

In the Australian vaccination debate, partisans do not even attempt to present the opposing side’s viewpoint: I have never seen any campaigner provide a summary of the evidence and arguments supporting their opposition’s position. Vaccination critics present evidence and arguments that cast doubt on the government’s vaccination policy, but never try to summarise the evidence and arguments supporting it. Likewise, backers of the government’s policy never try to summarise the case against it.

There are also some intermediate viewpoints, divergent from the entrenched positions in the public debate. For example, there are some commentators who support some vaccines but not all the government-recommended ones, or who support single vaccines rather than multiple vaccines. These non-standard positions are hardly ever discussed in public by pro-vaccination campaigners.[14] More commonly, they are implicitly subsumed by the label “anti-vaxxer.”

To find summaries of arguments and evidence on both sides, it is necessary to turn to work by social scientists, and even then only to the few of them who study the debate without arguing for one side or the other.[15]

Quality Control

When making a claim, it makes sense to check it. Social scientists commonly do this by checking sources and/or by relying on peer review. For contemporary issues, it’s often possible to check with the person who made the claim.

In the Australian vaccination debate, there seems to be little attempt to check claims, especially when they are derogatory claims about opponents. I can speak from personal experience. Quite a number of SAVNers have made comments about my work, for example in blogs. On not a single occasion has any one of them checked with me in advance of publication.

After SAVN was formed and I started writing about free speech in the Australian vaccination debate, I sent drafts of some of my papers to SAVNers for comment. Rather than using this opportunity to send me corrections and comments, the response was to attack me, including by making complaints to my university.[16] Interestingly, the only SAVNer to have been helpful in commenting on drafts is another academic.

Another example concerns Andrew Wakefield, a gastroenterologist who was lead author of a paper in The Lancet suggesting that a possible link between the MMR triple vaccine (measles, mumps and rubella) and autism should be investigated. The paper led to a storm of media attention.

Australian pro-vaccination campaigners, and quite a few media reports, refer to Wakefield’s alleged wrongdoings, treating them as discrediting any criticism of vaccination. Incorrect statements about Wakefield are commonplace, for example that he lost his medical licence due to scientific fraud. It is a simple matter to check the facts, but apparently few do this. Even fewer take the trouble to look into the claims and counterclaims about Wakefield and qualify their statements accordingly.[17]

Drawing Conclusions

Social scientists are trained to be cautious in drawing conclusions, ensuring that they do not go beyond what can be justified from data and arguments. In addition, it is standard to include a discussion of limitations. This sort of caution is often absent in public debates.

SAVNers have claimed great success in their campaign against the AVN, giving evidence that, for example, their efforts have prevented AVN talks from being held and reduced media coverage of vaccine critics. However, although AVN operations have undoubtedly been hampered, this does not necessarily show that vaccination rates have increased or, more importantly, that public health has benefited.[18]

Defending Orthodoxy

Many social scientists undertake research in controversial areas. Some support the dominant views, some support an unorthodox position and quite a few try not to take a stand. There is no inherent problem in supporting the orthodox position, but doing so brings greater risks to the quality of research.

Many SAVNers assume that vaccination is a scientific issue and that only people with scientific credentials, for example degrees or publications in virology or epidemiology, have any credibility. This was apparent in an article by philosopher Patrick Stokes entitled “No, you’re not entitled to your opinion” that received high praise from SAVNers.[19] It was also apparent in the attack on Judy Wilyman, whose PhD was criticised because it was not in a scientific field, and because she analysed scientific claims without being a scientist. The claim that only scientists can validly criticise vaccination is easily countered.[20] The problem for SAVNers is that they are less likely to question assumptions precisely because they support the dominant viewpoint.

There is a fascinating aspect to campaigners supporting orthodoxy: they themselves frequently make claims about vaccination although they are not scientists with relevant qualifications. They do not apply their own strictures about necessary expertise to themselves. This can be explained as deriving from “honour by association,” a process parallel to guilt by association but less noticed because it is so common. In honour by association, a person gains or assumes greater credibility by being associated with a prestigious person, group or view.

Someone without special expertise who asserts a claim that supports orthodoxy implicitly takes on the mantle of the experts on the side of orthodoxy. It is only those who challenge orthodoxy who are expected to have relevant credentials. There is nothing inherently wrong with supporting the orthodox view, but it does mean there is less pressure to examine assumptions.

My initial example of bad social science was calling Donald Trump a psychopath. Suppose you said Trump has narcissistic personality disorder. This might not seem to be bad social science because it accords with the views of many psychologists. However, agreeing with orthodoxy, without accompanying deployment of expertise, does not constitute good social science any more than disagreeing with orthodoxy.

Lessons

It is all too easy to identify examples of bad social science in popular commentary. They are commonplace in political campaigning and in everyday conversations.

Being attuned to common violations of good practice has three potential benefits: as a useful reminder to maintain high standards; as a toolkit for responding to attacks on social science; and as a guide to encouraging greater public awareness of social scientific thinking and methods.

Bad Social Science as a Reminder to Maintain High Standards

Most of the kinds of bad social science prevalent in the Australian vaccination debate seldom receive extended attention in the social science literature. For example, the widely used and cited textbook Social Research Methods does not even mention ad hominem, presumably because avoiding it is so basic that it need not be discussed.

The textbook describes five common errors in everyday thinking that social scientists should avoid: overgeneralisation, selective observation, premature closure, the halo effect and false consensus.[21] Some of these overlap with the shortcomings I’ve observed in the Australian vaccination debate. For example, the halo effect, in which prestigious sources are given more credibility, has affinities with honour by association.

The textbook The Craft of Research likewise does not mention ad hominem. In a final brief section on the ethics of research, there are a couple of points that can be applied to the vaccination debate. For example, ethical researchers “do not caricature or distort opposing views.” Another recommendation is that “When you acknowledge your readers’ alternative views, including their strongest objections and reservations,” you move towards more reliable knowledge and honour readers’ dignity.[22] Compared with the careful exposition of research methods in this and other texts, the shortcomings in public debates are seemingly so basic and obvious as to not warrant extended discussion.

No doubt many social scientists could point to the work of others in the field — or even their own — as failing to meet the highest standards. Looking at examples of bad social science can provide a reminder of what to avoid. For example, being aware of ad hominem argumentation can help in avoiding subtle denigration of authors and instead focusing entirely on their evidence and arguments. Being reminded of confirmation bias can encourage exploration of a greater diversity of viewpoints.

Malcolm Wright and Scott Armstrong examined 50 articles that cited a method in survey-based research that Armstrong had developed years earlier. They discovered that only one of the 50 studies had reported the method correctly. They recommend that researchers send drafts of their work to authors of cited studies — especially those on which the research depends most heavily — to ensure accuracy.[23] This is not a common practice in any field of scholarship but is worth considering in the interests of improving quality.

Bad Social Science as a Toolkit for Responding to Attacks

Alan Sokal wrote an intentionally incoherent article that was published in 1996 in the cultural studies journal Social Text. Numerous commentators lauded Sokal for carrying out an audacious prank that revealed the truth about cultural studies, namely that it was bunk. These commentators had not carried out relevant studies themselves, nor were most of them familiar with the field of cultural studies, including its frameworks, objects of study, methods of analysis, conclusions and exemplary pieces of scholarship.

To the extent that these commentators were uninformed about cultural studies yet willing to praise Sokal for his hoax, they were involved in a sort of bad social science. Perhaps they supported Sokal’s hoax because it agreed with their preconceived ideas, though investigation would be needed to assess this hypothesis.

Most responses to the hoax took a defensive line, for example arguing that Sokal’s conclusions were not justified. Only a few argued that interpreting the hoax as showing the vacuity of cultural studies was itself poor social science.[24] Sokal himself said it was inappropriate to draw general conclusions about cultural studies from the hoax,[25] so ironically it would have been possible to respond to attackers by quoting Sokal.

When social scientists come under attack, it can be useful to examine the evidence and methods used or cited by the attackers, and to point out, as is often the case, that they fail to measure up to standards in the field.

Encouraging Greater Public Awareness of Social Science Thinking and Methods

It is easy to communicate with like-minded scholars and commiserate about the ignorance of those who misunderstand or wilfully misrepresent social science. More challenging is to pay close attention to the characteristic ways in which people make assumptions and reason about the social world and how these ways often fall far short of the standards expected in scholarly circles.

By identifying common forms of bad social science, it may be possible to better design interventions into public discourse to encourage more rigorous thinking about evidence and argument, especially to counter spurious and ill-founded claims by partisans in public debates.

Conclusion

Social scientists, in looking at research contributions, usually focus on what is high quality: the deepest insights, the tightest arguments, the most comprehensive data, the most sophisticated analysis and the most elegant writing. This makes sense: top quality contributions offer worthwhile models to learn from and emulate.

Nevertheless, there is also a role for learning from poor quality contributions. It is instructive to look at public debates involving social issues in which people make judgements about the same sorts of matters that are investigated by social scientists, everything from criminal justice to social mores. Contributions to public debates can starkly show flaws in reasoning and the use of evidence. These flaws provide a useful reminder of things to avoid.

Observation of the Australian vaccination debate reveals several types of bad social science, including ad hominem attacks, failing to define terms, relying on dubious sources, failing to provide context, and not checking claims. The risk of succumbing to these shortcomings seems to be magnified when the orthodox viewpoint is being supported, because it is assumed to be correct and there is less likelihood of being held accountable by opponents.

There is something additional that social scientists can learn by studying contributions to public debates that have serious empirical and theoretical shortcomings. There are likely to be characteristic failures that occur repeatedly. These offer supplementary guidance for what to avoid. They also provide insight into what sort of training, for aspiring social scientists, is useful for moving from unreflective arguments to careful research.

There is also a challenge that few scholars have tackled. Given the prevalence of bad social science in many public debates, is it possible to intervene in these debates in a way that fosters greater appreciation for what is involved in good quality scholarship, and encourages campaigners to aspire to make sounder contributions?

Contact details: bmartin@uow.edu.au

References

Blume, Stuart. Immunization: How Vaccines Became Controversial. London: Reaktion Books, 2017.

Booth, Wayne C., Gregory G. Colomb, Joseph M. Williams, Joseph Bizup and William T. FitzGerald. The Craft of Research, fourth edition. Chicago: University of Chicago Press, 2016.

Collier, David, Fernando Daniel Hidalgo and Andra Olivia Maciuceanu. “Essentially contested concepts: debates and applications.” Journal of Political Ideologies, Vol. 11, No. 3, October 2006, pp. 211–246.

Ekman, Paul. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. New York: Norton, 1985.

Hilgartner, Stephen, “The Sokal affair in context,” Science, Technology, & Human Values, 22(4), Autumn 1997, pp. 506–522.

Lee, Bandy X. The Dangerous Case of Donald Trump: 27 Psychiatrists and Mental Health Experts Assess a President. New York: St. Martin’s Press, 2017.

Martin, Brian, and Florencia Peña Saint Martin. “El mobbing en la esfera pública: el fenómeno y sus características” [Public mobbing: a phenomenon and its features]. In Norma González González (Coordinadora), Organización social del trabajo en la posmodernidad: salud mental, ambientes laborales y vida cotidiana. Guadalajara, Jalisco, México: Prometeo Editores, 2014, pp. 91–114.

Martin, Brian. “Debating vaccination: understanding the attack on the Australian Vaccination Network.” Living Wisdom, no. 8, 2011, pp. 14–40.

Martin, Brian. “On the suppression of vaccination dissent.” Science & Engineering Ethics. Vol. 21, No. 1, 2015, pp. 143–157.

Martin, Brian. “Evidence-based campaigning.” Archives of Public Health, Vol. 76, article 54, 2018. https://doi.org/10.1186/s13690-018-0302-4.

Martin, Brian. Vaccination Panic in Australia. Sparsnäs, Sweden: Irene Publishing, 2018.

McLeod, Ken. “Meryl Dorey’s trouble with the truth, part 1: how Meryl Dorey lies, obfuscates, prevaricates, exaggerates, confabulates and confuses in promoting her anti-vaccination agenda.” 2010. http://www.scribd.com/doc/47704677/Meryl-Doreys-Trouble-With-the-Truth-Part-1.

Neuman, W. Lawrence. Social Research Methods: Qualitative and Quantitative Approaches, seventh edition. Boston, MA: Pearson, 2011.

Scott, Pam, Evelleen Richards and Brian Martin. “Captives of controversy: the myth of the neutral social researcher in contemporary scientific controversies.” Science, Technology, & Human Values, Vol. 15, No. 4, Fall 1990, pp. 474–494.

Sokal, Alan D. “What the Social Text affair does and does not prove.” In Noretta Koertge (ed.), A House Built on Sand: Exposing Postmodernist Myths about Science. New York: Oxford University Press, 1998, pp. 9–22.

Stokes, Patrick. “No, you’re not entitled to your opinion,” The Conversation, 5 October 2012, https://theconversation.com/no-youre-not-entitled-to-your-opinion-9978.

Wright, Malcolm, and J. Scott Armstrong. “The ombudsman: verification of citations: fawlty towers of knowledge?” Interfaces, Vol. 38, No. 2, March–April 2008, pp. 125–132.

[1] Thanks to Meryl Dorey, Stephen Hilgartner, Larry Neuman, Alan Sokal and Malcolm Wright for valuable feedback on drafts.

[2] For informed commentary on these issues, see Bandy X. Lee, The Dangerous Case of Donald Trump: 27 Psychiatrists and Mental Health Experts Assess a President (New York: St. Martin’s Press, 2017).

[3] Pam Scott, Evelleen Richards and Brian Martin, “Captives of controversy: the myth of the neutral social researcher in contemporary scientific controversies,” Science, Technology, & Human Values, Vol. 15, No. 4, Fall 1990, pp. 474–494.

[4] The AVN, forced to change its name in 2014, became the Australian Vaccination-skeptics Network. In 2018 it voluntarily changed its name to the Australian Vaccination-risks Network.

[5] In 2014, SAVN changed its name to Stop the Australian (Anti-)Vaccination Network.

[6] Brian Martin and Florencia Peña Saint Martin. El mobbing en la esfera pública: el fenómeno y sus características [Public mobbing: a phenomenon and its features]. In Norma González González (Coordinadora), Organización social del trabajo en la posmodernidad: salud mental, ambientes laborales y vida cotidiana (Guadalajara, Jalisco, México: Prometeo Editores, 2014), pp. 91-114.

[7] Ken McLeod, “Meryl Dorey’s trouble with the truth, part 1: how Meryl Dorey lies, obfuscates, prevaricates, exaggerates, confabulates and confuses in promoting her anti-vaccination agenda,” 2010, http://www.scribd.com/doc/47704677/Meryl-Doreys-Trouble-With-the-Truth-Part-1.

[8] Brian Martin, “Debating vaccination: understanding the attack on the Australian Vaccination Network,” Living Wisdom, no. 8, 2011, pp. 14–40, at pp. 28–30.

[9] E.g., Paul Ekman, Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage (New York: Norton, 1985).

[10] On Wikipedia I am categorised as an “anti-vaccination activist,” a term that is not defined on the entry listing those in the category. See Brian Martin, “Persistent bias on Wikipedia: methods and responses,” Social Science Computer Review, Vol. 36, No. 3, June 2018, pp. 379–388.

[11] See for example David Collier, Fernando Daniel Hidalgo and Andra Olivia Maciuceanu, “Essentially contested concepts: debates and applications,” Journal of Political Ideologies, 11(3), October 2006, pp. 211–246.

[12] Brian Martin. “Caught in the vaccination wars (part 3)”, 23 October 2012, http://www.bmartin.cc/pubs/12hpi-comments.html.

[13] The only possible exception to this statement is Michael Brull, “Anti-vaccination cranks versus academic freedom,” New Matilda, 7 February 2016, who reproduced my own summary of the key points in the thesis relevant to Australian government vaccination policy. For my responses to the attack, see http://www.bmartin.cc/pubs/controversy.html – Wilyman, for example “Defending university integrity,” International Journal for Educational Integrity, Vol. 13, No. 1, 2017, pp. 1–14.

[14] Brian Martin, Vaccination Panic in Australia (Sparsnäs, Sweden: Irene Publishing, 2018), pp. 15–24.

[15] E.g., Stuart Blume, Immunization: How Vaccines Became Controversial (London: Reaktion Books, 2017).

[16] Brian Martin. “Caught in the vaccination wars”, 28 April 2011, http://www.bmartin.cc/pubs/11savn/.

[17] For my own commentary on Wakefield, see “On the suppression of vaccination dissent,” Science & Engineering Ethics, Vol. 21, No. 1, 2015, pp. 143–157.

[18] Brian Martin. Evidence-based campaigning. Archives of Public Health, Vol. 76, article 54, 2018, https://doi.org/10.1186/s13690-018-0302-4.

[19] Patrick Stokes, “No, you’re not entitled to your opinion,” The Conversation, 5 October 2012, https://theconversation.com/no-youre-not-entitled-to-your-opinion-9978.

[20] Martin, Vaccination Panic in Australia, 292–304.

[21] W. Lawrence Neuman, Social Research Methods: Qualitative and Quantitative Approaches, seventh edition (Boston, MA: Pearson, 2011), 3–5.

[22] Wayne C. Booth, Gregory G. Colomb, Joseph M. Williams, Joseph Bizup and William T. FitzGerald, The Craft of Research, fourth edition (Chicago: University of Chicago Press, 2016), 272–273.

[23] Malcolm Wright and J. Scott Armstrong, “The ombudsman: verification of citations: fawlty towers of knowledge?” Interfaces, 38 (2), March-April 2008, 125–132.

[24] For a detailed articulation of this approach, see Stephen Hilgartner, “The Sokal affair in context,” Science, Technology, & Human Values, 22(4), Autumn 1997, pp. 506–522. Hilgartner gives numerous citations to expansive interpretations of the significance of the hoax.

[25] See for example Alan D. Sokal, “What the Social Text affair does and does not prove,” in Noretta Koertge (ed.), A House Built on Sand: Exposing Postmodernist Myths about Science (New York: Oxford University Press, 1998), pp. 9–22, at p. 11: “From the mere fact of publication of my parody, I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or the cultural studies of science — much less the sociology of science — is nonsense.”

Author Information: Kamili Posey, Kingsborough College, Kamili.Posey@kbcc.cuny.edu.

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-15.

Kamili Posey’s article was posted over two instalments; the pdf of the article includes the entire piece, and gives specific page references. Shortlink: https://wp.me/p1Bfg0-41k


In the previous piece, I outlined some concerns about philosophers, and particularly philosophers of social science, assuming the success of implicit interventions into implicit bias. Motivated by a pointed note by Jennifer Saul (2017), I aimed to briefly go through some of the models lauded as offering successful interventions and, in essence, “get out of the armchair.”

(IAT) Models and Egalitarian Goal Models

In this final piece, I go through the last two models, Glaser and Knowles’ (2007) and Blair et al.’s (2001) (IAT) models and Moskowitz and Li’s (2011) egalitarian goal model. I reiterate that this is not an exhaustive analysis of such models nor is it intended as a criticism of experiments pertaining to implicit bias. Mostly, I am concerned that the science is interesting but that the scientism – the application of tentative results to philosophical projects – is less so. It is from this point that I proceed.

Like Mendoza et al.’s (2010) implementation intentions, Glaser and Knowles’ (2007) implicit motivation to control prejudice, or (IMCP), is meant to capture implicit motivations that are capable of inhibiting automatic stereotype activation. Glaser and Knowles measure (IMCP) in terms of an implicit negative attitude toward prejudice, or (NAP), and an implicit belief that oneself is prejudiced, or (BOP). This is done by retooling the (IAT) to fit both (NAP) and (BOP): “To measure NAP we constructed an IAT that pairs the categories ‘prejudice’ and ‘tolerance’ with the categories ‘bad’ and ‘good.’ BOP was assessed with an IAT pairing ‘prejudiced’ and ‘tolerant’ with ‘me’ and ‘not me.’”[1]

Study participants were then administered the Shooter Task, the (IMCP) measures, and the Race Prejudice (IAT) and Race-Weapons Stereotype (RWS) tests in a fixed order. Glaser and Knowles predicted that (IMCP), as an implicit goal, “should be able to short-circuit the effect of implicit anti-Black stereotypes on automatic anti-Black behavior” in those high in (IMCP).[2] The results seemed to suggest that this was the case. Glaser and Knowles found that study participants who viewed prejudice as particularly bad “[showed] no relationship between implicit stereotypes and spontaneous behavior.”[3]

There are a few considerations missing from the evaluation of the study results. First, with regard to the Shooter Task, Glaser and Knowles (2007) found that “the interaction of target race by object type, reflecting the Shooter Bias, was not statistically significant.”[4] That is, the strength of the relationship that Correll et al. (2002) found between study participants and the (high) likelihood that they would “shoot” at black targets was not found in the present study. Additionally, they note that they “eliminated time pressure” from the task itself. Although it was not suggested that this impacted the usefulness of the measure of Shooter Bias, it is difficult to imagine that it did not do so. To this, they footnote the following caveat:

Variance in the degree and direction of the stereotype endorsement points to one reason for our failure to replicate Correll et al.’s (2002) typically robust Shooter Bias effect. That is, our sample appears to have held stereotypes linking Blacks and weapons/aggression/danger to a lesser extent than did Correll and colleagues’ participants. In Correll et al. (2002, 2003), participants one SD below the mean on the stereotype measure reported an anti-Black stereotype, whereas similarly low scorers on our RWS IAT evidenced a stronger association between Whites and weapons. Further, the adaptation of the Shooter Task reported here may have been less sensitive than the procedure developed by Correll and colleagues. In the service of shortening and simplifying the task, we used fewer trials, eliminated time pressure and rewards for speed and accuracy, and presented only one background per trial.[5]

Glaser and Knowles claimed that the interaction of the (RWS) with the Shooter Task results proved “significant.” However, if the Shooter Bias failed to materialize (in the standard Correll et al. way) with study participants, it is difficult to see how the (RWS) was measuring anything except itself, generally speaking. This is further complicated by the fact that the interaction between the Shooter Bias and the (RWS) revealed “a mild reverse stereotype associating Whites with weapons (d = -0.15) and a strong stereotype associating Blacks with weapons (d = 0.83), respectively.”[6]
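
A gloss on the notation (mine, not the study’s): the d values reported here are presumably Cohen’s d, the standardized difference between two group means,

$$ d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, $$

where $s_{\text{pooled}}$ is the pooled standard deviation of the two groups. By Cohen’s conventional benchmarks (roughly 0.2 small, 0.5 medium, 0.8 large), d = 0.83 marks a strong Blacks–weapons association, while d = −0.15 marks a weak association in the reverse direction.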

Recall that Glaser and Knowles (2007) aimed to show that participants high in (IMCP) would be able to inhibit implicit anti-black stereotypes and thus inhibit automatic anti-black behaviors. Using (NAP) and (BOP) as proxies for implicit control, participants high in (NAP) and moderate in (BOP) – as those with moderate (BOP) will be motivated to avoid bias – should show the weakest association between (RWS) and Shooter Bias. Instead, the lowest levels of Shooter Bias were seen in “low NAP, high BOP, and low RWS” study participants, that is, those who do not disapprove of prejudice, who would describe themselves as prejudiced, and who showed the lowest levels of (RWS).[7]

They noted that neither “NAP nor BOP alone was significantly related to the Shooter Bias,” but “the influence of RWS on Shooter Bias remained significant.”[8] In fact, greater bias was actually found at higher (NAP) and (BOP) levels.[9] This bias seemed to map onto the initial Shooter Task results. It is most likely that the (RWS) was the most important measure in this study, and that it assessed implicit bias rather than, as the study claimed, implicit motivation to control prejudice.

What Kind of Bias?

It is also not clear that the (RWS) was not capturing explicit bias instead of implicit bias in this study. At the point at which study participants were tasked with the (RWS), automatic stereotype activation may have been inhibited simply in virtue of study participants’ involvement in the Shooter Task and the (IAT) assessments regarding race-related prejudice. That is, race-sensitivity was brought to consciousness in the sequencing of the test process.

Although we cannot get into the heads of the study participants, this counter-explanation seems a compelling possibility: the sequential tasks involved in the study may have captured study participants’ ability to increase focus and conscious attention on the race-related (IAT) test. Additionally, it is possible that some study participants could both cue and follow their own conscious internal commands, “If I see a black face, I won’t judge!” Consider that this is exactly how implementation intentions work.

Consider that this is also how Armageddon chess and other speed strategy games work. In their (2008) follow-up study on (IMCP) and cognitive depletion, Park et al. retreat somewhat from their initial claims about the implicit nature of (IMCP):

We cannot state for certain that our measure of IMCP reflects a purely nonconscious construct, nor that differential speed to “shoot” Black armed men vs. White armed men in a computer simulation reflects purely automatic processes. Most likely, the underlying stereotypes, goals, and behavioral responses represent a blend of conscious and nonconscious influences…Based on the results of the present study and those of Glaser and Knowles (2008), it would be premature to conclude that IMCP is a purely and wholly automatic construct, meeting the “four horsemen” criteria (Bargh, 1990). Specifically, it is not yet clear whether high IMCP participants initiate control of prejudice without intention; whether implicit control of prejudice can itself be inhibited, if for some reason someone wanted to; nor whether IMCP-instigated control of spontaneous bias occurs without awareness.[10]

If the (IMCP) potentially measures low-level conscious attention, this makes the question of what implicit measurements actually measure in the context of sequential tasks all the more important. In the two final examples, Blair et al.’s (2001) study on the use of counterstereotype imagery and Moskowitz and Li’s (2011) study on the use of counterstereotype egalitarian goals, we are again confronted with the issue of sequencing. In the study by Moskowitz and Li, study participants were asked to write down an example of a time when “they failed to live up to the ideal specified by an egalitarian goal, and to do so by relaying an event relating to African American men.”[11]

They were then given a series of computerized LDTs (lexical decision tasks) and primes involving photographs of black and white faces and stereotypical and non-stereotypical attributes of black people (crime, lazy, stupid, nervous, indifferent, nosy). Over a series of four experiments, Moskowitz and Li found that when egalitarian goals were “accessible,” study participants were able to successfully generate stereotype inhibition. Blair et al. asked study participants to use counterstereotypical (CS) gender imagery over a series of five experiments, e.g., “Think of a strong, capable woman,” and then administered a series of implicit measures, including the (IAT).

Similar to Moskowitz and Li (2011), Blair et al. (2001) found that (CS) gender imagery was successful in reducing implicit gender stereotypes leaving “little doubt that the CS mental imagery per se was responsible for diminishing implicit stereotypes.”[12] In both cases, the study participants were explicitly called upon to focus their attention on experiences and imagery pertaining to negative stereotypes before the implicit measures, i.e., tasks, were administered. Again it is not clear that the implicit measures measured the supposed target.

In the case of Moskowitz and Li’s (2011) experiment, the study participants began by relating moments in their lives where they failed to live up to their goals. However, those goals can only be understood within a particular social and political framework in which holding negatively prejudicial beliefs about African-American men is often explicitly judged harshly, even if not implicitly so. Given this, we might assume that the study participants were compelled into a negative affective state. But does this matter? As suggested by the study by Monteith (1993), and the later study by Amodio et al. (2007), guilt can be a powerful tool.[13]

Questions of Guilt

If guilt was produced during the early stages of the experiment, it may also have contributed to the inhibition of stereotype activation. Moskowitz and Li (2011) noted that “during targeted questioning in the debriefing, no participants expressed any conscious intent to inhibit stereotypes on the task, nor saw any of the tasks performed during the computerized portion of the experiment as related to the egalitarian goals they had undermined earlier in the session.”[14]

But guilt does not have to be conscious for it to produce effects. The guilt produced by recalling a moment of negative bias could be part and parcel of a larger feeling of moral failure. Moskowitz and Li needed to adequately disambiguate competing implicit motivations for stereotype inhibition before arriving at a definitive conclusion. This, I think, is a limitation of the study.

However, the same case could be made for (CS) imagery. Blair et al. (2001) noted that it is, in fact, possible that they too have missed competing motivations and competing explanations for stereotype inhibition. In particular, they suggested that by emphasizing counterstereotyping the researchers “may have communicated the importance of avoiding stereotypes and increased their motivation to do so.”[15] Still, the researchers dismissed the possibility that this would lead to better (faster, more accurate) performance on the (IAT), but that merely asserts that the (IAT) must measure exactly what it claims to measure. Fast, accurate, and conscious measures are excluded from that claim. Complicated internal motivations are excluded from that claim.

But on what grounds? Consider Fiedler et al.’s (2006) argument that the (IAT) is susceptible to faking and strategic processing, or Brendl et al.’s (2001) argument that it is not possible to infer a single cause from (IAT) results, or Fazio and Olson’s (2003) claim that “the IAT has little to do with what is automatically activated in response to a given stimulus.”[16]

These studies call into question the claim that implicit measures like the (IAT) can measure implicit bias in the clear, problem-free manner that is often suggested in the literature. Implicit interventions into implicit bias that utilize the (IAT) are difficult to support for this reason. Implicit interventions that utilize sequential (IAT) tasks are also difficult to support for this reason. Of course, this is also a live debate, and the problems I have discussed here are far from the only ones that plague this type of research.[17]

That said, when it comes to this research we are too often left wondering whether the measure itself is measuring the right thing. Are we capturing implicit bias or some other socially generated phenomenon? Are the measured changes we see in study results reflecting the validity of the instrument or the cognitive maneuverings of study participants? These are all critical questions that need sussing out. Until they are answered, the target conclusion that implicit interventions will lead to reductions in real-world discrimination moves further out of reach.[18] We find support for this assessment in Forscher et al.’s (2018) meta-analysis of 492 implicit interventions:

We found little evidence that changes in implicit measures translated into changes in explicit measures and behavior, and we observed limitations in the evidence base for implicit malleability and change. These results produce a challenge for practitioners who seek to address problems that are presumed to be caused by automatically retrieved associations, as there was little evidence showing that change in implicit measures will result in changes for explicit measures or behavior…Our results suggest that current interventions that attempt to change implicit measures will not consistently change behavior in these domains. These results also produce a challenge for researchers who seek to understand the nature of human cognition because they raise new questions about the causal role of automatically retrieved associations…To better understand what the results mean, future research should innovate with more reliable and valid implicit, explicit, and behavioral tasks, intensive manipulations, longitudinal measurement of outcomes, heterogeneous samples, and diverse topics of study.[19]

Finally, what I take to be behind Alcoff’s (2010) critical question at the beginning of this piece is a kind of skepticism about how individuals can successfully tackle implicit bias through either explicit or implicit practices without the support of the social spaces, communities, and institutions that give shape to our social lives. Implicit bias is related to the culture one is in and the stereotypes it produces. So instead of insisting on changing people to reduce stereotyping, what if we insisted on changing the culture?

As Alcoff notes: “We must be willing to explore more mechanisms for redress, such as extensive educational reform, more serious projects of affirmative action, and curricular mandates that would help to correct the identity prejudices built up out of faulty narratives of history.”[20] This is an important point. It is a point that philosophers who work on implicit bias would do well to take seriously.

Science may not give us the way out of racism, sexism, and gender discrimination. At the moment, it may only give us tools for seeing ourselves a bit more clearly. Further claims about implicit interventions appear as willful scientism. They reinforce the belief that science can cure all of our social and political ills. But this is magical thinking.

Contact details: Kamili.Posey@kbcc.cuny.edu

References

Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, p. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

[2] Glaser, Jack and Knowles, Eric D. (2007), p. 167.

[3] Glaser, Jack and Knowles, Eric D. (2007), p. 170.

[4] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[5] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[6] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[7] Glaser, Jack and Knowles, Eric D. (2007), p. 169. Of this “rogue” group, Glaser and Knowles note: “This group had, on average, a negative RWS (i.e., rather than just a low bias toward Blacks, they tended to associate Whites more than Blacks with weapons; see footnote 4). If these reversed stereotypes are also uninhibited, they should yield reversed Shooter Bias, as observed here” (169).

[8] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[9] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[10] Sang Hee Park, Jack Glaser, and Eric D. Knowles. (2008). “Implicit Motivation to Control Prejudice Moderates the Effect of Cognitive Depletion on Unintended Discrimination,” in Social Cognition, Vol. 26, No. 4, p. 416.

[11] Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

[12] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

[13] Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

[14] Moskowitz, Gordon and Li, Peizhong (2011), p. 108.

[15] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001), p. 838.

[16] Fiedler, Klaus, Messner, Claude, and Bluemke, Matthias. (2006). “Unresolved problems with the ‘I’, the ‘A’, and the ‘T’: A logical and Psychometric Critique of the Implicit Association Test (IAT),” in European Review of Social Psychology, 17, pp. 74-147. Brendl, C. M., Markman, A. B., & Messner, C. (2001). “How Do Indirect Measures of Evaluation Work? Evaluating the Inference of Prejudice in the Implicit Association Test,” in Journal of Personality and Social Psychology, 81(5), pp. 760-773. Fazio, R. H., and Olson, M. A. (2003). “Implicit Measures in Social Cognition Research: Their Meaning and Uses,” in Annual Review of Psychology 54, pp. 297-327.

[17] There is significant debate over the issue of whether the implicit bias that (IAT) tests measure translate into real-world discriminatory behavior. This is a complex and compelling issue. It is also an issue that could render moot the (IAT) as an implicit measure of anything full stop. Anthony G. Greenwald, Mahzarin R. Banaji, and Brian A. Nosek (2015) write: “IAT measures have two properties that render them problematic to use to classify persons as likely to engage in discrimination. Those two properties are modest test–retest reliability (for the IAT, typically between r = .5 and r = .6; cf., Nosek et al., 2007) and small to moderate predictive validity effect sizes. Therefore, attempts to diagnostically use such measures for individuals risk undesirably high rates of erroneous classifications. These problems of limited test-retest reliability and small effect sizes are maximal when the sample consists of a single person (i.e., for individual diagnostic use), but they diminish substantially as sample size increases. Therefore, limited reliability and small to moderate effect sizes are not problematic in diagnosing system-level discrimination, for which analyses often involve large samples” (557). However, Oswald et al. (2013) argue that “IAT scores correlated strongly with measures of brain activity but relatively weakly with all other criterion measures in the race domain and weakly with all criterion measures in the ethnicity domain. IATs, whether they were designed to tap into implicit prejudice or implicit stereotypes, were typically poor predictors of the types of behavior, judgments, or decisions that have been studied as instances of discrimination, regardless of how subtle, spontaneous, controlled, or deliberate they were. Explicit measures of bias were also, on average, weak predictors of criteria in the studies covered by this meta-analysis, but explicit measures performed no worse than, and sometimes better than, the IATs for predictions of policy preferences, interpersonal behavior, person perceptions, reaction times, and microbehavior. Only for brain activity were correlations higher for IATs than for explicit measures…but few studies examined prediction of brain activity using explicit measures. Any distinction between the IATs and explicit measures is a distinction that makes little difference, because both of these means of measuring attitudes resulted in poor prediction of racial and ethnic discrimination” (182-183). For further details about this debate, see: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192 and Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

[18] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

[19] Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from https://doi.org/10.31234/osf.io/dv8tu.

[20] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Author Information: Moti Mizrahi, Florida Institute of Technology, mmizrahi@fit.edu

Mizrahi, Moti. “More in Defense of Weak Scientism: Another Reply to Brown.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 7-25.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W1


Image by eltpics via Flickr / Creative Commons

 

In my (2017a), I defend a view I call Weak Scientism, which is the view that knowledge produced by scientific disciplines is better than knowledge produced by non-scientific disciplines.[1] Scientific knowledge can be said to be quantitatively better than non-scientific knowledge insofar as scientific disciplines produce more impactful knowledge–in the form of scholarly publications–than non-scientific disciplines (as measured by research output and research impact). Scientific knowledge can be said to be qualitatively better than non-scientific knowledge insofar as such knowledge is explanatorily, instrumentally, and predictively more successful than non-scientific knowledge.

Brown (2017a) raises several objections against my defense of Weak Scientism and I have replied to his objections (Mizrahi 2017b), thereby showing again that Weak Scientism is a defensible view. Since then, Brown (2017b) has reiterated his objections in another reply on SERRC. Almost unchanged from his previous attack on Weak Scientism (Brown 2017a), Brown’s (2017b) objections are the following:

  1. Weak Scientism is not strong enough to count as scientism.
  2. Advocates of Strong Scientism should not endorse Weak Scientism.
  3. Weak Scientism does not show that philosophy is useless.
  4. My defense of Weak Scientism appeals to controversial philosophical assumptions.
  5. My defense of Weak Scientism is a philosophical argument.
  6. There is nothing wrong with persuasive definitions of scientism.

In what follows, I will respond to these objections, thereby showing once more that Weak Scientism is a defensible view. Since I have been asked to keep this as short as possible, however, I will try to focus on what I take to be new in Brown’s (2017b) latest attack on Weak Scientism.

Is Weak Scientism Strong Enough to Count as Scientism?

Brown (2017b) argues for (1) on the grounds that, on Weak Scientism, “philosophical knowledge may be nearly as valuable as scientific knowledge.” Brown (2017b, 4) goes on to characterize a view he labels “Scientism2,” which he admits is the same view as Strong Scientism, and says that “there is a huge logical gap between Strong Scientism (Scientism2) and Weak Scientism.”

As was the case the first time Brown raised this objection, it is not clear how it is supposed to show that Weak Scientism is not “really” a (weaker) version of scientism (Mizrahi 2017b, 10-11). Of course there is a logical gap between Strong Scientism and Weak Scientism; that is why I distinguish between these two epistemological views. If I am right, Strong Scientism is too strong to be a defensible version of scientism, whereas Weak Scientism is a defensible (weaker) version of scientism (Mizrahi 2017a, 353-354).

Of course Weak Scientism “leaves open the possibility that there is philosophical knowledge” (Brown 2017b, 5). If I am right, such philosophical knowledge would be inferior to scientific knowledge both quantitatively (in terms of research output and research impact) and qualitatively (in terms of explanatory, instrumental, and predictive success) (Mizrahi 2017a, 358).

Brown (2017b, 5) does try to offer a reason “for thinking it strange that Weak Scientism counts as a species of scientism” in his latest attack on Weak Scientism, which does not appear in his previous attack. He invites us to imagine a theist who believes that “modern science is the greatest new intellectual achievement since the fifteenth century” (emphasis in original). Brown then claims that this theist would be an advocate of Weak Scientism because Brown (2017b, 6) takes “modern science is the greatest new intellectual achievement since the fifteenth century” to be “(roughly) equivalent to Weak Scientism.” For Brown (2017b, 6), however, “it seems odd, to say the least, that [this theist] should count as an advocate (even roughly) of scientism.”

Unfortunately, Brown’s appeal to intuition is rather difficult to evaluate because his hypothetical case is under-described.[2] First, the key phrase, namely, “modern science is the greatest new intellectual achievement since the fifteenth century,” is vague in more ways than one. I have no idea what “greatest” is supposed to mean here. Greatest in what respects? What are the other “intellectual achievements” relative to which science is said to be “the greatest”?

Also, what does “intellectual achievement” mean here? There are multiple accounts and literary traditions in history and philosophy of science, science studies, and the like on what counts as “intellectual achievements” or progress in science (Mizrahi 2013b). Without a clear understanding of what these key phrases mean here, it is difficult to tell how Brown’s intuition about this hypothetical case is supposed to be a reason to think that Weak Scientism is not “really” a (weaker) version of scientism.

Toward the end of his discussion of (1), Brown says something that suggests he actually has an issue with the word ‘scientism’. Brown (2017b, 6) writes, “perhaps Mizrahi should coin a new word for the position with respect to scientific knowledge and non-scientific forms of academic knowledge he wants to talk about” (emphasis in original). It should be clear, of course, that it does not matter what label I use for the view that “Of all the knowledge we have, scientific knowledge is the best knowledge” (Mizrahi 2017a, 354; emphasis in original). What matters is the content of the view, not the label.

Whether Brown likes the label or not, Weak Scientism is a (weaker) version of scientism because it is the view that scientific ways of knowing are superior (in certain relevant respects) to non-scientific ways of knowing, whereas Strong Scientism is the view that scientific ways of knowing are the only ways of knowing. As I have pointed out in my previous reply to Brown, whether scientific ways of knowing are superior to non-scientific ways of knowing is essentially what the scientism debate is all about (Mizrahi 2017b, 13).

Before I conclude this discussion of (1), I would like to point out that Brown seems to have misunderstood Weak Scientism. He (2017b, 3) claims that “Weak Scientism is a normative and not a descriptive claim.” This is a mistake. As a thesis (Peels 2017, 11), Weak Scientism is a descriptive claim about scientific knowledge in comparison to non-scientific knowledge. This should be clear provided that we keep in mind what it means to say that scientific knowledge is better than non-scientific knowledge. As I have argued in my (2017a), to say that scientific knowledge is quantitatively better than non-scientific knowledge is to say that there is a lot more scientific knowledge than non-scientific knowledge (as measured by research output) and that the impact of scientific knowledge is greater than that of non-scientific knowledge (as measured by research impact).

To say that scientific knowledge is qualitatively better than non-scientific knowledge is to say that scientific knowledge is explanatorily, instrumentally, and predictively more successful than non-scientific knowledge. All these claims about the superiority of scientific knowledge to non-scientific knowledge are descriptive, not normative, claims. That is to say, Weak Scientism is the view that, as a matter of fact, knowledge produced by scientific fields of study is quantitatively (in terms of research output and research impact) and qualitatively (in terms of explanatory, instrumental, and predictive success) better than knowledge produced by non-scientific fields of study.

Of course, Weak Scientism does have some normative implications. For instance, if scientific knowledge is indeed better than non-scientific knowledge, then, other things being equal, we should give more evidential weight to scientific knowledge than to non-scientific knowledge. For example, suppose that I am considering whether to vaccinate my child or not. On the one hand, I have scientific knowledge in the form of results from clinical trials according to which MMR vaccines are generally safe and effective.

On the other hand, I have knowledge in the form of stories about children who were vaccinated and then began to display symptoms of autism. If Weak Scientism is true, and I want to make a decision based on the best available information, then I should give more evidential weight to the scientific knowledge about MMR vaccines than to the anecdotal knowledge about MMR vaccines simply because the former is scientific (i.e., knowledge obtained by means of the methods of science, such as clinical trials) and the latter is not.

Should Advocates of Strong Scientism Endorse Weak Scientism?

Brown (2017b, 7) argues for (2) on the grounds that “once the advocate of Strong Scientism sees that an advocate of Weak Scientism admits the possibility that there is real knowledge other than what is produced by the natural sciences […] the advocate of Strong Scientism, at least given their philosophical presuppositions, will reject Weak Scientism out of hand.” It is not clear which “philosophical presuppositions” Brown is talking about here. Brown quotes Rosenberg (2011, 20), who claims that physics tells us what reality is like, presumably as an example of a proponent of Strong Scientism who would not endorse Weak Scientism. But it is not clear why Brown thinks that Rosenberg would “reject Weak Scientism out of hand” (Brown 2017b, 7).

Like other proponents of scientism, Rosenberg should endorse Weak Scientism because, unlike Strong Scientism, Weak Scientism is a defensible view. Insofar as we should endorse the view that has the most evidence in its favor, Weak Scientism has more going for it than Strong Scientism does. For to show that Strong Scientism is true, one would have to show that no field of study other than scientific ones can produce knowledge. Of course, that is not easy to show. To show that Weak Scientism is true, one only needs to show that the knowledge produced in scientific fields of study is better (in certain relevant respects) than the knowledge produced in non-scientific fields.

That is precisely what I show in my (2017a). I argue that the knowledge produced in scientific fields is quantitatively better than the knowledge produced in non-scientific fields because there is a lot more scientific knowledge than non-scientific knowledge (as measured by research output) and the former has a greater impact than the latter (as measured by research impact). I also argue that the knowledge produced in scientific fields is qualitatively better than knowledge produced in non-scientific fields because it is more explanatorily, instrumentally, and predictively successful.

Contrary to what Brown (2017b, 7) seems to think, I do not have to show “that there is real knowledge other than scientific knowledge.” To defend Weak Scientism, all I have to show is that scientific knowledge is better (in certain relevant respects) than non-scientific knowledge. If anyone must argue for the claim that there is real knowledge other than scientific knowledge, it is Brown, for he wants to defend the value or usefulness of non-scientific knowledge, specifically, philosophical knowledge.

It is important to emphasize the point about the ways in which scientific knowledge is quantitatively and qualitatively better than non-scientific knowledge because it looks like Brown has confused the two. For he thinks that I justify my quantitative analysis of scholarly publications in scientific and non-scientific fields by “citing the precedent of epistemologists who often treat all items of knowledge as qualitatively the same” (Brown 2017b, 22; emphasis added).

Here Brown fails to carefully distinguish between my claim that scientific knowledge is quantitatively better than non-scientific knowledge and my claim that scientific knowledge is qualitatively better than non-scientific knowledge. For the purposes of a quantitative study of knowledge, information and data scientists can do precisely what epistemologists do and “abstract from various circumstances (by employing variables)” (Brown 2017b, 22) in order to determine which knowledge is quantitatively better.

How Is Weak Scientism Relevant to the Claim that Philosophy Is Useless?

Brown (2017b, 7-8) argues for (3) on the grounds that “Weak Scientism itself implies nothing about the degree to which philosophical knowledge is valuable or useful other than stating scientific knowledge is better than philosophical knowledge” (emphasis in original).

Strictly speaking, Brown is wrong about this because Weak Scientism does imply something about the degree to which scientific knowledge is better than philosophical knowledge. Recall that to say that scientific knowledge is quantitatively better than non-scientific knowledge is to say that scientific fields of study publish more research and that scientific research has greater impact than the research published in non-scientific fields of study.

Contrary to what Brown seems to think, we can say to what degree scientific research is superior to non-scientific research in terms of output and impact. That is precisely what bibliometric indicators, such as the h-index and related metrics, are for (Rousseau et al. 2018). Such bibliometric indicators allow us to say how many articles are published in a given field, how many of those published articles are cited, and how many times they are cited. For instance, according to Scimago Journal & Country Rank (2018), which contains data from the Scopus database, of the 3,815 Philosophy articles published in the United States in 2016-2017, approximately 14% are cited, and their h-index is approximately 160.

On the other hand, of the 24,378 Psychology articles published in the United States in 2016-2017, approximately 40% are cited, and their h-index is approximately 640. Contrary to what Brown seems to think, then, we can say to what degree research in Psychology is better than research in Philosophy in terms of research output (i.e., number of publications) and research impact (i.e., number of citations). We can use the same bibliometric indicators and metrics to compare research in other scientific and non-scientific fields of study.
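To make these two indicators concrete: the h-index of a set of articles is the largest number h such that at least h of the articles have been cited at least h times each. The following is a minimal sketch, in Python, of how an h-index and a cited-share figure of the sort quoted above can be computed; the citation counts are invented for illustration and are not the Scopus data.

```python
def h_index(citations):
    """Largest h such that at least h items have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def cited_share(citations):
    """Fraction of items that have been cited at least once."""
    return sum(1 for c in citations if c > 0) / len(citations)

# Invented citation counts for articles in two hypothetical fields:
field_a = [25, 12, 9, 7, 4, 3, 1, 0, 0, 0]
field_b = [3, 2, 1, 1, 0, 0, 0, 0, 0, 0]

print(h_index(field_a), cited_share(field_a))  # 4 0.7
print(h_index(field_b), cited_share(field_b))  # 2 0.4
```

On both measures, field_a outperforms field_b; that is the shape of the Philosophy/Psychology comparison reported above, only with real Scopus data in place of invented counts.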

As I have already said in my previous reply to Brown, “Weak Scientism does not entail that philosophy is useless” and “I have no interest in defending the charge that philosophy is useless” (Mizrahi 2017b, 11-12). So, I am not sure why Brown brings up (3) again. Since he insists, however, let me explain why philosophers who are concerned about the charge that philosophy is useless should engage with Weak Scientism as well.

Suppose that a foundation or agency is considering whether to give a substantial grant to one of two projects. The first project is that of a philosopher who will sit in her armchair and contemplate the nature of friendship.[3] The second project is that of a team of social scientists who will conduct a longitudinal study of the effects of friendship on human well-being (e.g., Yang et al. 2016).

If Weak Scientism is true, and the foundation or agency wants to fund the project that is likely to yield better results, then it should give the grant to the team of social scientists rather than to the armchair philosopher simply because the former’s project is scientific, whereas the latter’s is not. This is because the scientific project will more likely yield better knowledge than the non-scientific project will. In other words, unlike the project of the armchair philosopher, the scientific project will probably produce more research (i.e., more publications) that will have a greater impact (i.e., more citations) and the knowledge produced will be explanatorily, instrumentally, and predictively more successful than any knowledge that the philosopher’s project might produce.

This example should really hit home for Brown, since reading his latest attack on Weak Scientism gives one the impression that he thinks of philosophy as a personal, “self-improvement” kind of enterprise, rather than an academic discipline or field of study. For instance, he seems to be saying that philosophy is not in the business of producing “new knowledge” or making “discoveries” (Brown 2017b, 17).

Rather, Brown (2017b, 18) suggests that philosophy “is more about individual intellectual progress rather than collective intellectual progress.” Individual progress or self-improvement is great, of course, but I am not sure that it helps Brown’s case in defense of philosophy against what he sees as “the menace of scientism.” For this line of thinking simply adds fuel to the fire set by those who want to see philosophy burn. As I point out in my (2017a), scientists who dismiss philosophy do so because they find it academically useless.

For instance, Hawking and Mlodinow (2010, 5) write that ‘philosophy is dead’ because it ‘has not kept up with developments in science, particularly physics’ (emphasis added). Similarly, Weinberg (1994, 168) says that, as a working scientist, he ‘finds no help in professional philosophy’ (emphasis added). (Mizrahi 2017a, 356)

Likewise, Richard Feynman is rumored to have said that “philosophy of science is about as useful to scientists as ornithology is to birds” (Kitcher 1998, 32). It is clear, then, that what these scientists complain about is professional or academic philosophy. Accordingly, they would have no problem with anyone who wants to pursue philosophy for the sake of “individual intellectual progress.” But that is not the issue here. Rather, the issue is academic knowledge or research.

Does My Defense of Weak Scientism Appeal to Controversial Philosophical Assumptions?

Brown (2017b, 9) argues for (4) on the grounds that I assume that “we are supposed to privilege empirical (I read Mizrahi’s ‘empirical’ here as ‘experimental/scientific’) evidence over non-empirical evidence.” But that is question-begging, Brown claims, since he takes me to be assuming something like the following: “If the question of whether scientific knowledge is superior to [academic] non-scientific knowledge is a question that one can answer empirically, then, in order to pose a serious challenge to my [Mizrahi’s] defense of Weak Scientism, Brown must come up with more than mere ‘what ifs’” (Mizrahi 2017b, 10; quoted in Brown 2017b, 8).

This objection seems to involve a confusion about how defeasible reasoning and defeating evidence are supposed to work. Given that “a rebutting defeater is evidence which prevents E from justifying belief in H by supporting not-H in a more direct way” (Kelly 2016), claims about what is actual cannot be defeated by mere possibilities, since claims of the form “Possibly, p” do not prevent a piece of evidence from justifying belief in “Actually, p” by supporting “Actually, not-p” directly.

For example, the claim “Hillary Clinton could have been the 45th President of the United States” does not prevent my perceptual and testimonial evidence from justifying my belief in “Donald Trump is the 45th President of the United States,” since the former does not support “It is not the case that Donald Trump is the 45th President of the United States” in a direct way. In general, claims of the form “Possibly, p” are not rebutting defeaters against claims of the form “Actually, p.” Defeating evidence against claims of the form “Actually, p” must be about what is actual (or at least probable), not what is merely possible, in order to support “Actually, not-p” directly.
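The logical point can be put schematically, in standard modal notation (the formalization is my gloss, not notation used anywhere in the exchange with Brown):

```latex
% "Possibly, not-p" does not entail, and so does not directly support,
% "Actually, not-p"; hence it cannot rebut evidence for "Actually, p":
\[
\Diamond \neg p \;\not\vdash\; \neg p
\]
```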

For this reason, although “the production of some sorts of non-scientific knowledge work may be harder than the production of scientific knowledge” (Brown 2017b, 19), Brown gives no reasons to think that it is actually or probably harder, which is why this possibility does nothing to undermine the claim that scientific knowledge is actually better than non-scientific knowledge. Just as it is possible that philosophical knowledge is harder to produce than scientific knowledge, it is also possible that scientific knowledge is harder to produce than philosophical knowledge. It is also possible that scientific and non-scientific knowledge are equally hard to produce.

Similarly, the possibility that “a little knowledge about the noblest things is more desirable than a lot of knowledge about less noble things” (Brown 2017b, 19), whatever “noble” is supposed to mean here, does not prevent my bibliometric evidence (in terms of research output and research impact) from justifying the belief that scientific knowledge is better than non-scientific knowledge. Just as it is possible that philosophical knowledge is “nobler” (whatever that means) than scientific knowledge, it is also possible that scientific knowledge is “nobler” than philosophical knowledge or that they are equally “noble” (Mizrahi 2017b, 9-10).

In fact, even if Brown (2017a, 47) is right that “philosophy is harder than science” and that “knowing something about human persons–particularly qua embodied rational being–is a nobler piece of knowledge than knowing something about any non-rational object” (Brown 2017b, 21), whatever “noble” is supposed to mean here, it would still be the case that scientific fields produce more knowledge (as measured by research output), and more impactful knowledge (as measured by research impact), than non-scientific disciplines.

So, I am not sure why Brown keeps insisting on mentioning these mere possibilities. He also seems to forget that the natural and social sciences study human persons as well. Even if knowledge about human persons is “nobler” (whatever that means), there is a lot of scientific knowledge about human persons coming from scientific fields, such as anthropology, biology, genetics, medical science, neuroscience, physiology, psychology, and sociology, to name just a few.

One of the alleged “controversial philosophical assumptions” that my defense of Weak Scientism rests on, and that Brown (2017a) complains about the most in his previous attack on Weak Scientism, is my characterization of philosophy as the scholarly work that professional philosophers do. In my previous reply, I argue that Brown is not in a position to complain that this is a “controversial philosophical assumption,” since he rejects my characterization of philosophy as the scholarly work that professional philosophers produce, but he does not tell us what counts as philosophical (Mizrahi 2017b, 13). Well, it turns out that Brown does not reject my characterization of philosophy after all. For, after he was challenged to say what counts as philosophical, he came up with the following “sufficient condition for pieces of writing and discourse that count as philosophy” (Brown 2017b, 11):

(P) Those articles published in philosophical journals and what academics with a Ph.D. in philosophy teach in courses at public universities with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science (Brown 2017b, 11; emphasis added).

Clearly, this is my characterization of philosophy in terms of the scholarly work that professional philosophers produce. Brown simply adds teaching to it. Since he admits that “scientists teach students too” (Brown 2017b, 18), however, it is not clear how adding teaching to my characterization of philosophy is supposed to support his attack on Weak Scientism. In fact, it may actually undermine his attack on Weak Scientism, since there is a lot more teaching going on in STEM fields than in non-STEM fields.

According to data from the National Center for Education Statistics (2017), in the 2015-16 academic year, post-secondary institutions in the United States conferred only 10,157 Bachelor’s degrees in philosophy and religious studies compared to 113,749 Bachelor’s degrees in biological and biomedical sciences, 106,850 Bachelor’s degrees in engineering, and 117,440 in psychology. In general, in the 2015-16 academic year, 53.3% of the Bachelor’s degrees conferred by post-secondary institutions in the United States were degrees in STEM fields, whereas only 5.5% of conferred Bachelor’s degrees were in the humanities (Figure 1).

Figure 1. Bachelor’s degrees conferred by post-secondary institutions in the US, by field of study, 2015-2016 (Source: NCES)

 

Clearly, then, there is a lot more teaching going on in science than in philosophy (or even in the humanities in general), since a lot more students take science courses and graduate with degrees in scientific fields of study. So, even if Brown is right that we should include teaching in what counts as philosophy, it is still the case that scientific fields are quantitatively better than non-scientific fields.

Since Brown (2017b, 13) seems to agree that philosophy (at least in part) is the scholarly work that academic philosophers produce, it is peculiar that he complains, without argument, that “an understanding of philosophy and knowledge as operational is […] shallow insofar as philosophy and knowledge can’t fit into the narrow parameters of another empirical study.” Once Brown (2017b, 11) grants that “Those articles published in philosophical journals” count as philosophy, he thereby also grants that these journal articles can be studied empirically using the methods of bibliometrics, information science, or data science.

That is, Brown (2017b, 11) concedes that philosophy consists (at least in part) of “articles published in philosophical journals,” and so these articles can be compared to other articles published in science journals to determine research output, and they can also be compared to articles published in science journals in terms of citation counts to determine research impact. What exactly is “shallow” about that? Brown does not say.

A perhaps unintended consequence of Brown’s (P) is that the “great thinkers from the past” (Brown 2017b, 18), those that Brown (2017b, 13) likes to remind us “were not professional philosophers,” did not do philosophy, by Brown’s own lights. For “Socrates, Plato, Augustine, Descartes, Locke, and Hume” (Brown 2017b, 13) did not publish in philosophy journals, were not academics with a Ph.D. in philosophy, and did not teach courses at public universities “with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science” (Brown 2017b, 11).

Another peculiar thing about Brown’s (P) is the restriction of the philosophical to what is taught in public universities. What about community colleges and private universities? Is Brown suggesting that philosophy courses taught at private universities do not count as philosophy courses? That would be especially odd given that, at least according to The Philosophical Gourmet Report (Brogaard and Pynes 2018), the top-ranked philosophy programs in the United States are mostly located in private universities, such as New York University and Princeton University.

Is My Defense of Weak Scientism a Scientific or a Philosophical Argument?

Brown argues for (5) on the grounds that my (2017a) is published in a philosophy journal, namely, Social Epistemology, and so it is a piece of philosophical knowledge by my lights, since I count as philosophy the research articles that are published in philosophy journals.

Brown would be correct about this if Social Epistemology were a philosophy journal. But it is not. Social Epistemology: A Journal of Knowledge, Culture and Policy is an interdisciplinary journal. The journal’s “aim and scope” statement makes it clear that Social Epistemology is an interdisciplinary journal:

Social Epistemology provides a forum for philosophical and social scientific enquiry that incorporates the work of scholars from a variety of disciplines who share a concern with the production, assessment and validation of knowledge. The journal covers both empirical research into the origination and transmission of knowledge and normative considerations which arise as such research is implemented, serving as a guide for directing contemporary knowledge enterprises (Social Epistemology 2018).

The fact that Social Epistemology is an interdisciplinary journal, with contributions from “Philosophers, sociologists, psychologists, cultural historians, social studies of science researchers, [and] educators” (Social Epistemology 2018), would not surprise anyone who is familiar with the history of the journal. The founding editor of the journal is Steve Fuller, who was trained in an interdisciplinary field, namely, History and Philosophy of Science (HPS), and is currently the Auguste Comte Chair in Social Epistemology in the Department of Sociology at Warwick University. Brown (2017b, 15) would surely agree that sociology is not philosophy, given that, for him, “cataloguing what a certain group of people believes is sociology and not philosophy.” The current executive editor of the journal is James H. Collier, who is a professor of Science and Technology in Society at Virginia Tech, and who was trained in Science and Technology Studies (STS), which is an interdisciplinary field as well.

Brown asserts without argument that the methods of a scientific field of study, such as sociology, are different in kind from those of philosophy: “What I contend is that […] philosophical methods are different in kind from those of the experimental scientists [sciences?]” (Brown 2017b, 24). He then goes on to speculate about what it means to say that an explanation is testable (Brown 2017b, 25). What Brown comes up with is rather unclear to me. For instance, I have no idea what it means to evaluate an explanation by inductive generalization (Brown 2017b, 25).

Instead, Brown should have consulted any one of the logic and reasoning textbooks I keep referring to in my (2017a) and (2017b) to find out that it is generally accepted among philosophers that testability is among the good-making properties of explanations, philosophical and otherwise (see, e.g., Sinnott-Armstrong and Fogelin 2010, 257). As far as testability is concerned, to test an explanation or hypothesis is to determine “whether predictions that follow from it are true” (Salmon 2013, 255). In other words, “To say that a hypothesis is testable is at least to say that some prediction made on the basis of that hypothesis may confirm or disconfirm it” (Copi et al. 2011, 515).

For this reason, the analogy from Feser that Brown likes to invoke (Brown 2017a, 48), according to which “to compare the epistemic values of science and philosophy and fault philosophy for not being good at making testable predications [sic] is like comparing metal detectors and gardening tools and concluding gardening tools are not as good as metal detectors because gardening tools do not allow us to successfully detect for metal” (Brown 2017b, 25), is inapt.

It is not an apt analogy because, unlike metal detectors and gardening tools, which serve different purposes, both science and philosophy are in the business of explaining things. Indeed, Brown admits that, like good scientific explanations, “good philosophical theories explain things” (emphasis in original). In other words, Brown admits that both scientific and philosophical theories are instruments of explanation (unlike gardening and metal-detecting instruments). To provide good explanations, then, both scientific and philosophical theories must be testable (Mizrahi 2017b, 19-20).

What Is Wrong with Persuasive Definitions of Scientism?

Brown (2017b, 31) argues for (6) on the grounds that “persuasive definitions are [not] always dialectically pernicious.” He offers an argument whose conclusion is “abortion is murder” as an example of an argument for a persuasive definition of abortion. He then outlines an argument for a persuasive definition of scientism according to which “Weak Scientism is a view that has its advocates putting too high a value on scientific knowledge” (Brown 2017b, 32).

The problem, however, is that Brown is confounding arguments for a definition with the definition itself. Having an argument for a persuasive definition does not change the fact that it is a persuasive definition. To illustrate this point, let me give an example that I think Brown will appreciate. Suppose I define theism as an irrational belief in the existence of God. That is, “theism” means “an irrational belief in the existence of God.” I can also provide an argument for this definition:

P1: If it is irrational to have paradoxical beliefs and God is a paradoxical being, then theism is an irrational belief in the existence of God.

P2: It is irrational to have paradoxical beliefs and God is a paradoxical being (e.g., the omnipotence paradox).[4]

Therefore,

C: Theism is an irrational belief in the existence of God.
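For what it is worth, the argument is formally valid; here is a minimal sketch in Lean checking that the form, a modus ponens with a conjunctive antecedent, goes through (the propositional labels are mine, added purely for illustration):

```lean
-- P := "it is irrational to have paradoxical beliefs"
-- Q := "God is a paradoxical being"
-- R := "theism is an irrational belief in the existence of God"
-- From P1 : (P ∧ Q) → R and P2 : P ∧ Q, the conclusion R follows:
example (P Q R : Prop) (p1 : P ∧ Q → R) (p2 : P ∧ Q) : R := p1 p2
```

Validity, however, is beside the point here; the trouble with the definition is dialectical, not formal.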

But surely, theists will complain that my definition of theism is a “dialectically pernicious” persuasive definition. For it stacks the deck against theists. It states that theists are already making a mistake, by definition, simply by believing in the existence of God. Even though I have provided an argument for this persuasive definition of theism, my definition is still a persuasive definition of theism, and my argument is unlikely to convince anyone who doesn’t already think that theism is irrational. Indeed, Brown (2017b, 30) himself admits that much when he says “good luck with that project!” about trying to construct a sound argument for “abortion is murder.” I take this to mean that pro-choice advocates would find his argument for “abortion is murder” dialectically inert precisely because it defines abortion in a manner that transfers “emotive force” (Salmon 2013, 65), which they cannot accept.

Likewise, theists would find the argument above dialectically inert precisely because it defines theism in a manner that transfers “emotive force” (Salmon 2013, 65), which they cannot accept. In other words, Brown seems to agree that there are good dialectical reasons to avoid appealing to persuasive definitions. Therefore, like “abortion is murder,” “theism is an irrational belief in the existence of God,” and “‘Homosexual’ means ‘one who has an unnatural desire for those of the same sex’” (Salmon 2013, 65), “Weak Scientism is a view that has its advocates putting too high a value on scientific knowledge” (Brown 2017b, 32) is a “dialectically pernicious” persuasive definition (cf. Williams 2015, 14).

Like persuasive definitions in general, it “masquerades as an honest assignment of meaning to a term while condemning or blessing with approval the subject matter of the definiendum” (Hurley 2015, 101). As I have pointed out in my (2017a), the problem with such definitions is that they “are strategies consisting in presupposing an unaccepted definition, taking a new unknowable description of meaning as if it were commonly shared” (Macagno and Walton 2014, 205).

As for Brown’s argument for the persuasive definition of Weak Scientism, according to which it “is a view that has its advocates putting too high a value on scientific knowledge” (Brown 2017b, 32), a key premise in this argument is the claim that there is a piece of philosophical knowledge that is better than scientific knowledge. This is premise 36 in Brown’s argument:

Some philosophers qua philosophers know that (a) true friendship is a necessary condition for human flourishing and (b) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for true friendship and (c) (therefore) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for human flourishing (see, e.g., the arguments in Plato’s Gorgias) and knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge (see, e.g., St. Augustine’s Confessions, book five, chapters iii and iv) [assumption]

There is a lot to unpack here, but I will focus on what I take to be the points most relevant to the scientism debate. First, Brown assumes 36 without argument, but why think it is true? In particular, why think that (a), (b), and (c) count as philosophical knowledge? Brown says that philosophers know (a), (b), and (c) in virtue of being philosophers, but he does not tell us why that is the case.

After all, accounts of friendship, with lessons about the significance of friendship, predate philosophy (see, e.g., the friendship of Gilgamesh and Enkidu in The Epic of Gilgamesh). Did it really take Plato and Augustine to tell us about the significance of friendship? In fact, on Brown’s characterization of philosophy, namely, (P), (a), (b), and (c) do not count as philosophical knowledge at all, since Plato and Augustine did not publish in philosophy journals, were not academics with a Ph.D. in philosophy, and did not teach at public universities courses “with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science” (Brown 2017b, 11).

Second, some philosophers, like Epicurus, needed (and thought that others needed) friends to flourish, whereas others, like Diogenes of Sinope, needed no one. For Diogenes, friends would only interrupt his sunbathing (Arrian VII.2). My point is not simply that philosophers disagree about the value of friendship and human flourishing. Of course they disagree.[5]

Rather, my point is that, in order to establish general truths about human beings, such as “Human beings need friends to flourish,” one must employ the methods of science, such as randomization and sampling procedures, blinding protocols, methods of statistical analysis, and the like; otherwise, one would simply commit the fallacies of cherry-picking anecdotal evidence and hasty generalization (Salmon 2013, 149-151). After all, the claim “Some need friends to flourish” does not necessitate, or even make more probable, the truth of “Human beings need friends to flourish.”[6]
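The statistical point behind the hasty-generalization worry can be illustrated with a minimal simulation in Python; the population and its parameter are invented for the purpose of illustration, not a claim about any real population:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Invented population in which exactly 60% of individuals "need friends
# to flourish" (a made-up parameter, purely for illustration):
population = [1] * 600 + [0] * 400

def estimated_proportion(sample_size):
    """Estimate the population proportion from a simple random sample."""
    sample = random.sample(population, sample_size)
    return sum(sample) / sample_size

# A handful of anecdotes can land far from the true value (0.6), whereas
# larger random samples tend to concentrate around it:
for n in (3, 10, 100, 500):
    print(n, estimated_proportion(n))
```

Anecdotes about Epicurus or Diogenes are, in effect, samples of size one; nothing general about human flourishing follows from them.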

Third, why think that “knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge” (Brown 2017b, 32)? Better in what sense? Quantitatively? Qualitatively? Brown does not tell us. He simply declares it “self-evident” (Brown 2017b, 32). I take it that Brown would not want to argue that “knowledge concerning the necessary conditions of human flourishing” is better than scientific knowledge in the quantitative (i.e., in terms of research output and research impact) and qualitative (i.e., in terms of explanatory, instrumental, and predictive success) respects in which scientific knowledge is better than non-scientific knowledge, according to Weak Scientism.

If so, then in what sense exactly is “knowledge concerning the necessary conditions of human flourishing” (Brown 2017b, 32) supposed to be better than scientific knowledge? Brown simply assumes that it is, without argument and without telling us in what sense this knowledge is supposed to be “better than any sort of scientific knowledge” (Brown 2017b, 32).

Of course, philosophy does not have a monopoly on friendship and human flourishing as research topics. Psychologists and sociologists, among other scientists, work on friendship as well (see, e.g., Hojjat and Moyer 2017). To get an idea of how much research on friendship is done in scientific fields, such as psychology and sociology, and how much is done in philosophy, we can use a database like Web of Science.

Currently (03/29/2018), there are 12,334 records in Web of Science on the topic “friendship.” Only 76 of these records (0.61%) are from the Philosophy research area. Most of the records are from the Psychology (5,331 records) and Sociology (1,111) research areas (43.22% and 9%, respectively). As we can see from Figure 2, most of the research on friendship is done in scientific fields of study, such as psychology, sociology, and other social sciences.

Figure 2. Number of records on the topic “friendship” in Web of Science by research area (Source: Web of Science)
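The percentages just quoted can be recomputed directly from the record counts; here is a minimal sketch in Python, using the Web of Science counts as reported in the text:

```python
# Web of Science record counts on the topic "friendship" (as of 03/29/2018),
# taken from the figures quoted above; share of each research area in the total:
total_records = 12334
by_area = {"Philosophy": 76, "Psychology": 5331, "Sociology": 1111}

for area, count in by_area.items():
    share = 100 * count / total_records
    print(f"{area}: {count} records ({share:.1f}% of {total_records})")
# Philosophy: 76 records (0.6% of 12334)
# Psychology: 5331 records (43.2% of 12334)
# Sociology: 1111 records (9.0% of 12334)
```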

 

In terms of research impact, too, scientific knowledge about friendship is superior to philosophical knowledge about friendship. According to Web of Science, the average citations per year for Psychology research articles on the topic of friendship is 2826.11 (h-index is 148 and the average citations per item is 28.1), and the average citations per year for Sociology research articles on the topic of friendship is 644.10 (h-index is 86 and the average citations per item is 30.15), whereas the average citations per year for Philosophy research articles on friendship is 15.02 (h-index is 13 and the average citations per item is 8.11).

Quantitatively, then, psychological and sociological knowledge on friendship is better than philosophical knowledge in terms of research output and research impact. Both Psychology and Sociology produce significantly more research on friendship than Philosophy does, and the research they produce has significantly more impact (as measured by citation counts) than philosophical research on the same topic.

Qualitatively, too, psychological and sociological knowledge about friendship is better than philosophical knowledge about friendship. For, instead of rather vague statements about how “true friendship is a necessary condition for human flourishing” (Brown 2017b, 32) that are based mostly on armchair speculation, psychological and sociological research on friendship provides detailed explanations and accurate predictions about the effects of friendship (or lack thereof) on human well-being.

For instance, numerous studies provide evidence for the effects of friendship, or the lack of it, on physical well-being (see, e.g., Yang et al. 2016) as well as mental well-being (see, e.g., Cacioppo and Patrick 2008). Further studies provide explanations for the biological and genetic bases of these effects (Cole et al. 2011). This knowledge, in turn, informs interventions designed to help people deal with loneliness and social isolation (see, e.g., Masi et al. 2011).[7]

To sum up, Brown (2017b, 32) has given no reasons to think that “knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge.” He does not even tell us what “better” is supposed to mean here. He also ignores the fact that scientific fields of study, such as psychology and sociology, produce plenty of knowledge about human flourishing, both physical and mental well-being. In fact, as we have seen, science produces a lot more knowledge about topics related to human well-being, such as friendship, than philosophy does. For this reason, Brown (2017b, 32) has failed to show that “there is non-scientific form of knowledge better than scientific knowledge.”

Conclusion

At this point, I think it is quite clear that Brown and I are talking past each other on a couple of levels. First, I follow scientists (e.g., Weinberg 1994, 166-190) and philosophers (e.g., Haack 2007, 17-18 and Peels 2016, 2462) on both sides of the scientism debate in treating philosophy as an academic discipline or field of study, whereas Brown (2017b, 18) insists on thinking about philosophy as a personal activity of “individual intellectual progress.” Second, I follow scientists (e.g., Hawking and Mlodinow 2010, 5) and philosophers (e.g., Kidd 2016, 12-13 and Rosenberg 2011, 307) on both sides of the scientism debate in thinking about knowledge as the scholarly work or research produced in scientific fields of study, such as the natural sciences, as opposed to non-scientific fields of study, such as the humanities, whereas Brown insists on thinking about philosophical knowledge as personal knowledge.

To anyone who wishes to defend philosophy’s place in research universities alongside academic disciplines, such as history, linguistics, and physics, armed with this conception of philosophy as a “self-improvement” activity, I would use Brown’s (2017b, 30) words to say, “good luck with that project!” A much more promising strategy, I propose, is for philosophy to embrace scientific ways of knowing and for philosophers to incorporate scientific methods into their research.[8]

Contact details: mmizrahi@fit.edu

References

Arrian. “The Final Phase.” In Alexander the Great: Selections from Arrian, Diodorus, Plutarch, and Quintus Curtius, edited by J. Romm, translated by P. Mensch and J. Romm, 149-172. Indianapolis, IN: Hackett Publishing Company, Inc., 2005.

Ashton, Z., and M. Mizrahi. “Intuition Talk is Not Methodologically Cheap: Empirically Testing the “Received Wisdom” about Armchair Philosophy.” Erkenntnis (2017): DOI 10.1007/s10670-017-9904-4.

Ashton, Z., and M. Mizrahi. “Show Me the Argument: Empirically Testing the Armchair Philosophy Picture.” Metaphilosophy 49, no. 1-2 (2018): 58-70.

Brogaard, B., and C. A. Pynes (eds.). “Overall Rankings.” The Philosophical Gourmet Report. Wiley Blackwell, 2018. Available at http://34.239.13.205/index.php/overall-rankings/.

Brown, C. M. “Some Objections to Moti Mizrahi’s ‘What’s So Bad about Scientism?’.” Social Epistemology Review and Reply Collective 6, no. 8 (2017a): 42-54.

Brown, C. M. “Defending Some Objections to Moti Mizrahi’s Arguments for Weak Scientism.” Social Epistemology Review and Reply Collective 7, no. 2 (2017b): 1-35.

Cacioppo, J. T., and W. Patrick. Loneliness: Human Nature and the Need for Social Connection. New York: W. W. Norton & Co., 2008.

Cole, S. W., L. C. Hawkley, J. M. G. Arevalo, and J. T. Cacioppo. “Transcript Origin Analysis Identifies Antigen-Presenting Cells as Primary Targets of Socially Regulated Gene Expression in Leukocytes.” Proceedings of the National Academy of Sciences 108, no. 7 (2011): 3080-3085.

Copi, I. M., C. Cohen, and K. McMahon. Introduction to Logic. Fourteenth Edition. New York: Prentice Hall, 2011.

Haack, S. Defending Science–within Reason: Between Scientism and Cynicism. New York: Prometheus Books, 2007.

Hawking, S., and L. Mlodinow. The Grand Design. New York: Bantam Books, 2010.

Hojjat, M., and A. Moyer (eds.). The Psychology of Friendship. New York: Oxford University Press, 2017.

Hurley, P. J. A Concise Introduction to Logic. Twelfth Edition. Stamford, CT: Cengage Learning, 2015.

Kelly, T. “Evidence.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). https://plato.stanford.edu/archives/win2016/entries/evidence/.

Kidd, I. J. “How Should Feyerabend Have Defended Astrology? A Reply to Pigliucci.” Social Epistemology Review and Reply Collective 5 (2016): 11–17.

Kitcher, P. “A Plea for Science Studies.” In A House Built on Sand: Exposing Postmodernist Myths about Science, edited by N. Koertge, 32–55. New York: Oxford University Press, 1998.

Lewis, C. S. The Four Loves. New York: Harcourt Brace & Co., 1960.

Macagno, F., and D. Walton. Emotive Language in Argumentation. New York: Cambridge University Press, 2014.

Masi, C. M., H. Chen, L. C. Hawkley, and J. T. Cacioppo. “A Meta-Analysis of Interventions to Reduce Loneliness.” Personality and Social Psychology Review 15, no. 3 (2011): 219-266.

Mizrahi, M. “Intuition Mongering.” The Reasoner 6, no. 11 (2012): 169-170.

Mizrahi, M. “More Intuition Mongering.” The Reasoner 7, no. 1 (2013a): 5-6.

Mizrahi, M. “What is Scientific Progress? Lessons from Scientific Practice.” Journal for General Philosophy of Science 44, no. 2 (2013b): 375-390.

Mizrahi, M. “New Puzzles about Divine Attributes.” European Journal for Philosophy of Religion 5, no. 2 (2013c): 147-157.

Mizrahi, M. “The Pessimistic Induction: A Bad Argument Gone Too Far.” Synthese 190, no. 15 (2013d): 3209-3226.

Mizrahi, M. “Does the Method of Cases Rest on a Mistake?” Review of Philosophy and Psychology 5, no. 2 (2014): 183-197.

Mizrahi, M. “On Appeals to Intuition: A Reply to Muñoz-Suárez.” The Reasoner 9, no. 2 (2015a): 12-13.

Mizrahi, M. “Don’t Believe the Hype: Why Should Philosophical Theories Yield to Intuitions?” Teorema: International Journal of Philosophy 34, no. 3 (2015b): 141-158.

Mizrahi, M. “Historical Inductions: New Cherries, Same Old Cherry-Picking.” International Studies in the Philosophy of Science 29, no. 2 (2015c): 129-148.

Mizrahi, M. “Three Arguments against the Expertise Defense.” Metaphilosophy 46, no. 1 (2015d): 52-64.

Mizrahi, M. “The History of Science as a Graveyard of Theories: A Philosophers’ Myth?” International Studies in the Philosophy of Science 30, no. 3 (2016): 263-278.

Mizrahi, M. “What’s So Bad about Scientism?” Social Epistemology 31, no. 4 (2017a): 351-367.

Mizrahi, M. “In Defense of Weak Scientism: A Reply to Brown.” Social Epistemology Review and Reply Collective 6, no. 11 (2017b): 9-22.

Mizrahi, M. “Introduction.” In The Kuhnian Image of Science: Time for a Decisive Transformation? Edited by M. Mizrahi, 1-22. London: Rowman & Littlefield, 2017c.

National Center for Education Statistics. “Bachelor’s degrees conferred by postsecondary institutions, by field of study: Selected years, 1970-71 through 2015-16.” Digest of Education Statistics (2017). https://nces.ed.gov/programs/digest/d17/tables/dt17_322.10.asp?current=yes.

Peels, R. “The Empirical Case Against Introspection.” Philosophical Studies 173, no. 9 (2016): 2461-2485.

Peels, R. “Ten Reasons to Embrace Scientism.” Studies in History and Philosophy of Science Part A 63 (2017): 11-21.

Rosenberg, A. The Atheist’s Guide to Reality: Enjoying Life Without Illusions. New York: W. W. Norton, 2011.

Rousseau, R., L. Egghe, and R. Guns. Becoming Metric-Wise: A Bibliometric Guide for Researchers. Cambridge, MA: Elsevier, 2018.

Salmon, M. H. Introduction to Logic and Critical Thinking. Sixth Edition. Boston, MA: Wadsworth, 2013.

Scimago Journal & Country Rank. “Subject Bubble Chart.” SJR: Scimago Journal & Country Rank. Accessed on April 3, 2018. http://www.scimagojr.com/mapgen.php?maptype=bc&country=US&y=citd.

Sinnott-Armstrong, W., and R. J. Fogelin. Understanding Arguments: An Introduction to Informal Logic. Eighth Edition. Belmont, CA: Wadsworth Cengage Learning, 2010.

Social Epistemology. “Aims and Scope.” Social Epistemology: A Journal of Knowledge, Culture and Policy (2018). https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=tsep20.

Weinberg, S. Dreams of a Final Theory: The Scientist’s Search for the Ultimate Laws of Nature. New York: Random House, 1994.

Williams, R. N. “Introduction.” In Scientism: The New Orthodoxy, edited by R. N. Williams and D. N. Robinson, 1-22. New York: Bloomsbury Academic, 2015.

Yang, C. Y., C. Boen, K. Gerken, T. Li, K. Schorpp, and K. M. Harris. “Social Relationships and Physiological Determinants of Longevity Across the Human Life Span.” Proceedings of the National Academy of Sciences 113, no. 3 (2016): 578-583.

[1] I thank Adam Riggio for inviting me to respond to Brown’s second attack on Weak Scientism.

[2] On why appeals to intuition are bad arguments, see Mizrahi (2012), (2013a), (2014), (2015a), (2015b), and (2015d).

[3] I use friendship as an example here because Brown (2017b, 31) uses it as an example of philosophical knowledge. I will say more about that in Section 6.

[4] For more on paradoxes involving the divine attributes, see Mizrahi (2013c).

[5] “Friendship is unnecessary, like philosophy, like art, like the universe itself (for God did not need to create)” (Lewis 1960, 71).

[6] On fallacious inductive reasoning in philosophy, see Mizrahi (2013d), (2015c), (2016), and (2017c).

[7] See also “The Friendship Bench” project: https://www.friendshipbenchzimbabwe.org/.

[8] For recent examples, see Ashton and Mizrahi (2017) and (2018).

Author Information: Johannes Persson, Lund University, Johannes.Persson@fil.lu.se

Please cite as:

Persson, Johannes. 2012. Social mechanisms and explaining how: A reply to Kimberly Chuang. Social Epistemology Review and Reply Collective 1 (9): 37-41

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-qS


Kimberly Chuang’s detailed and helpful reply to my article (2012a) concerns Jon Elster’s struggle to develop a mechanistic account that sheds light on explanation in social science. I argue that a problem exists with Elster’s current conception of mechanistic explanation in social contexts. Chuang (2012) defends Elster’s conception against my critique. I still believe I have identified a problem with that conception. In this reply I want to briefly recapitulate Elster’s idea, as I understand it, and then use some of Chuang’s critical points to advance the position I advocate.

1. Social explanations and Elster’s mechanistic surrogate for covering law explanations

Author Information: Kimberly Chuang, University of Michigan, kimberly.chuang@gmail.com

Chuang, Kimberly. 2012. In defense of Elster’s mechanisms. Social Epistemology Review and Reply Collective 1 (9): 1-19

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-pU

Please refer to:

Abstract

Mechanisms are a species of causal explanation that apply in situations where there is unavoidable indeterminacy. In this paper, I defend Jon Elster’s account of mechanisms against Johannes Persson’s recently published critiques. Persson claims to have identified a dilemma to which Elster is committed. Elster stipulates that mechanisms must be indeterminate in either their triggering conditions or their consequences. Persson argues that we can resolve the indeterminacies in certain mechanisms. Upon doing so, we no longer, by definition, have mechanisms. At the same time, these resolved mechanisms remain only explanatorily local, and so fall short of the explanatory strength of laws. Persson concludes that by Elster’s account, eliminating the indeterminacies of mechanisms actually leaves us at an explanatory deficit: we are left with something that is no longer a mechanism, but that still falls short of law-like explanatory strength. I counter that in his argument, Persson has overlooked the distinction between improved explanatory strength in a purely local sense and improved explanatory strength in a generalized sense. In addition, Persson has also overlooked the distinction between individual instances, or applications of mechanisms, and mechanisms themselves. I conclude that Persson has not, in fact, discovered any dilemma in Elster. Persson’s argument occurs at the level of mere applications of mechanisms. His challenges to Elster pertain to improved explanatory strength in a purely local sense. What he needed to have done, in order to complete his case against Elster, was show that the alleged shortcomings of Elster’s mechanistic account occurred at the level of mechanisms themselves, and pertained to improved explanatory strength in a generalized sense.
